| url | text | date | metadata |
|---|---|---|---|
https://brilliant.org/problems/ionic-molarity/ | Ionic molarity
In a beaker containing $$500\text{ ml}$$ of $$0.12\text{ M}$$ aqueous $$\ce{Fe(NO3)3}$$, $$100\text{ ml}$$ of $$1\text{ M}$$ aqueous $$\ce{FeCl3}$$ is added. Find the molarity of the cations in the mixture.
Assume that the contents of the beaker do not react.
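The arithmetic can be sanity-checked with a short script (a sketch only; it assumes additive volumes, no reaction as stated, and that the only cations are the Fe³⁺ contributed by both salts):

```python
# Both salts contribute one Fe^3+ cation per formula unit.
v1, c1 = 0.500, 0.12   # L and mol/L of the Fe(NO3)3 solution
v2, c2 = 0.100, 1.00   # L and mol/L of the FeCl3 solution
fe_moles = v1 * c1 + v2 * c2            # 0.06 + 0.10 = 0.16 mol of Fe^3+
cation_molarity = fe_moles / (v1 + v2)  # 0.16 mol / 0.600 L
print(round(cation_molarity, 4))        # 0.2667
```

So the cation molarity of the mixture is 0.16 mol / 0.600 L ≈ 0.267 M.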
| 2017-03-31 00:50:39 |
https://frenchmaidtv.com/ttkjn8/85ed59-pascal%27s-triangle-100th-row | Every row of Pascal's triangle is symmetric. Example: Input: k = 3. Return: [1,3,3,1]. NOTE: k is 0-based. I need to find out the number of entries which are not divisible by a number x in the 100th row of Pascal's triangle. There are eight odd numbers in the 100th row of Pascal's triangle, 89 numbers that are divisible by 3, and 96 numbers that are divisible by 5. The 100th row has 101 columns (numbered 0 through 100); each entry in the row is a binomial coefficient C(100, n). Notice that we started out with a number that had one factor of three; after that we kept multiplying and dividing by numbers until we got to a number which had three as a factor and divided it out. But if we go on, we multiply by another factor of three at C(6,4) and get another two multiples, until we divide by six in C(6,6) and lose our factor again. THEOREM: The number of odd entries in row N of Pascal's Triangle is 2 raised to the number of 1's in the binary expansion of N. Example: Since 83 = 64 + 16 + 2 + 1 has binary expansion 1010011, row 83 has 2^4 = 16 odd numbers. Daniel has been exploring the relationship between Pascal's triangle and the binomial expansion. Simplify C(n, n-1). When all the odd integers in Pascal's triangle are highlighted (black) and the remaining evens are left blank (white), one of many patterns in Pascal's triangle is displayed. To solve this, count the number of times the factor in question (3 or 5) occurs in the numerator and denominator of the quotient: C(100,n) = [100·99·98···(101−n)] / [1·2·3···n].
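The three counts quoted above (8 odd entries, 89 divisible by 3, 96 divisible by 5) are easy to confirm by brute force; a short Python sketch (Python chosen just for brevity, it is not part of the original page):

```python
from math import comb

row = [comb(100, n) for n in range(101)]   # the 100th row: 101 entries
odd  = sum(c % 2 != 0 for c in row)
div3 = sum(c % 3 == 0 for c in row)
div5 = sum(c % 5 == 0 for c in row)

# Theorem: odd entries in row N = 2**(number of 1's in N's binary expansion).
assert odd == 2 ** bin(100).count("1")     # 100 = 1100100b, so 2**3 = 8
print(odd, div3, div5)                     # 8 89 96
```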
Of course, one way to get these answers is to write out the 100th row of Pascal's triangle, divide by 2, 3, or 5, and count (this is the basic idea behind the geometric approach). Now do each in the 100th row, and you have your answer. The top row is numbered as n = 0, and the entries in each row are numbered from the left beginning with k = 0. Below is an example of Pascal's triangle with 11 rows:

0th row: 1
1st row: 1 1
2nd row: 1 2 1
3rd row: 1 3 3 1
4th row: 1 4 6 4 1
5th row: 1 5 10 10 5 1
6th row: 1 6 15 20 15 6 1
7th row: 1 7 21 35 35 21 7 1
8th row: 1 8 28 56 70 56 28 8 1
9th row: 1 9 36 84 126 126 84 36 9 1
10th row: 1 10 45 120 210 252 210 120 45 10 1

The sum of all entries in T (there are A000217(n) elements) is 3^(n-1). The first diagonal contains the counting numbers. Can you explain it? You get a beautiful visual pattern. Here are some of the ways this can be done: Binomial Theorem. In 15 and 16, find a solution to the equation: 15. C(9,5) = C(x,y); 16. C(11,5) + C(a,b) = C(12,5). At a more elementary level, we can use Pascal's Triangle to look for patterns in mathematics. Rows 0 through 16. He has noticed that each row of Pascal's triangle can be used to determine the coefficients of the binomial expansion of (x + y)^n, as shown in the figure. For instance, the first row is 11 to the power of 0 (1), the second is eleven to the power of 1 (1,1), the third is 11 to the power of 2 (1,2,1), etc. My Excel file 'BinomDivide.xls' can be downloaded at, Ok, I assume the 100th row is the one that goes 1, 100, 4950... like that. Here is a question related to Pascal's triangle.
So 5^2 divides C(100,77). Now think about the row after it. k = 0 corresponds to the row [1]. Pascal's Triangle: 1; 1 1; 1 2 1; 1 3 3 1; 1 4 6 4 1. Finding the behaviour of prime numbers in Pascal's triangle. There are eight odd numbers in the 100th row of Pascal's triangle, 89 numbers that are divisible by 3, and 96 numbers that are divisible by 5. An equation to determine what the nth line of Pascal's triangle … n; # of 3's in numerator; # of 3's in denominator; divisible by 3? Each number is the sum of the two numbers directly above it. Here I list just a few. This is down to each number in a row being involved in the creation of two of the numbers below it. What about the patterns you get when you divide by other numbers? When you divide a number by 2, the remainder is 0 or 1. Can you generate the pattern on a computer? Pascal's triangle is a way to visualize many patterns involving the binomial coefficient.
It is easily programmed in Excel (took me 15 min). How many odd numbers are in the 100th row of Pascal's triangle? The highest power p is adjusted based on n and m in the recurrence relation. In this example, n = 3 indicates the 4th row of Pascal's triangle (since the first row is n = 0). The sum of the numbers in each row of Pascal's triangle is equal to 2^n, where n represents the row number, starting at n = 0 for the first row at the top. The sum of the 100th row is combin(100,0) + combin(100,1) + combin(100,2) + ... + combin(100,100), where combin(i,j) is the binomial coefficient C(i,j). Note: the row index starts from 0. Take any row on Pascal's triangle, say the 1, 4, 6, 4, 1 row. The way the entries are constructed in the table gives rise to Pascal's Formula (Theorem 6.6.1): let n and r be positive integers with r ≤ n; then C(n+1, r) = C(n, r-1) + C(n, r). Now we start with two factors of three, so since we multiply by one every third term and divide by one every third term, we never run out: all the numbers except the 1 at each end are multiples of 3. This will happen again at 18, 27, and of course 99. What is Pascal's Triangle? This video shows how to find the nth row of Pascal's Triangle. If you sum all the numbers in a row, you will get twice the sum of the previous row. Note: if we know the previous coefficient, this formula can be used to calculate the current coefficient in Pascal's triangle. There are 5 entries which are NOT divisible by 5, so there are 96 which are.
Of course, one way to get these answers is to write out the 100th row of Pascal's triangle, divide by 2, 3, or 5, and count (this is the basic idea behind the geometric approach). I need to find out the number of entries which are not divisible by a number x in the 100th row of Pascal's triangle. Likewise, the number of factors of 5 in n! … This solution works for any allowable n, m, p. The ones that are not (divisible by 3) are C(100,n) where n = 0, 1, 9, 10, 18, 19, 81, 82, 90, 91, 99, 100. How many entries in the 100th row of Pascal's triangle are divisible by 3? What is the sum of the 100th row of Pascal's triangle? When you divide a number by 2, the remainder is 0 or 1. Pascal's triangle is an arrangement of the binomial coefficients in a triangle. When n is divisible by 5, the difference becomes one 5, then two again at n+1. Magic 11's: the algorithm I applied in order to find this is: since Pascal's triangle is powers of 11 from the second row on, the nth row can be found by 11^(n-1) and can easily be checked for which digits are not divisible by x. If we interpret each entry as a number (weird sentence, I know), 100 would actually be the smallest three-digit number in Pascal's triangle. This works till the 5th line, which is 11 to the power of 4 (14641). - J. M. Bergot, Oct 01 2012. Take time to explore the creations when hexagons are displayed in different colours according to the properties of the numbers they contain. From n = 1 to n = 24, the number of 5's in the numerator is greater than the number in the denominator (in fact, there is a difference of two 5's starting from n = 1).
In mathematics, Pascal's triangle is a triangular array of the binomial coefficients that arises in probability theory, combinatorics, and algebra. For the 100th row, the sum of the numbers is found to be 2^100 ≈ 1.2676506×10^30. Add the two and you see there are 2 carries. Suppose we have a number n; we have to find the nth (0-indexed) row of Pascal's triangle. How many entries in the 100th row of Pascal's triangle are divisible by 3? 9; 4; 4; no (here we reached the factor 9 in the denominator). First 6 rows of Pascal's Triangle written with combinatorial notation. The sum of numbers in the nth row can be determined using the formula 2^n. The number of odd numbers in the Nth row of Pascal's triangle is equal to 2^k, where k is the number of 1's in the binary form of N. In this case, 100 in binary is 1100100, so there are 8 odd numbers in the 100th row of Pascal's triangle.
You can either tick some of the check boxes above or click the individual hexagons multiple times to change their colour. By 5? Note that the number of factors of 3 in the product n! … Looking at the first few lines of the triangle you will see that they are powers of 11, i.e. the 3rd line (121) can be expressed as 11 to the power of 2.
For the purposes of these rules, I am numbering rows starting from 0, so that row … Below is an example of Pascal's triangle with 11 rows:

0th row: 1
1st row: 1 1
2nd row: 1 2 1
3rd row: 1 3 3 1
4th row: 1 4 6 4 1
5th row: 1 5 10 10 5 1
6th row: 1 6 15 20 15 6 1
7th row: 1 7 21 35 35 21 7 1
8th row: 1 8 28 56 70 56 28 8 1
9th row: 1 9 36 84 126 126 84 36 9 1
10th row: 1 10 45 120 210 252 210 120 45 10 1

Pascal's Triangle Investigation SOLUTIONS. Disclaimer: there are loads of patterns and results to be found in Pascal's triangle. For n = 100 (assumed to be what the asker meant by the 100th row; there are 101 binomial coefficients), I get … Note the symmetry of the triangle. Input the number of rows to print from the user. It is the second number in the 99th row (or 100th, depending on who you ask), or $$\binom{100}{1}$$. How many entries in the 100th row of Pascal's triangle are divisible by 3? Please comment for suggestions. It is named after the French mathematician Blaise Pascal. K(m,p) can be calculated from K(m,j) = L(m,j) + L(m,j^2) + L(m,j^3) + ... + L(m,j^p), where L(m,j) = 1 if m/j − int(m/j) = 0 (m evenly divisible by j). Here are some of the ways this can be done: Binomial Theorem. If you look at each row down to row 15, you will see that this is true. Given a non-negative integer N, the task is to find the Nth row of Pascal's Triangle. There are eight odd numbers in the 100th row of Pascal's triangle, 89 numbers that are divisible by 3, and 96 numbers that are divisible by 5. When you divide a number by 2, the remainder is 0 or 1. For the 100th row, the sum of the numbers is found to be 2^100 ≈ 1.2676506×10^30. There are also some interesting facts to be seen in the rows of Pascal's Triangle.
At n = 25 (or n = 50, n = 75), an additional 5 appears in the denominator and there are the same number of factors of 5 in the numerator and denominator, so they all cancel and the whole number is not divisible by 5. Color the entries in Pascal's triangle according to this remainder. By 5? The first row has only a 1. Enter the number of rows you want in Pascal's triangle: 7

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1

The third row has 3 numbers: 1, 1+1 = 2, 1. Since we want C(100,n) to be divisible by three, 100! must have at least one more factor of three than the denominator n!(100−n)!. Pascal's Triangle, Lesson 13-5, APPLYING THE MATHEMATICS. In 15 and 16, find a solution to the equation. Note: Could you optimize your algorithm to use only O(k) extra space? We can write down the next row as an uncalculated sum: instead of 1, 5, 10, 10, 5, 1, we write 0+1, 1+4, 4+6, 6+4, 4+1, 1+0.
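The O(k)-extra-space follow-up can be answered by updating a single list in place, walking right to left so each cell still holds the previous row's value when it is needed. A Python sketch (the original exercise is language-agnostic; names here are illustrative):

```python
def pascal_row(k):
    """Row k (0-based) of Pascal's triangle using one list: O(k) extra space."""
    row = [1] * (k + 1)
    for n in range(1, k + 1):
        # Walk right-to-left so row[j-1] is still the previous row's entry.
        for j in range(n - 1, 0, -1):
            row[j] += row[j - 1]
    return row

print(pascal_row(3))   # [1, 3, 3, 1]
```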
Examples: Input: N = 3. Output: 1, 3, 3, 1. Explanation: the elements in the 3rd row are 1 3 3 1. In any row of Pascal's triangle, the sum of the 1st, 3rd and 5th numbers is equal to the sum of the 2nd, 4th and 6th numbers (sum of odd-position entries = sum of even-position entries). 18; 8; 8; no (since we reached another factor of 9 in the denominator, which has two 3's, the number of 3's in numerator and denominator are equal again; they all cancel out and no factor of 3 remains). Thus the number of k(n,m,j)'s that are > 0 can be added to give the number of C(n,m)'s that are evenly divisible by p; call this number N(n,j). The calculation of k(n,m,p) can be carried out from its recurrence relation without calculating C(n,m). For n < 125: C(n,m+1) = (n − m)·C(n,m)/(m+1), m = 0, 1, ..., n−1. Let k(n,m,j) = number of times that the factor j appears in the factorization of C(n,m). How many entries in the 100th row of Pascal's triangle are divisible by 3? If we plotted the coefficients for the 1000th row of Pascal's Triangle, the resulting 1000 points would look very much like a normal distribution. Define a finite triangle T(m,k) with n rows such that T(m,0) = 1 is the left column, T(m,m) = binomial(n-1,m) is the right column, and the other entries are T(m,k) = T(m-1,k-1) + T(m-1,k) as in Pascal's triangle. In 15 and 16, find a solution to the equation. Color the entries in Pascal's triangle according to this remainder.
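The multiplicative recurrence C(n,m+1) = (n − m)·C(n,m)/(m+1) builds a whole row without computing any factorials. A small Python sketch (names are illustrative):

```python
def row_by_recurrence(n):
    """Build row n from C(n,0) = 1 via C(n,m+1) = (n-m)*C(n,m)/(m+1)."""
    c, row = 1, [1]
    for m in range(n):
        c = c * (n - m) // (m + 1)   # exact: the division always comes out even
        row.append(c)
    return row

print(row_by_recurrence(4))              # [1, 4, 6, 4, 1]
print(row_by_recurrence(100)[77] % 20)   # 0
```

The second line checks the claim made elsewhere on this page that C(100,77) is divisible by 20.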
Consider again Pascal's Triangle, in which each number is obtained as the sum of the two neighboring numbers in the preceding row. Presentation Suggestions: prior to the class, have the students try to discover the pattern for themselves, either in homework or in group investigation. Here I list just a few. Looking at the first few lines of the triangle you will see that they are powers of 11, i.e. the 3rd line (121) can be expressed as 11 to the power of 2. Since Pascal's triangle is infinite, there's no bottom row. The sum of each row of Pascal's triangle is a power of 2. By 5? We find that in each row of Pascal's Triangle, n is the row number and k is the entry in that row, counting from zero. Pascal's Triangle represents a triangular-shaped array of numbers with n rows, each row building upon the previous row. It is also formed by finding C(n,k) for row number n and column number k. When you divide a number by 2, the remainder is 0 or 1. 1; 1 + 1 = 2; 1 + 2 + 1 = 4; 1 + 3 + 3 + 1 = 8; etc. Step-by-step descriptive logic to print Pascal's triangle. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in India, Persia, China, Germany, and Italy. Pascal's triangle is a way to visualize many patterns involving the binomial coefficient. Thus C(100,77) is divisible by 20. It just keeps going and going. Calculate the 3rd element in the 100th row of Pascal's triangle. For the purposes of these rules, I am numbering rows starting from 0.
This increased the number of 3's by two, and the number of factors of 3 in the numerator and denominator are equal. 2. An Arithmetic Approach. For example, the fifth row of Pascal's triangle can be used to determine the coefficients of the expansion of (x + y)^4. In fact, if Pascal's triangle were expanded further past row 15, you would see that the sum of the numbers of any nth row equals 2^n. The algorithm I applied in order to find this is: since Pascal's triangle is powers of 11 from the second row on, the nth row can be found by 11^(n-1) and can easily be … How many odd numbers are in the 100th row of Pascal's triangle? The nth row of Pascal's triangle contains the coefficients of the expanded polynomial (x + y)^n. Expand (x + y)^4 using Pascal's triangle. There are eight odd numbers in the 100th row of Pascal's triangle, 89 numbers that are divisible by 3, and 96 numbers that are divisible by 5. Pascal's triangle is an arrangement of the binomial coefficients in a triangle. As we know, Pascal's triangle can be created as follows: in the top row, there is an array of 1. To iterate through rows, run a loop from 0 to num, incrementing by 1 in each iteration; the loop structure should look like for (n = 0; n < num; n++). For n > 0 and m ≠ 1, prove or disprove this equation: …
def mk_row(triangle, row_number):
    """Function creating a row of a Pascal's triangle.
    Parameters: triangle (list of earlier rows), row_number (index of the row to create, >= 1).
    """
    prev = triangle[row_number - 1]
    return [1] + [a + b for a, b in zip(prev, prev[1:])] + [1]
| 2021-05-16 06:35:03 |
https://physics.stackexchange.com/questions/474052/finding-the-green-function-of-an-operator-in-qft/474059 | # Finding the Green function of an operator in QFT
I'm working on some quantum field theory and have to operate on a field with the following operator:
$$(x^\mu \partial_\mu + 1)^{-1}$$
I've been trying to find an explicit form of this operator, but it has proved a challenge. Calling it $$\phi$$, I know it has to obey:
$$\int d^4x\> (x^\mu \partial_\mu + 1)\phi = 1$$
but all my attempts have gotten me nowhere. Can anyone point me in the right direction?
• Explicit form how? Would you be happy to know how it works on a convenient basis or you need like an integral expression? – MannyC Apr 21 '19 at 1:11
• @MannyC anything to evaluate it! (So really the former) – Craig Apr 21 '19 at 2:02
• A way is writing down the inverse operator as a series $\sum_{k=0}^\infty (-1)^k (x^\mu \partial_\mu)^k$. It works on real analytic functions at least... – Valter Moretti Apr 21 '19 at 10:30
I would not presume pointing you in the right direction, but you might consider the evident Lorentz-invariant eigenfunctions of your operator, namely the powers $$r^n$$, for $$r\equiv \sqrt{x^\mu x_\mu}$$, $$\frac{1}{1+x^\mu\partial_\mu} ~ r^n = \frac{1}{1+n} r^n ~.$$ It is also easy to quantify the directional eigenfunctions $$(a\cdot x)^n$$ as well, since the kernel $$x^\mu\partial_\mu$$ is just a power counter, $$x^\mu\partial_\mu r^n= x^\mu (2 x_\mu) \frac{n}{2} (r^2)^{n/2-1} =n ~r^n.$$
• Hi Cosmas, how is this fact derived/obvious to you? Am I missing something basic? – Craig Apr 21 '19 at 2:03
• OK, expanding the point. – Cosmas Zachos Apr 21 '19 at 2:46
• Ahh okay I was over thinking the act of inverting the operator. Thanks a ton! – Craig Apr 21 '19 at 2:55
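The power-counting step in the answer can also be checked numerically; a minimal sketch with central finite differences, using a Euclidean $r$ purely for the check (as the comments note, the Minkowski norm needs more care):

```python
# Check that x^mu d_mu acts as a power counter on r^n, so that
# (1 + x.d)^(-1) r^n = r^n / (1 + n).  Euclidean r, n = 3.
def f(p, n=3):
    r = sum(c * c for c in p) ** 0.5
    return r ** n

x, h, n = (0.3, -0.7, 1.1, 0.4), 1e-6, 3
euler = sum(
    xi * (f(x[:i] + (xi + h,) + x[i + 1:]) - f(x[:i] + (xi - h,) + x[i + 1:])) / (2 * h)
    for i, xi in enumerate(x)
)
print(abs(euler - n * f(x)) < 1e-5)   # True: the eigenvalue is n, as claimed
```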
Building on Cosmas' answer. You can extend his definition to a basis of functions on $$\mathbb{R}^d$$. This is the reason behind my comment before. Any function can be written as a radial function times a spherical harmonic $$\varphi(x) = f(|x|) \,Y_{l_1,\ldots l_{d-1}}(\theta_1,\ldots,\theta_{d-1})\,.$$ The spherical harmonics can be equivalently represented as polynomials of degree $$l_{d-1}$$ (where $$l_{d-1}\geq l_{d-2}\geq\cdots \geq |l_1|$$) $$Y_{l_1,\ldots l_{d-1}}(\theta_1,\ldots,\theta_{d-1}) = \frac{1}{|x|^{l_{d-1}}}C(l_1,\ldots,l_{d-1})^{\mu_1\ldots \mu_{l_{d-1}}} x^{\mu_1}\cdots x^{\mu_{l_{d-1}}}\,,$$ where the coefficients are symmetric traceless tensors. Clearly in this representation $$x^\mu \partial_\mu Y_{l_1,\ldots l_{d-1}}(\theta_1,\ldots,\theta_{d-1}) = 0$$ because the function is homogeneous of degree zero. So now your operator acts only on $$f$$ and we have reduced the problem to one dimension.
For any sufficiently regular function ($$L^2(\mathbb{R})$$ should be enough) one can define the Mellin (inverse) transform as $$f(x) = (\mathcal{M}^{-1}g)(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \mathrm{d}s\,x^{-s}\,g(s)\,.$$ Check the link for the conditions on $$c$$ and the direct Mellin transform used to find $$g(s)$$. Now for any sufficiently well-behaved function $$F$$ one can define $$F\big(-x_\mu\partial^\mu\big)f(|x|) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i \infty}\mathrm{d}s \,F(s)\,|x|^{-s}g(s)\,.$$ Your question regards the special case $$F(s) = \frac{1}{1-s}\,.$$
To summarize, you can decompose any field into a radial and an angular part, do the inverse Mellin transform on the radial part, apply the operator in Mellin space and transform back.
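As a concrete sanity check of the direct transform (illustration only, not the poster's field): the Mellin transform of $e^{-x}$ is $\Gamma(s)$, which even a crude midpoint-rule integral reproduces:

```python
import math

def mellin(f, s, upper=40.0, steps=200000):
    """Crude midpoint-rule approximation of M[f](s) = int_0^inf x^(s-1) f(x) dx."""
    dx = upper / steps
    return sum(((i + 0.5) * dx) ** (s - 1) * f((i + 0.5) * dx) for i in range(steps)) * dx

s = 2.5
approx = mellin(lambda x: math.exp(-x), s)
print(abs(approx - math.gamma(s)) < 1e-4)   # True: M[e^-x](s) = Gamma(s)
```

In the answer's language, one would then multiply this $g(s)$ by $F(s) = 1/(1-s)$ before inverting.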
• I do not think it works as it stands: you are using the Euclidean metric, whereas the post refers to the Minkowskian one so that what you indicate as the norm of $x$ has actually a sign. – Valter Moretti Apr 21 '19 at 10:09
• This does not matter for the second part of my answer obviously because $f$ is still a one dimensional function. It becomes a bit messier to define the spherical harmonics but it can be done. They will have a continuous label and one of the angles (being the rapidity) will be unbounded. – MannyC Apr 21 '19 at 11:34
• I'm a little wary of the signs, can you elaborate on why there's a minus sign in the argument of the capital F in the second-last equation? Why does F(-O) correspond to O in the integral? – Craig Apr 23 '19 at 7:14
• It's just because the Mellin transform is defined with $x^{-s}$. So $-\partial_\mu x^\mu$ becomes multiplication by $s$. – MannyC Apr 23 '19 at 14:27 | 2020-04-09 17:50:15 |
https://bestbtcxonyngj.netlify.app/wilday86049namy/accounting-rate-of-return-formula-average-investment-124.html | ## Accounting rate of return formula average investment
The accounting rate of return (ARR) is the percentage rate of return expected on an investment or asset as compared to the initial investment cost. ARR divides the average revenue from an asset by the company's initial investment to derive the ratio or return that can be expected over the lifetime of the asset or related project.
22 May 2018: Steps to calculate ARR: calculate the average investment of the project, then determine the average profit. The business accepts a project if the ARR exceeds the target (cut-off) rate. Formula: ARR = Average profit / Average investment.
8 Oct 2012: Payback period and accounting rate of return method. The payback period of the project is estimated using a straightforward formula: … over the life of the investment, and the denominator is the average book value.
2 May 2017: The unadjusted rate of return is computed as follows (the simple rate of return method, or the accounting rate of return method): take the increase in future average income and divide it by the initial investment cost.
24 Mar 2012: The internal rate of return (IRR) is a widely used benchmark for assessing the reliability of the accounting rate of return (ROA) as a measure of economic profit. We also show that the average ROA can be used to make meaningful comparisons when due allowance is made for differences in investment scale.
Video created by Emory University for the course "Finance for Non-Financial Managers". This module will demonstrate a variety of investment decision tools.
To get the required rate of return, we need to use the formula for ARR or Accounting Rate of Return below: ARR = (Average annual operating profit) / (Average investment) × 100%. In order to calculate ARR, we will use the example below.
The average rate of return, also known as the accounting rate of return (ARR), is a method to evaluate the profitability of investment projects and is very commonly used for investment appraisal. The formula for the accounting rate of return is (average annual accounting profit / average investment) × 100%. Let us look at an example: a company is considering investing in a project which requires an initial investment in a machine of $40,000. The ARR itself is derived by dividing the average profit (positive or negative) by the average amount of money invested. For instance, if the annual profit for a given project over a three-year span averages $100, and the average investment in a given year is $1,000, the ARR would be $100 / $1,000 = 10%. Equivalently: accounting rate of return (ARR/ROI) = average profit / average book value × 100. As for interpretation, the ARR (also known as the Average Accounting Return, AAR) indicates the level of profitability of an investment: the higher the percentage, the better. The average rate of return method of investment appraisal looks at the total accounting return for a project to see if it meets the target return; an example of an ARR calculation is a project with an investment of £2 million and a total profit of £1,350,000 over the five years of the project. ARR is a financial ratio used in capital budgeting; the ratio does not take into account the concept of time value of money. ARR calculates the return generated from the net income of the proposed capital investment, expressed as a percentage. Say, if ARR = 7%, it means that the project is expected to earn seven cents out of each dollar invested.
If the ARR is equal to or greater than the required rate of return, the project is accepted. Accounting rate of return (also known as simple rate of return) is the ratio of the estimated accounting profit of a project to the average investment made in the project; it can also be computed against the initial outlay: ARR = (Average annual profit after tax / Initial investment) × 100. Two common variants are: Average Rate of Return = Average annual net earnings after taxes / Initial investment × 100%, or Average Rate of Return = Average annual net earnings after taxes / Average investment over the life of the project × 100%. In words, the accounting rate of return formula is calculated by dividing the income from your investment by the cost of the investment. Usually both of these numbers are either annual numbers or an average of annual numbers; you can also use monthly or even weekly numbers, as the time length doesn't matter. The result of the calculation is expressed as a percentage. Thus, if a company projects that it will earn an average annual profit of $70,000 on an initial investment of $1,000,000, then the project has an accounting rate of return of 7%. There are several serious problems with this concept, such as its disregard for the time value of money.
The ARR formula is:

$$\text{ARR} = \frac{\text{Average return during period}}{\text{Average investment}}$$

where Average investment = (book value at year 1 + book value at end of useful life) / 2. Stated simply: ARR = average annual profit / average investment.
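The variants quoted above differ only in the denominator. Here is a minimal sketch of both (the function names are ours, not from any of the quoted sources):

```python
def arr_initial(total_profit, years, initial_investment):
    """ARR (%) against the initial outlay: average annual profit / initial investment."""
    return (total_profit / years) / initial_investment * 100

def arr_average(total_profit, years, book_value_start, book_value_end):
    """ARR (%) against the average investment (mean of opening and closing book value)."""
    average_investment = (book_value_start + book_value_end) / 2
    return (total_profit / years) / average_investment * 100

# The five-year project from the text: GBP 1,350,000 total profit on a
# GBP 2,000,000 outlay gives 270,000 / 2,000,000 = 13.5 % per year.
print(round(arr_initial(1_350_000, 5, 2_000_000), 2))  # → 13.5
```

The same functions reproduce the other figures quoted above, e.g. an average annual profit of $70,000 on a $1,000,000 initial investment gives 7%.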
https://www.physicsforums.com/threads/break-second-order-ode-into-a-system-of-first-order-odes.526388/ | # Homework Help: Break Second order ODE into a system of first order ODE's
1. Sep 1, 2011
### Trenthan
1. The problem statement, all variables and given/known data
I haven't done this for several years and have forgotten. Kicking myself now over it since it looks like something so simple, but I cannot figure it out... I need to break this second order ODE into a system of first order ODEs in matrix form to use within a Crank–Nicolson method.
$\frac{d^{2}\Theta}{dt^{2}} + c\frac{d\Theta}{dt} + \frac{g}{L}\sin(\Theta) = 0$
3. The attempt at a solution
let
$\phi_{1} = \Theta$
$\frac{d\phi_{1}}{dt} = \phi_{2}$

$\frac{d\phi_{2}}{dt} = -c\phi_{2} - \frac{g}{L}\sin{\phi_{1}}$
Now the problem is the $\sin{\phi_{1}}$: how do I take the phi out? K is meant to be the matrix of coefficients in front of phi, but in this case it's inside the sin :S
$\left[ {\begin{array}{cc} \frac{d\phi_{1}}{dt} \\ \frac{d\phi_{2}}{dt} \\ \end{array} } \right] = \left[ {\begin{array}{cc} 0 & 1 \\ unknown & -c \\ \end{array} } \right] \left[ {\begin{array}{cc} \phi_{1} \\ \phi_{2} \\ \end{array} } \right]$
Cheers Trent
Last edited: Sep 1, 2011
2. Sep 1, 2011
### lanedance
This is a non-linear DE, hence the difficulties
If theta was very small you could use the small angle approximation to linearise the equation
$$sin(\theta(t))\approx \theta(t)$$
3. Sep 1, 2011
### lanedance
Now looking at Crank–Nicolson, which is a finite difference method; it seems to be set up for partial DEs
http://en.wikipedia.org/wiki/Crank–Nicolson_method
As this is a 2nd order nonlinear ordinary DE, why not something like Runge–Kutta?
4. Sep 1, 2011
### Trenthan
Unfortunately it's not; we are modelling a pendulum which is lubricated well :(
5. Sep 2, 2011
### Trenthan
We have been instructed to use Crank–Nicolson for some stupid reason in our design brief.
I'm looking up other iterative techniques such as Newton's method which may be applied within the method...
Any suggestions or thoughts....?????
6. Sep 2, 2011
### lanedance
I haven't used it, but everything I see on Crank–Nicolson is for 2D (x, t) differential equations, so I'm not really sure how it applies here
7. Sep 2, 2011
### Trenthan
All good
Using the Crank–Nicolson approach and then applying Newton's method, which involves taking the Jacobian etc. and solving for the residual to be zero, works.
Thanks for your time and help, lanedance
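To close the loop, here is a minimal sketch of the approach described in this thread (not the poster's actual code; the parameter values g = 9.81, L = 1.0, c = 0.1 and the initial angle are illustrative): a trapezoidal (Crank–Nicolson) step applied to the first-order system, with the implicit equation solved by Newton's method on the residual, using its Jacobian.

```python
import numpy as np

G_OVER_L = 9.81 / 1.0  # g/L (illustrative values)
C = 0.1                # damping coefficient (illustrative)

def f(u):
    """RHS of the first-order system u = (theta, omega)."""
    theta, omega = u
    return np.array([omega, -C * omega - G_OVER_L * np.sin(theta)])

def jac_f(u):
    """Jacobian of f with respect to u."""
    theta, _ = u
    return np.array([[0.0, 1.0],
                     [-G_OVER_L * np.cos(theta), -C]])

def crank_nicolson_step(u, h, tol=1e-12, max_iter=20):
    """One trapezoidal step: solve v = u + (h/2)(f(u) + f(v)) by Newton."""
    v = u + h * f(u)  # explicit Euler predictor as the initial guess
    for _ in range(max_iter):
        residual = v - u - 0.5 * h * (f(u) + f(v))
        if np.linalg.norm(residual) < tol:
            break
        J = np.eye(2) - 0.5 * h * jac_f(v)  # Jacobian of the residual
        v = v - np.linalg.solve(J, residual)
    return v

def solve(u0, h, n_steps):
    u = np.array(u0, dtype=float)
    trajectory = [u.copy()]
    for _ in range(n_steps):
        u = crank_nicolson_step(u, h)
        trajectory.append(u.copy())
    return np.array(trajectory)

traj = solve([0.5, 0.0], h=0.01, n_steps=1000)  # 10 s of damped swinging
```

Since the damping only removes energy, the angle's amplitude should decay from the initial 0.5 rad; that is a convenient sanity check on the implementation.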
https://www.meracalculator.com/unitconverter/heat-flux-density-converter.php | # Heat Flux Density Converter
Heat flux or thermal flux is the rate of heat energy transfer through a given surface. The measurement of heat flux is most often done by measuring a temperature difference over a piece of material with known thermal conductivity.
Heat rate is a scalar quantity, while heat flux is a vectorial quantity. To define the heat flux at a certain point in space, one takes the limiting case where the size of the surface becomes infinitesimally small.
The flux of electric and magnetic field lines is frequently discussed in electrostatics, because Maxwell's equations in integral form involve such surface integrals for the electric and magnetic fields. Heat, a form of kinetic energy, is transferred in three ways: conduction, convection, and radiation. Heat transfer (also called thermal transfer) can occur only if a temperature difference exists, and then only in the direction of decreasing temperature.
Example:
Convert 5 calorie/second square centimeter to various units.
Solution:
BTU/hour square foot = 66360.42984099999
BTU/minute square foot = 1106.007164836955
calorie/second square centimeter = 5
dyne/hour centimeter = 753624000001.6965
erg/hour square millimeter = 7536240000.016965
foot pound/minute square foot = 860660.7879943552
gram calorie/hour square centimeter = 17999.99999879024
horsepower (metric)/square foot = 26.442359541914602
horsepower (UK)/square foot = 26.08062992958372
CHU/hour square foot = 36866.90548278824
joule/second square meter = 209340.00000078825
Kilocalorie/hour square foot = 16722.547192856655
kilocalorie/hour square meter = 180000.00000059672
kilowatt/square meter = 209.34000000078822
watt/square centimeter = 20.93400000007882
watt/square inch = 135.05779430146677
watt/square meter = 209340.00000078825
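The worked example above is just repeated multiplication by fixed factors. A sketch of the idea (only a handful of the listed units; the factors assume the IT calorie of 4.1868 J and are our own, so tiny discrepancies with the table's trailing digits are expected), pivoting every conversion through watt/square meter:

```python
CAL_IT = 4.1868  # joules per IT calorie (assumed)

# W/m^2 represented by one unit of each quantity below.
W_PER_M2 = {
    "calorie/second square centimeter": CAL_IT * 1e4,
    "kilowatt/square meter": 1e3,
    "watt/square centimeter": 1e4,
    "watt/square inch": 1.0 / 0.0254 ** 2,
    "watt/square meter": 1.0,
}

def convert(value, src, dst):
    """Convert a heat flux density by pivoting through W/m^2."""
    return value * W_PER_M2[src] / W_PER_M2[dst]

print(round(convert(5, "calorie/second square centimeter", "watt/square meter"), 3))  # → 209340.0
```

Adding a new unit only requires one new dictionary entry, since every conversion goes through the pivot unit.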
http://mathoverflow.net/questions/59142/another-solution-to-pde-possible

Question (Regina, 2011-03-22): Hi there, I have the following PDE: $$(\partial_x x^4 \partial_x - \partial_t^2)y(x,t)=0$$ and found the solution $$y=a+t^2-1/x^2,$$ with $a$ a constant. Is this solution unique? Does anyone know of any other solutions, or tricks to generate other solutions? Thankful for help. Regina

Answer (Robert Israel): Since your equation is linear and homogeneous, linear combinations of solutions are solutions. Basic solutions include $1$ and $t^2 - 1/x^2$ from your solution, as well as $1/x^3$, $t$, and $(1-a/x) \exp(a/x) \exp(a t)$ and $(1-b/x) \exp(b/x) \exp(-b t)$ for any constants $a$ and $b$.

Answer (Robert Israel): More generally, $F(t + 1/x) - x F'(t + 1/x)$ and $F(t - 1/x) + x F'(t - 1/x)$ for any differentiable function $F$.

Answer (Denis Serre): PDEs look like ODEs, but only look like. The solution set of an ODE of order $n$ is usually parametrized by $n$ scalars (integration constants). On the contrary, the solution set of a PDE of order $n$ in $d$ independent variables ($d=2$ in your case) is usually parametrized by $n$ functions of $d-1$ variables. This is clear in the hyperbolic case because you just solve a Cauchy problem with initial data on a non-characteristic hypersurface. More generally, if the equation has analytic coefficients, you can apply the Cauchy–Kowalevskaya Theorem. In conclusion, your explicit solutions are far from unique.
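As a quick numerical cross-check (not part of the original thread; the sample point (x, t) = (1.3, 0.7) and the constants a = 3 and a = 0.5 are arbitrary choices), some of the basic solutions listed above can be tested against the PDE with finite differences:

```python
import math

def pde_residual(y, x, t, h=1e-3):
    """Central-difference residual of (d/dx)(x^4 dy/dx) - d^2y/dt^2 at (x, t)."""
    def flux(xx, tt):  # x^4 * dy/dx
        return xx ** 4 * (y(xx + h, tt) - y(xx - h, tt)) / (2 * h)
    d_flux = (flux(x + h, t) - flux(x - h, t)) / (2 * h)
    y_tt = (y(x, t + h) - 2 * y(x, t) + y(x, t - h)) / h ** 2
    return d_flux - y_tt

candidates = [
    lambda x, t: 3.0 + t ** 2 - 1.0 / x ** 2,                  # Regina's solution, a = 3
    lambda x, t: 1.0 / x ** 3,
    lambda x, t: t,
    lambda x, t: (1 - 0.5 / x) * math.exp(0.5 / x + 0.5 * t),  # Israel's family, a = 0.5
]
residuals = [abs(pde_residual(y, 1.3, 0.7)) for y in candidates]
```

All four residuals vanish up to discretization error, while a non-solution such as y = 1/x leaves a residual of order one.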
https://www.nature.com/articles/s41598-018-37536-0?error=cookies_not_supported&code=3c7084d8-6878-47a0-887a-44e4f15f2a75 | Article | Open | Published:
# Multi-step planning of eye movements in visual search
## Abstract
The capability of directing gaze to relevant parts in the environment is crucial for our survival. Computational models have proposed quantitative accounts of human gaze selection in a range of visual search tasks. Initially, models suggested that gaze is directed to the locations in a visual scene at which some criterion such as the probability of target location, the reduction of uncertainty or the maximization of reward appear to be maximal. But subsequent studies established that in some tasks humans instead direct their gaze to locations such that after the single next look the criterion is expected to become maximal. However, in tasks going beyond a single action, the entire action sequence may determine future rewards, thereby necessitating planning beyond a single next gaze shift. While previous empirical studies have suggested that human gaze sequences are planned, quantitative evidence for whether the human visual system is capable of finding optimal eye movement sequences according to probabilistic planning is missing. Here we employ a series of computational models to investigate whether humans are capable of looking ahead more than the next single eye movement. We found clear evidence that subjects’ behavior was better explained by the model of a planning observer compared to a myopic, greedy observer, which selects only a single saccade at a time. In particular, the location of our subjects’ first fixation differed depending on the stimulus and the time available for the search, which was well predicted quantitatively by a probabilistic planning model. Overall, our results are the first evidence that the human visual system’s gaze selection agrees with optimal planning under uncertainty.
## Introduction
Actively deciding where to direct our eyes is an essential ability in fundamental tasks, which rely on acquiring visual information for survival such as gathering food, avoiding predators, making tools, and social interaction. As we can only perceive a small proportion of our surroundings at any moment in time due to the spatial distribution of our retinal receptor cells1, we are constantly forced to actively target our visual apparatus towards relevant parts of the visual scene using eye movements2. Thus, vision is a sequential process of active decisions3. Perceptually, these decisions have been characterized in terms of targeting gaze towards locations that are most salient4, maximizing knowledge about the environment5,6,7, or optimizing performance in the ongoing task8,9,10,11,12. Much less research has investigated how the visual system selects sequential decisions in these tasks.
The sequential nature of eye movements raises the question of how each subsequent action is selected to achieve the task goal. This question can only be answered quantitatively with reference to a computational model. Several models have been proposed to describe possible strategies differing with respect to how future rewards influence the selection of the next action. The most naive strategy suggests that the visual system selects the location at which the task relevant criterion such as information about the search target or immediate reward is maximal4,5,6,7,13. This corresponds to always moving to the location which is currently believed to be the most likely location of the target. E.g., saliency models posit that the visual system maintains an internal relevance map and that the next saccade moves gaze to the location currently having highest saliency4. Similarly, the ‘maximum-a-posteriori-searcher’13,14 moves to the location with the highest probability of containing the target. Empirical studies have found partial support for humans adopting this strategy in a number of tasks4,5,6,7. A more sophisticated strategy has been proposed by a number of other models, according to which the visual system selects a target such that the criterion of the search task is expected to be maximal after having carried out the single next gaze shift8,9,10,12. E.g., the ‘ideal-searcher’ by Najemnik and Geisler8 will saccade in between two potential targets, which helps maximally in deciding which one of the two potential targets is the correct target.
For tasks that require only a single action, the ‘ideal searcher’8 behaves optimally. However, many real world tasks involve more than a single isolated action. For sequences of eye movements, delayed rewards obtained only after a sequence of actions can play a crucial role. Thus, what strategy should the visual system employ to select several actions in sequence? The answer to this question leads to the third strategy, which is readily available within the artificial intelligence, machine learning, optimal control, and reinforcement learning literature15,16,17: the optimal sequence of actions in general involves planning. Behavioral sequences are planned when “deciding on a course of action by considering possible future situations before they are actually experienced” (p. 9 of ref.16). Hence, planning is defined as taking future rewards into consideration during current action selection. In contrast, a policy is called “myopic” or “greedy”, if only the immediate reward is taken into account (p. 632 in ref.15).
A classic example from the optimal control and reinforcement learning literature is the mountain car problem, in which a car is located in the valley between two hills. Possible actions for the driver are to accelerate forward or backward and the goal is to reach the top of one of the mountains. However, the mountain is too steep to conquer from the valley. Instead, momentum has to be built by going in the opposite direction first. So, the car first has to move away from the target to build up momentum to later reach it16. Hence, the next action associated with the maximum immediate reward r0 (‘get closer to the target’) is not necessarily the action that yields the maximum reward for the whole action sequence r0 + r1 + ⋯ + rn. As a consequence, optimal action selection for sequential behavior depends on the horizon n, i.e., the number of future rewards that are incorporated into the selection of the next action. Thus, if two actions are considered, the horizon is two and planning needs to consider the outcome of the first and second action to select both decisions.
Surprisingly, all of the reviewed computational models for eye movement selection are myopic, i.e. they choose actions that maximize the immediate reward8,10,11,12,13,14,18,19, either by moving gaze to the currently most likely target or to the target that promises to reveal the most likely target after a single next eye movement. In this case, the horizon equals one as only the next reward is used for action selection. In practice, the problem of delayed rewards is circumvented by either investigating only single saccades or by choosing tasks where both policies, myopic and planned, may lead to similar solutions. To our knowledge, there exist neither computational models nor empirical data investigating whether humans are capable of planning eye movements. This is even more surprising considering the results of behavioral investigations which have interpreted a variety of empirical findings as evidence for human gaze planning20,21,22,23. These studies have shown that the latency of the first saccade was higher for longer sequences of saccades21. Also, discrimination performance was enhanced at multiple locations within an instructed sequence of saccades22. Furthermore, if an eye movement sequence was interrupted by additional information midway, the execution of the second saccade was delayed23. While these results suggest that a scanpath of at least two saccades is internally prepared before execution, it is unclear whether multiple future fixation locations are jointly chosen to maximize performance in a task, which is a computational signature of planning.
In the present study, we devised computational models for the three search strategies described above, i.e. selecting the location of highest target probability (the ‘maximum-a-posteriori searcher’13,14), selecting the target that will lead to best disambiguation after the next gaze shift (ideal-observer based searcher8), and selecting a sequence of gaze targets to maximize overall task performance (a probabilistic planning based searcher). All three strategies were formalized within the framework of partially observable Markov decision processes15,16,17. Using these models, we implemented a visual search task to investigate whether the human visual system is capable of planning. Because myopic policies such as the ‘maximum-a-posteriori searcher’13 or the ‘ideal searcher’8 choose the next gaze target only based on the immediate reward while ignoring future rewards, actions selected by these models do not depend on the length of the entire action sequence. By contrast, an action within a planned policy depends on the entire sequence15,16,17. This fundamental difference can be used to derive an experimental design for testing the planning capabilities of the visual system. We formalized these three strategies within the framework of partially observable Markov decision processes15,16,17 and then derived algorithms for all three strategies. These models were employed to generate and select stimuli for which the models predict a significant difference in strategies, such that the two potential models (planned or myopic) led to different gaze sequences. Crucially, we also selected stimuli for which the myopic and planning strategies did not differ substantially. The reason is that this should not only further validate the proposed planning model but also reconcile the present study with previous empirical investigations, which had found evidence that human gaze strategies are well described by an ‘ideal searcher’ in some tasks.
Using behavioral analyses and Bayesian model comparison we found strong evidence that the human visual system is capable of planning gaze sequences.
## Results
In our task, subjects searched for a hidden target within irregularly bounded shapes (Fig. 1a). Using a gaze-contingent paradigm, the hidden target only became visible if a fixation landed close enough. This search area was made explicit by showing the shape’s texture for all points closer than 6.5° to the fixation location. If a target was located in the search area, it became visible to the participant after a delay of 130 ms. Targets were easily detectable once they became visible (detection proportion: 98.2%). Overall, all shapes contained a target in half the trials. We used two durations as search intervals: a short interval (250 ms) providing enough time for a single saccade and a long interval (550 ms) providing enough time for two saccades. Trials were presented in blocks containing either only short intervals or only long intervals. The procedure for a single trial is shown in Fig. 1b. By using a blocked design of 100 consecutive trials with the same interval length, subjects knew the upcoming trial duration (short or long).
### Computational models for action selection in visual search
Given the observer has formed a belief about the location of the target in the visual search task, how should gaze targets be selected? We derived models for the myopic observer πmyopic and the planning observer πplanned for our visual search task based on the framework of partially observable Markov decision processes (see Methods). In our experiment, participants directed their gaze to suitable locations within a shape. Subsequently, they indicated whether the shape contained a target or not through a button press. The quality of this decision depends on the fixated locations and improves if the fixation locations are chosen strategically to cover more area. Also, the probability of making a correct statement is proportional to the probability of finding the target. Depending on the search time, the action sequence in our task comprised one ((x1, y1); short condition) or two ((x1, y1, x2, y2); long condition) fixation locations.
For the short condition, a single fixation location (x1, y1) was selected. In this case, both strategies lead to the same action, because for both models only the consequences of a single gaze shift need to be taken into account. Hence, the maximal horizon of the sequence is 1, leading to:
$${\pi }_{{\rm{myopic}}}={\pi }_{{\rm{planned}}}=\mathop{{\rm{argmax}}}\limits_{({x}_{1},{y}_{1})}\,P\,({\rm{correct}}|{x}_{1},{y}_{1})$$
(1)
where P(correct | x1, y1) is the probability of finding the target when fixating (x1, y1), which is proportional to the amount of the shape covered by the search area (see Methods for how this is computed). The action selection for the short search interval is depicted in the left panel of Fig. 2a.
For the long condition, a sequence of two fixation locations (x1, y1, x2, y2) was chosen resulting in a maximum horizon of two. In this case, the two strategies differ. First, the myopic observer uses the uncertainty of the current observation to select only the next gaze target such that the probability of detecting the location of the target will be maximal after each single saccade. Thus, the myopic observer sequentially chooses the fixation location with the maximum expected immediate reward resulting in the policy:
$${\pi }_{{\rm{myopic}}}:=(\mathop{{\rm{argmax}}}\limits_{({x}_{1},{y}_{1})}\,P({\rm{correct}}|{x}_{1},{y}_{1}),\mathop{{\rm{argmax}}}\limits_{({x}_{2},{y}_{2})}\,P({\rm{correct}}|{x}_{1},{y}_{1},{x}_{2},{y}_{2}))$$
(2)
where xn, yn are the coordinates of nth fixation location and P(correct | xn, yn) denotes the probability of deciding correctly whether a target is present after the nth fixation.
By contrast, the planning observer uses the uncertainty of the current observation to select the upcoming gaze targets such that the probability of detecting the location of the target is expected to be maximal after the sequence of two saccades. Thus, the planning observer incorporates the whole sequence in the selection of all actions:
$${\pi }_{{\rm{planned}}}:=(\mathop{{\rm{argmax}}}\limits_{({x}_{1},{y}_{1})}\,P({\rm{correct}}|{x}_{1},{y}_{1},{x}_{2},{y}_{2}),\mathop{{\rm{argmax}}}\limits_{({x}_{2},{y}_{2})}\,P({\rm{correct}}|{x}_{1},{y}_{1},{x}_{2},{y}_{2}))$$
(3)
where P(correct | x1, y1, x2, y2) is the probability of a correct decision when fixating location (x1, y1) followed by (x2, y2).
Figure 2 illustrates the difference between the two computational strategies for the two conditions, i.e. the short search interval, allowing a single gaze shift, and the long search interval, allowing a maximum of two gaze shifts. The action selection for both the myopic and the planning observer for the long condition is shown in the right panel of Fig. 2a. Accordingly, three testable hypotheses can be derived from the computational models: H1: If eye movements are planned, we expect a difference in the location of the first fixation depending on the search interval for some stimuli. H2: We expect fixation locations to be better explained by the planning observer compared to the myopic observer. H3: The differences between the myopic and the planning observers also depend on the search shape, such that the gaze targets may coincide for the two models (Fig. 2b).
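The difference between the myopic policy of Eq. (2) and the planning policy of Eq. (3) can be made concrete with a toy coverage model. The sketch below is our own illustration, not the authors' implementation: an L-shaped grid of points stands in for a search shape, a fixation "reveals" all points within a fixed radius, and the number of revealed points stands in for P(correct). Ties between equally good fixations are broken by list order, which is what traps the myopic observer here.

```python
from itertools import product

# L-shaped "search shape": a horizontal arm and a vertical arm of grid points.
SHAPE = [(x, 0) for x in range(5)] + [(0, y) for y in range(1, 5)]
RADIUS = 2.0  # a fixation reveals all shape points within this distance

def covered(fixations):
    return {p for p in SHAPE
            if any((p[0] - f[0]) ** 2 + (p[1] - f[1]) ** 2 <= RADIUS ** 2
                   for f in fixations)}

def myopic_two():
    """Eq. (2): pick each fixation greedily, one saccade at a time."""
    f1 = max(SHAPE, key=lambda f: len(covered([f])))
    f2 = max(SHAPE, key=lambda f: len(covered([f1, f])))
    return f1, f2

def planning_two():
    """Eq. (3): jointly pick the pair of fixations with maximal coverage."""
    return max(product(SHAPE, repeat=2), key=lambda pair: len(covered(pair)))

greedy_coverage = covered(myopic_two())
planned_coverage = covered(planning_two())
```

In this construction the myopic observer first commits to the corner (its coverage of 5 points ties with other locations and wins on order) and can subsequently reach only 7 of the 9 points, whereas jointly selecting the two fixations covers the full shape.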
### Behavioral and model results
The computational models were utilized to automatically generate a variety of shapes such that four stimuli could be selected to maximize the discriminative power of the subsequent experiments. As shown in Fig. 2b, two of the four shapes were predicted by our models to elicit indistinguishable first fixation locations whereas two other shapes were predicted to result in different first fixation locations for the myopic and planning observers (see Methods). The mean fixation location for each participant separately for all shapes and conditions is shown in the left panel of Fig. 3a. To test whether eye movements were planned, we compared the first fixation location in the short condition to the first fixation location in the long condition for all shapes in accordance with hypothesis H1. If subjects were capable of planning, we expected a difference in the first fixation location for Shapes S3 and S4 (H3). We used Hotelling's T-test to compare the bivariate landing positions of the first saccade between the two search intervals (Supplementary Table 1). Indeed, mean target locations for the first saccade were different in Shapes S3 and S4. No significant differences, however, were found in Shapes S1 and S2. These results are in agreement with our computational models of the myopic and planning observers.
Visual inspection suggests that the behavioral data are more closely resembled by the predictions of the planning observer (H2). Indeed, only the planning observer but not the myopic observer predicted a difference in fixation locations between the short and long search interval conditions. Furthermore, the direction of the spatial difference of the first fixation location between the search interval conditions followed the course predicted by our planning observer (Fig. 3b). Because the magnitude of the human spatial difference of the first fixation location was slightly smaller than the magnitude predicted by the planning observer, we extended both the myopic observer and the planning observer based on known facts about the visual system. The additional modeling components yielded progressively more realistic models of human visual search behavior by incorporating biological constraints, leading to a bounded actor (see Methods). Specifically, we included additive costs for longer saccade amplitudes (as they lead to longer scanpath durations24 and higher endpoint variability25, which humans have been shown to minimize26), used foveated versions of the shapes to account for the decline of visual acuity in peripheral vision27, and accounted for the often reported fact that human saccades undershoot their target28,29.
To obtain a quantitative evaluation of the computational models, we employed model selection using the Bayesian information criterion (BIC). The two free parameters in the models, i.e. the magnitude of additive costs for saccade length and the magnitude of the undershoot, were estimated using Maximum Likelihood with bivariate Gaussian error terms on subjects’ empirical data. We also estimated the covariance matrices for the models’ predictions and the behavioral gaze data to compute the BIC for each model. Figure 3c shows the difference in BIC of all models compared to the best model. The lower bound was derived by computing the mean fixation locations directly from the data for each of the four shapes as well as for each of the three fixation locations. The difference in BIC values between two models is an approximation of the log-Bayes factor, and a difference ΔBIC > 4.6 is considered decisive30. Results clearly favor the planning observer over the myopic observer (ΔBIC = 139). Crucially, the planning observer without any parameter fitting still provided a better description of our human data than the myopic observer with all extensions (ΔBIC = 59). Further, costs for saccade amplitudes and foveation did not only improve our model fit for the planning observer but were also favored by model selection, suggesting that they are needed for better describing the eye movement data in our experiment. For the saccadic undershoot, model comparison was less decisive but still in favor of the full model (ΔBIC = 3 between the planning observer with all extensions and the planning observer without undershoot). We also applied the MAP searcher (see Supplementary Fig. 5) but its predictions deviated severely from the data we observed.
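The BIC-based comparison can be sketched in a few lines; the log-likelihoods and parameter counts below are illustrative only, not the study's fitted values:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

def decisive(bic_a, bic_b, threshold=4.6):
    """A BIC difference above ~4.6 approximates a decisive log-Bayes factor."""
    return abs(bic_a - bic_b) > threshold

# e.g. a 2-parameter and a 4-parameter model fitted to the same 100 observations
delta = bic(-120.0, 2, 100) - bic(-100.0, 4, 100)
```

Note that the extra-parameter penalty `n_params * log(n_obs)` is what lets the criterion favor a simpler model unless the richer model improves the likelihood enough.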
Parameter estimates for the saccadic undershoot were similar for the myopic observer (2.9%) and the planning observer (3.2%). The influence of the costs for longer saccades was higher for the myopic observer (0.69 DP/Deg) compared to the planning observer (0.34 DP/Deg). The unit of the costs is detection performance (DP in %) per degree (Deg) and states how much performance subjects were willing to give up to shorten saccade amplitudes by one visual degree. It is important to note that both factors, costs and saccadic undershoot, represent distinct computational concepts. The influence of the costs does not depend on the amplitude of the saccade directly, but on the reward structure of different potential landing locations. Hence, for two different shapes the same costs can have very different effects on where to target gaze. On the other hand, the undershoot is relative to and only depends on the amplitude of the saccade and does not depend on the reward structure and therefore the shape. We also estimated the radius of the circular gaze contingent search shape centered at the current fixation. Parameter estimation yielded values very close to the true radius and did not improve model quality for either the planning observer or the myopic observer.
## Discussion
Considerable previous research has investigated perceptual determinants of human gaze targets, but much less is known about how the visual system uses perceptual beliefs to select gaze targets in sequential behavior. Thus, it has been unclear whether sequences of human eye movements are planned ahead. Prior studies indicated that multiple saccadic targets are jointly prepared as a scanpath and that cueing new targets during execution of eye movements results in longer execution times21,22,23. However, it has remained open whether eye movements are chosen by considering more than a single gaze shift ahead into the future. Instead, paradigms modeling human eye movements as sequential greedy decisions10,12,18,19, including the MAP searcher13 and the ideal searcher8, have been the predominant approach. Computationally, if a task requires multiple gaze shifts in sequence, the normative, i.e. optimal, solution in general involves planning the sequence of gaze shifts jointly.
The present study investigated whether human gaze shifts are well described by a greedy selection of targets or whether they are better described by a probabilistic planning strategy. Therefore, we contrasted a myopic observer with a planning observer that was formalized within the framework of Markov Decision Processes15,16 with partially observable states17. We derived policies for myopic observers including the ‘maximum-a-posteriori searcher’13 and the ideal-observer based searcher8, which only consider the immediate reward for action selection, and we also derived the policy for the planning observer, which additionally considers future rewards. Next, we determined the specific circumstances under which the models produce different gaze sequences. Ultimately, we used these insights to automatically generate stimuli that maximized the behavioral differences elicited by the different gaze strategies, as well as stimuli for which the strategies produce very similar gaze sequences. Thus, the resulting stimuli were highly suitable for examining which gaze strategy was adopted by our subjects.
We developed a visual search task where we expected different behavioral sequences depending on the gaze strategy of our subjects. In particular, we investigated whether subjects adjust their scanpath during visual search dependent on the duration of the search interval. Therefore, we controlled the length of the saccadic sequence. The short search interval allowed subjects to execute a single saccade, while in the long search interval subjects were able to fixate two locations. The gaze contingent paradigm allowed efficient computation of all strategies including the planning strategy of gaze targets. Moreover, the gaze contingent paradigm with a search interval allowing for two saccades makes it possible to compare spatial gaze targets, as the well-known interindividual and intraindividual variability of gaze targets for longer gaze sequences would render such comparisons computationally very difficult.
Our results suggest that eye movements are indeed planned according to probabilistic planning. Subjects’ scanpaths were very well predicted by the planning observer while showing severe deviations from the scanpath proposed by the myopic observer. We found fixation locations to be different depending on the duration of the search interval. This difference is only expected under the planning observer and cannot be explained by the myopic observer. Finally, model comparison favored the planning observer and its extensions over the myopic observer by a large margin. Furthermore, extending our planning observer model with action costs, we found evidence that subjects traded off task performance and saccade amplitude. Including additive costs for large-amplitude saccades into the planning observer and accounting for saccadic undershoot and foveation yielded the best account of our data.
A possible limitation of the current experiments lies in the specific use of the gaze contingent experimental paradigm. Considerable previous research has utilized gaze contingent setups31 and some of these investigations have quantified its influence on performance in visual search for targets with low visibility32. In the present experiments, the target was only detectable within a circular area with a diameter of 13° of visual angle around the fixation point. While peripheral processing was not impaired, as the contour of the shape was always visible, the visibility of the target was controlled by the gaze contingent design. This is different from naturalistic search tasks. While this may not affect the target of the first fixation, it is conceivable that this may have affected the selection of the second gaze target in idiosyncratic ways. However, the current quantitative analyses are all based on trials in which the target was not present. Thus, the peripheral visual information acquired during the first fixation would not have given indications of the target’s position, even with visibility extending beyond 13° of visual angle, because of the high visibility of targets used in our search task. Nevertheless, future work needs to address whether the results reported here extend to experimental paradigms with full peripheral visibility.
The current experiments also do not speak to the applicability of the probabilistic planning model to other search tasks or more naturalistic visual and visuomotor tasks3,33. Computationally, probabilistic planning is the optimal solution to control tasks with uncertainty in general15,16 and evidence for human motor behavior being explainable in these terms has been provided in the past17,26,34. But it is currently unclear whether the planning used in the current study can also explain sequential behavior in other visual and visuomotor tasks. Potentially, subjects may have adopted a gaze target strategy which was particularly elicited by the current experimental setup. Note, however, that subjects readily adopted the reported strategy without extensive practice. Only about a minute of familiarization with the gaze contingent setup was sufficient for subjects to target gaze at the locations predicted by the planning observer model. Overall, it is an empirical question for future work whether the probabilistic planning model reported here is able to successfully account for human gaze behavior in other visuomotor tasks.
A further limitation of the current study is that it does not disambiguate between open-loop and closed-loop planning15. The distinction between these two types of planning lies in the way future observations are utilized within the planning process. While open-loop algorithms plan a sequence of actions but disregard the outcome of future observations, closed-loop algorithms are much more sophisticated by taking all possible future observations after each action in the entire sequence into account within the planning process. As such, closed-loop planning is even more demanding computationally than open-loop planning. The current experiments cannot disambiguate whether human behavior is better explained by either of these two planning algorithms, because for the second fixation in the long search interval condition the belief after the first fixation only depends on whether the target was found. Thus, subjects terminating the search in the long search interval condition after finding the target after the first fixation may be the only support for closed-loop control in our experiments. Note that both these planning algorithms are very different from myopic action selection and the MAP searcher. Future work will need to address, which of these two types of planning better describes human gaze selection.
Finding and executing near optimal gaze sequences is crucial for many extended sequential everyday tasks3,33. The capability of humans to plan behavioral sequences gives further insights into why we can solve so many tasks with ease which are extremely difficult from a computational perspective. In many visuomotor tasks coordinated action sequences are needed rather than single isolated actions35. This leads to delayed rewards, and thus a complex policy is required rather than an action that directly maximizes the performance after the next single gaze shift. Additionally, our findings have implications for future models of human eye movements. While numerous influential past models have not taken planning into consideration8,10,11,14,18, our results indicate that in the case of visual search humans are capable of including future states into the selection of a suitable scanpath. Thus, perception and action are not repeatedly carried out sequentially but intertwined through planning.
Nevertheless, our results also open up the possibility to reevaluate previous studies, which have interpreted deviations from an ideal observer based search strategy as evidence for a suboptimal strategy36,37,38,39. The current study points towards a potential explanation of these results, as subjects may have carried out a planning strategy, which can differ from a myopic ideal observer based strategy. Given that for some stimuli in our search task the gaze sequences for the two strategies differ, future work needs to carefully reevaluate myopic and planning strategies for those tasks and stimuli in which suboptimality was established with respect to a myopic ideal observer model. The current results furthermore suggest that this reevaluation may need to be extended to previous studies which have interpreted behavioral results as support for a myopic strategy8,10,11,12,13,14,18,19, as for some tasks and stimuli the strategies lead to indistinguishable gaze targets.
The broader significance of the present results beyond the understanding of eye movements lies in the fact that human behavior in our experiment was best described by a computational model that implements probabilistic planning under perceptual uncertainty and accounts for multiple costs. In this framework, sensory measurements and goal directed actions are inseparably intertwined40,41. So far, the predominant approach to probabilistic models in perception has been the ideal observer42,43, which can be formalized in the Bayesian framework44,45 as inferring latent causes in the environment giving rise to sensory observations. Models of eye movement selection have so far used ideal observers8,10,11 without planning. Probabilistic, Bayesian formulations of optimality in perceptual tasks46,47, cognitive tasks48,49, reasoning50, motor control34, learning51, and planning52 have led to a better understanding of human behavior and the quest to unravel how the brain could implement these computations53,54,55, which are known in general to be intractable56. Our result extends the current understanding by demonstrating that planning under perceptual uncertainty is also part of the repertoire of human visual behaviors, and this opens up the possibility to understand recent neurophysiological results57 within the framework of planning under uncertainty.
In the current work, we applied the computational concept of planning drawn from the field of AI to the literature of empirical eye movement studies. In particular, we connected the experimental paradigm of visual search to a solid mathematical foundation and for the first time systematically studied the very general connection between delayed rewards, horizon, and action selection in human eye movements. Overall, we laid out the groundwork that reveals several clear implications for future studies: (1) to investigate eye movement planning in different tasks, (2) to study the extent of the human planning capabilities, and (3) to revisit classic influential models, as they may work only in a subset of situations.
## Methods
### Participants
Overall, 16 subjects (6 female) participated in the experiment. The subjects’ age ranged from 18 to 30 years (M = 21.8, SD = 3.1). Participants either received monetary compensation or course credit for participation. All subjects had normal or corrected to normal vision (four wore contact lenses). One subject reported having dyschromatopsia, which had no influence on the experiment. Sufficient eye tracking quality was ensured for all data entering the analysis. In each trial a single fixation location (short search interval) or a sequence of two fixation locations (long search interval) entered the analysis. Further, informed consent was obtained from all participants, all experimental procedures were approved by the ethics committee of the Darmstadt University of Technology, and all methods employed were in accordance with the guidelines provided by the German Psychological Association (DGPs).
### Materials
The derived and implemented computational models enabled us to specifically select shapes that facilitate testing our hypotheses. In particular, stimuli were identified that triggered different policies for the myopic observer and planning observer. First, multiple candidate shapes were generated automatically using the following approach: Five points were drawn uniformly in a bounded area (23.24° × 23.24° of visual angle). Next, a B-spline was fitted to the random points. Finally, the shapes bounded by the splines using the fitted parameters were filled with a white noise texture. Both models were applied to the resulting shapes to identify those shapes that lead to maximally different or most similar policies. Overall, four different shapes were used in the experiment (see Fig. 2b). Two shapes were chosen where optimal behavior requires planning (S3 and S4) and two where planning is not necessary (S1 and S2), i.e. where the sequences of eye movements of the myopic observer and the planning observer coincide. For both categories we selected two shapes by visual inspection, ensuring that they were similar with respect to the area covered. For display during the experiment the shapes were upscaled by a factor of 1.5 and centered on the monitor such that the center of the shape’s bounding box matched the center of the screen.
### Foveated versions of the stimuli
In order to account for the decline of visual acuity that affects the visibility of the shape boundaries, we created foveated versions of the experimental stimuli (see Supplementary Fig. 4). Foveated versions of the stimuli were created by using the approach described in ref.58. The contrast sensitivity function describing the decline of accuracy with increasing eccentricity is computed as
$$CS(f,e)=\frac{1}{CT(f,e)},$$
(4)
which can be used to assign a cut-off frequency at each eccentricity. The contrast threshold is computed as:
$$CT(f,e)=C{T}_{0}\,\exp (\alpha f\frac{e+{e}_{2}}{{e}_{2}}).$$
(5)
where f is the spatial frequency, e is the retinal eccentricity, and α, CT0, e2 are empirical values set to 0.106, 1/75, and 2.3, respectively. These values have been shown to provide a good fit to empirical data (see ref.27). We only needed to account for peripheral vision at the stage of deciding where to direct the next eye movement. The decline of visual acuity leads to deteriorated perception of the outline of the shape; hence, our model needs to incorporate the foveated shapes. For the first fixation, we used the initial location at the beginning of the trial, which was the same for all trials. In the two-saccade condition we used the empirical mean landing location of our participants to compute the foveated shape prior to the second saccade. While this is an approximation, as landing locations showed variation, we did so to reduce the computational burden of the numerical optimization.
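With the stated empirical constants, Equations 4 and 5 can be evaluated directly. The following sketch also derives a cut-off frequency at each eccentricity, assuming a maximum displayable contrast of 1 (the `max_contrast` parameter is our assumption for illustration):

```python
import math

CT0, ALPHA, E2 = 1 / 75, 0.106, 2.3  # empirical constants from Eq. 5

def contrast_threshold(f, e):
    """CT(f, e): threshold contrast at spatial frequency f and eccentricity e (Eq. 5)."""
    return CT0 * math.exp(ALPHA * f * (e + E2) / E2)

def contrast_sensitivity(f, e):
    """CS(f, e) = 1 / CT(f, e) (Eq. 4)."""
    return 1.0 / contrast_threshold(f, e)

def cutoff_frequency(e, max_contrast=1.0):
    """Frequency at which CT reaches the maximum displayable contrast,
    i.e. the cut-off used to low-pass filter content at eccentricity e."""
    return math.log(max_contrast / CT0) / (ALPHA * (e + E2) / E2)
```

As expected, sensitivity falls with eccentricity, so the cut-off frequency shrinks away from the current fixation, which is what produces the progressively blurred periphery in the foveated stimuli.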
The target was a circular grating stimulus (0.87° of visual angle in diameter). Background was Gaussian white noise. Contrast was set in a way that the target was easily detected if it was within the visible search radius of the current fixation (detection proportion: 98.2%). The target’s position was generated by randomly choosing a location within the shape.
### Probability of finding the target
Next, we derive the probability of a correct detection given a sequence of fixation locations since both proposed policies depend on the performance in the task, i.e., the detection probability. The probability of correctly judging the presence of a target is proportional to the area covered by the search. This can be computed as:
$$P({\rm{correct}}|{x}_{n},{y}_{n})\propto \sum _{x}\sum _{y}\,{P}_{T}(x,y){P}_{O}(x,y|{x}_{n},{y}_{n})$$
(6)
where PT(x, y) is the probability that the target is located at (x, y) and PO(x, y|xn, yn) is the probability that the location (x, y) is covered by the search given that the saccade was targeted at (xn, yn). The former is 1/N if (x, y) lies within the shape and zero otherwise, where N is the number of possible target locations. The latter depends on the distance between the saccadic target (xn, yn) and the target location (x, y). Therefore:
$${P}_{O}(x,y|{x}_{n},{y}_{n})=\begin{cases}1 & {\rm{if}}\,\parallel {[{x}_{n}-x,{y}_{n}-y]}^{T}\parallel < {\rm{threshold}}\\ 0 & {\rm{else}}\end{cases}$$
(7)
where the threshold is equal to the radius of the search area (6.5° of visual angle).
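For a discretized shape, Equations 6 and 7 reduce to counting how many equally likely candidate target locations fall within the search radius of a fixation. A minimal sketch (the locations are hypothetical coordinates in degrees):

```python
def detection_probability(fixation, target_locations, radius=6.5):
    """P(correct | fixation): fraction of equally likely target locations
    within `radius` degrees of the fixated point (Eqs. 6 and 7)."""
    fx, fy = fixation
    covered = sum(
        1 for (x, y) in target_locations
        if ((fx - x) ** 2 + (fy - y) ** 2) ** 0.5 < radius
    )
    return covered / len(target_locations)
```

In the full models this quantity serves as the reward for a candidate fixation, so maximizing it corresponds to covering as much of the shape's probability mass as possible.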
### Perception
Visual perception can be described as inference of latent causes based on sensory signals44,45. Bayesian inference provides the mathematical tools to use sensory data D to infer unknown properties of the state s of the environment. For example, s could be indicating whether there is a predator hiding behind a bush, and by directing gaze to the bush visual data D about the latent variable describing the true state s of the environment is obtained. This information can be incorporated into what is known about s using Bayes’ theorem $$P(s|D)=P(D|s)P(s)/P(D)$$. Hence, this mechanism combines prior knowledge P(s) and sensory information P(D|s) to form an updated posterior belief about environmental states relevant to the specific task.
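For a discrete set of candidate states, this posterior update is a one-liner. A minimal sketch with hypothetical numbers:

```python
def bayes_update(prior, likelihood):
    """Posterior P(s|D) proportional to P(D|s) * P(s) over a discrete state space."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)  # P(D), the normalizing constant
    return [u / z for u in unnorm]

# e.g. three candidate states with a flat prior; the data favor state 1
posterior = bayes_update([1 / 3, 1 / 3, 1 / 3], [0.1, 0.8, 0.1])
```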
### Action
However, performing sensory inference by itself does not prescribe an action, i.e. information about s in the end needs to be used to decide for an appropriate action, e.g. whether to flee. The costs and benefits for the potential outcomes of the action can be very different, e.g., not to flee if a predator is present is more costly than an unnecessary flight. They can be captured computationally by a reward function R(s, a) assigning a value to each state action pair. In the past, different approaches have been proposed to choose an action given the current belief b(s) = P(s|D) drawn from perception and the reward function R(s, a).
The MAP model only takes into account the maximum of the posterior for action selection. This corresponds to taking the action
$${\pi }_{{\rm{MAP}}}=\arg \mathop{\max }\limits_{a}\,R(a,s=\arg \mathop{\max }\limits_{s}\,P(s|D))$$
(8)
that gives maximum reward given that the true state is the maximum of the posterior. For visual search arg maxs P(s|D) corresponds to the most likely location of the target and therefore the reward is maximal if the eye movement a is targeted towards this location.
The ideal observer model has been used successfully to understand how humans choose locations for the next saccade. Specifically, human eye movements use the current posterior and target the location where they expect uncertainty about task relevant variables to be reduced most after having acquired new data from that location in situations such as visual search8, face recognition10, and temporal event detection11. Hence, different potential outcomes of s are weighted with the reward function R(s, a) to determine the action with highest expected reward:
$${\pi }_{{\rm{ideal}}\,{\rm{observer}}}=\arg \mathop{\max }\limits_{a}\,{\mathbb{E}}[{r}_{0}]=\arg \mathop{\max }\limits_{a}{\int }_{s}R(a,s)P(s|D)\,ds.$$
(9)
Thus, it may be better to flee, even when one is not absolutely certain that a predator is hiding behind a bush, because the consequences may be particularly harmful. Interestingly, within this framework, the optimal action targets the location where the next fixation will reduce uncertainty the most and not the location that currently looks like the most probable target location. Indeed, both explicit monetary rewards19 and implicit behavioral costs11 in experimental settings have been shown to influence eye movement choices.
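The difference between Equations 8 and 9 is easiest to see with a bimodal posterior: the MAP rule commits to the single most probable state, whereas the ideal observer can prefer an action that covers several moderately probable states. A sketch with hypothetical numbers (a one-dimensional target position and a "close enough" detection reward):

```python
def pi_map(posterior, actions, reward):
    """Eq. 8: act as if the MAP state were the true state."""
    s_map = max(posterior, key=posterior.get)
    return max(actions, key=lambda a: reward(a, s_map))

def pi_ideal_observer(posterior, actions, reward):
    """Eq. 9: maximize the reward expected under the full posterior."""
    return max(actions, key=lambda a: sum(p * reward(a, s)
                                          for s, p in posterior.items()))

# a bimodal belief over candidate target positions on a line
posterior = {1: 0.4, 5: 0.3, 7: 0.3}
actions = range(9)
reward = lambda a, s: 1.0 if abs(a - s) <= 1 else 0.0  # detected if close enough
```

Here the MAP policy fixates near the single most likely position (state 1, mass 0.4), while the ideal observer fixates at 6, covering both states 5 and 7 for a total mass of 0.6.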
The ideal planner model extends the ideal observer model to action sequences. While ideal observers based on Bayesian decision theory constitute the optimal solution to selecting a single action, repeatedly taking the action with the maximum immediate reward may fail in tasks with longer action sequences and delayed rewards depending on the specific task structure. In these cases, a planning observer based on the more powerful framework of belief MDPs, which contains the ideal observer as special case, is needed to find the optimal strategy. A Markov Decision Process (MDP)16,59 is a tuple (S, A, T, R, γ), where S is a set of states, A is a set of actions, $$T=P(s^{\prime} |s,a)$$ contains the probabilities of transitioning from one state to another, R represents the reward, and finally, γ denotes the discount factor. In a belief MDP only partial information about the current state s is available, therefore a probability distribution over states is kept as a belief state $$b(s)=P(s|D)$$17. The expected reward associated with performing action a in a belief state b(s) is denoted by the action-value function $$Q(b(s),a)$$. How should the actor decide where to look next according to this framework? A policy π is a sequence of actions and the optimal policy π* comprises actions that maximize the expected reward
$$\begin{array}{rcl}{\pi }_{{\rm{ideal}}\,{\rm{planner}}} & = & \arg \mathop{\max }\limits_{a}\,{\mathbb{E}}[{r}_{0}+\gamma {r}_{1}+\ldots +{\gamma }^{n}{r}_{n}]=\arg \mathop{\max }\limits_{a}\,Q(b(s),a)\\ & = & \arg \mathop{\max }\limits_{a}{\int }_{b(s^{\prime} )}P(b(s^{\prime} )|b(s),a)[R(b(s^{\prime} ))+\gamma {V}^{\ast }(b(s^{\prime} ))]\,db(s^{\prime} ).\end{array}$$
(10)
where V* (b(s′)) is the expected future reward gained from the next belief state b(s′). In tasks comprising sequences of actions, the optimal strategy, the planning observer, incorporates rewards associated with future actions (V* (b(s′))) into action selection. Essentially, what this means is that the value of an action based on the current belief is a combination of the immediate reward and the long term expected reward, weighted by how likely the next belief is under the action. Thus, as the belief about the state of task relevant quantities depends on uncertain observations, actions are influenced both by obtaining rewards and obtaining more evidence about the state of the environment.
### Action selection in visual search
To apply the different models to our visual search task we first need to specify the relevant quantities describing the task, i.e. the state representation and the reward function. In our visual search task (Fig. 1), a suitable candidate for a state representation is the target location and the current location of gaze. However, in general, the exact location of the target is unknown. Instead, we formalize the probability distribution over potential target location as a belief state that can be inferred from observations. The action space comprises potential fixation locations and with each action we receive information about the target, update our belief and transition to the next belief state. The reward function is an intuitive mapping between the belief state, which comprises the knowledge about the location of a potential target, and the probability of finding the target.
For a sequence comprising two actions (a0, a1), the myopic observer (horizon = 1, repeated application of the ideal observer) selects the action with the maximum expected reward in each step
$${\pi }_{{\rm{myopic}}}=(\mathop{{\rm{argmax}}}\limits_{{a}_{0}}\,{\mathbb{E}}[{r}_{0}],\mathop{{\rm{argmax}}}\limits_{{a}_{1}}\,{\mathbb{E}}[{r}_{1}]),$$
(11)
which corresponds to using Equation 9 at each state of the sequence. The planning observer (horizon = 2) considers the total sum of rewards
$${\pi }_{{\rm{planned}}}^{\ast }=(\mathop{{\rm{argmax}}}\limits_{{a}_{0}}\,{\mathbb{E}}[{r}_{0}+{r}_{1}],\mathop{{\rm{argmax}}}\limits_{{a}_{1}}\,{\mathbb{E}}[{r}_{0}+{r}_{1}]),$$
(12)
which corresponds to using Equation 10 at each state of the sequence. Whether $${\pi }_{{\rm{myopic}}}$$ and $${\pi }_{{\rm{planned}}}^{\ast }$$ lead to the same action sequence depends on the specific nature of the task. However, in general:
$${\pi }_{{\rm{planned}}}^{\ast }\ne {\pi }_{{\rm{myopic}}}$$
(13)
as can be seen in Fig. 2. Ideal-observer approaches only lead to optimal actions if future rewards do not play a role, for example, if only a single action is concerned. It is apparent that for a single action the myopic observer and the planning observer lead to the same action as Equation 10 simplifies to
$$Q(b(s),a)={\int }_{b(s^{\prime} )}P(b(s^{\prime} )|b(s),a)R(b(s^{\prime} ),a)db(s^{\prime} )$$
(14)
where $$P(b(s^{\prime} )|b(s),a)$$ is the posterior over relevant quantities in the task and $$R(b(s^{\prime} ),a)$$ is the reward function.
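The divergence between Equations 11 and 12 can be illustrated with a toy search problem in which each candidate fixation covers a subset of equally likely target locations (the coverage sets are hypothetical, not derived from our stimuli):

```python
from itertools import combinations

# six equally likely target locations; each candidate fixation "covers" a subset
coverage = {'A': {0, 1, 2, 3}, 'B': {0, 1, 4}, 'C': {2, 3, 5}}
prior = 1 / 6

def reward(covered):
    # probability that the target lies within the covered set
    return len(covered) * prior

# myopic policy (Eq. 11): best single fixation, then best remainder
first = max(coverage, key=lambda a: reward(coverage[a]))
second = max(coverage, key=lambda a: reward(coverage[a] - coverage[first]))
myopic_total = reward(coverage[first] | coverage[second])

# planning policy (Eq. 12): jointly maximize the two-fixation coverage
pair = max(combinations(coverage, 2),
           key=lambda p: reward(coverage[p[0]] | coverage[p[1]]))
planned_total = reward(coverage[pair[0]] | coverage[pair[1]])
```

The greedy first choice 'A' maximizes the immediate reward but overlaps both alternatives, so the myopic sequence misses one location, while the planner selects the complementary pair 'B' and 'C' and covers everything.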
### Model fitting
To take into account known cognitive and biological constraints we incorporated several well known characteristics of the human visual system. We introduced costs on the saccade amplitude, thus favoring smaller eye movements, which humans have been shown to prefer60. As was shown by prior research, greater amplitudes lead to higher endpoint variability25 and longer saccade duration24. It has further been shown that humans attempt to minimize endpoint variability when executing eye movements26. Therefore, we hypothesized that subjects show a preference for smaller saccade amplitudes. Computationally, we obtain the total reward as a combination of performance and saccade amplitude
$${r}_{n}(\alpha )=P({\rm{correct}}|{x}_{n},{y}_{n})-\alpha {\rm{c}}({x}_{n},{y}_{n})$$
(15)
where c is a linear cost function returning the amplitude of the saccade. The parameter α determines how much detection probability a subject is willing to give up in order to decrease saccade amplitude11. It was estimated from the mean fixation locations of our participants using Maximum Likelihood.
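The trade-off in Equation 15 can be sketched as follows: once amplitude is penalized, a nearer fixation with slightly lower detection probability can outscore a farther, nominally better one. The candidate values are hypothetical, and α = 0.0069 (i.e. 0.69% detection performance per degree) is merely chosen near the range of the reported estimates:

```python
def total_reward(p_correct, amplitude_deg, alpha):
    """Eq. 15: detection probability minus linear amplitude costs."""
    return p_correct - alpha * amplitude_deg

# a far fixation with higher raw detection probability...
far = total_reward(0.90, 12.0, alpha=0.0069)
# ...loses to a near fixation once the amplitude cost is applied
near = total_reward(0.85, 3.0, alpha=0.0069)
```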
Next, the human visual system does not have access to visual content at all locations in the field of view with unlimited precision. We accounted for the decline of visual acuity at peripheral locations. Therefore, foveated versions of the shapes were generated using the known human contrast sensitivity function (see refs8,10,27, for example). For the first fixation foveation was computed using the initial fixation location of the trial. As it was not computationally tractable to compute foveated images corresponding to the exact location of the first landing position, we approximated it by using the mean fixation location of our subjects instead.
Finally, prior studies have shown that saccades undershoot target locations29. Initial landing positions fall short of the target, closer to the start location of the saccade, and the final target is reached with subsequent corrective saccades. However, in our experiment there is no visible fixation target, therefore corrective saccades might not be present. To account for this, we estimated the undershoot from our data.
### Preprocessing
First, fixations were extracted from the raw gaze signal using the software of the eye tracking device. Overall, 6400 trials (16 participants × 4 blocks × 100 trials per block) entered the preprocessing. 15 trials (0.23%) were dismissed because the subjects failed to target gaze towards the shape. In these trials, subjects triggered the beginning of the trial by crossing the boundary but did not engage in visual search. While search time was adjusted to enable subjects to perform a single saccade in the short condition and two saccades in the long condition, respectively, in 17% of the trials subjects failed to do so. Since we are only interested in comparing the difference between strategies consisting of one or two targeted locations, we only used the remaining 5288 trials. Next, we excluded trials where the target was present, regardless of whether it was found, leaving 2589 trials. Clearly, behavior after successfully finding the target is confounded and no longer provides valid information about the search strategy. Also, trials in which a target was shown but not found are biased, as they are more likely to occur in the context of inferior eye movement strategies.
Our analysis and our estimated model parameters rely on mean landing positions aggregated within subjects. Therefore, we need to make sure that the variation in landing positions arises from saccadic endpoint variability or from uncertainties the subject might have about the shape, but not from qualitatively different strategies. Shapes S1 and S2 consist of two separate parts; as a consequence, the reward distribution is no longer unimodal across potential gaze targets (see Supplementary Fig. 1a). Indeed, qualitatively different strategies in the short condition were found for these stimuli (see Supplementary Fig. 1b). Using mean gaze locations would therefore have led to misleading results, as it implicitly assumes unimodal variability in landing positions while the real data showed clear multi-modality. To further analyze the gaze targets of our participants, we first identified the strategy for each trial using a Gaussian mixture model. We only considered the most frequent strategy (see Supplementary Fig. 1c) for both shapes and discarded trials (10.6%) deviating from the chosen strategy. However, our findings do not depend on the particular choice of strategy, as shapes that revealed differences between the myopic observer and the planning observer (S3 and S4) did not elicit different strategies. The remaining 2313 trials were used for our analysis.
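The strategy-identification step described above could be sketched as follows. Note that this substitutes plain two-means clustering with a deterministic initialization for the Gaussian mixture model actually used in the paper, and all names (e.g. `landing_xy`) are illustrative:

```python
import numpy as np

def dominant_strategy_mask(landing_xy, n_iter=20):
    """Split per-trial landing positions into two clusters and keep only the
    trials in the larger cluster (the 'dominant strategy'). Plain two-means
    with a deterministic init stands in for the paper's Gaussian mixture
    model; landing_xy is a hypothetical (n_trials, 2) array of mean gaze
    locations per trial."""
    pts = np.asarray(landing_xy, dtype=float)
    centers = np.stack([pts.min(axis=0), pts.max(axis=0)])  # crude init
    for _ in range(n_iter):
        # assign each trial to its nearest cluster center
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = pts[labels == k].mean(axis=0)
    dominant = int(np.bincount(labels, minlength=2).argmax())
    return labels == dominant
```

Trials for which the mask is False would be discarded before aggregating mean landing positions.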
### Code Availability Statement
The code used in this study is available from the corresponding author on request.
## Data Availability Statement
The data that support the findings of this study are available from https://github.com/RothkopfLab/spatial_gaze_planning.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Land, M. F. & Nilsson, D.-E. Animal eyes (Oxford University Press, 2012).
2. Findlay, J. M. & Gilchrist, I. D. Active vision: The psychology of looking and seeing. 37 (Oxford University Press, 2003).
3. Hayhoe, M. & Ballard, D. Eye movements in natural behavior. Trends in Cognitive Sciences 9, 188–194, http://linkinghub.elsevier.com/retrieve/pii/S1364661305000598 (2005).
4. Itti, L. & Koch, C. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research 40, 1489–1506, http://www.sciencedirect.com/science/article/pii/S0042698999001637 (2000).
5. Itti, L. & Baldi, P. F. Bayesian surprise attracts human attention. In Advances in neural information processing systems, 547–554, http://papers.nips.cc/paper/2822-bayesian-surprise-attracts-human-attention.pdf (2006).
6. Renninger, L. W., Coughlan, J. M., Verghese, P. & Malik, J. An information maximization model of eye movements. In Advances in neural information processing systems, 1121–1128, http://papers.nips.cc/paper/2660-an-information-maximization-model-of-eye-movements.pdf (2005).
7. Renninger, L. W., Verghese, P. & Coughlan, J. Where to look next? Eye movements reduce local uncertainty. Journal of Vision 7, 6, https://doi.org/10.1167/7.3.6 (2007).
8. Najemnik, J. & Geisler, W. S. Optimal eye movement strategies in visual search. Nature 434, 387 (2005).
9. Torralba, A., Oliva, A., Castelhano, M. S. & Henderson, J. M. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review 113, 766 (2006).
10. Peterson, M. F. & Eckstein, M. P. Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences 109, E3314–E3323 (2012).
11. Hoppe, D. & Rothkopf, C. A. Learning rational temporal eye movement strategies. Proceedings of the National Academy of Sciences 113, 8332–8337, https://doi.org/10.1073/pnas.1601305113 (2016).
12. Yang, S. C.-H., Lengyel, M. & Wolpert, D. M. Active sensing in the categorization of visual patterns. Elife 5, e12215 (2016).
13. Najemnik, J. & Geisler, W. S. Eye movement statistics in humans are consistent with an optimal search strategy. Journal of Vision 8, 4–4 (2008).
14. Eckstein, M. P., Thomas, J. P., Palmer, J. & Shimozaki, S. S. A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays. Perception & Psychophysics 62, 425–451 (2000).
15. Russell, S. J., Norvig, P. & Davis, E. Artificial intelligence: a modern approach. Prentice Hall series in artificial intelligence, 3rd edn (Prentice Hall, Upper Saddle River, 2010).
16. Sutton, R. S. & Barto, A. G. Reinforcement learning: An introduction, vol. 1 (MIT Press, Cambridge, 1998).
17. Kaelbling, L. P., Littman, M. L. & Cassandra, A. R. Planning and acting in partially observable stochastic domains. Artificial Intelligence 101, 99–134 (1998).
18. Navalpakkam, V., Koch, C., Rangel, A. & Perona, P. Optimal reward harvesting in complex perceptual environments. Proceedings of the National Academy of Sciences 107, 5232–5237, https://doi.org/10.1073/pnas.0911972107 (2010).
19. Schutz, A. C., Trommershauser, J. & Gegenfurtner, K. R. Dynamic integration of information about salience and value for saccadic eye movements. Proceedings of the National Academy of Sciences 109, 7547–7552, https://doi.org/10.1073/pnas.1115638109 (2012).
20. Becker, W. & Jürgens, R. An analysis of the saccadic system by means of double step stimuli. Vision Research 19, 967–983 (1979).
21. Zingale, C. M. & Kowler, E. Planning sequences of saccades. Vision Research 27, 1327–1341 (1987).
22. Baldauf, D. & Deubel, H. Properties of attentional selection during the preparation of sequential saccades. Experimental Brain Research 184, 411–425 (2008).
23. De Vries, J. P., Hooge, I. T. & Verstraten, F. A. Saccades toward the target are planned as sequences rather than as single steps. Psychological Science 25, 215–223, https://doi.org/10.1177/0956797613497020 (2014).
24. Baloh, R. W., Sills, A. W., Kumley, W. E. & Honrubia, V. Quantitative measurement of saccade amplitude, duration, and velocity. Neurology 25, 1065–1065, http://www.neurology.org/content/25/11/1065.short (1975).
25. van Beers, R. J. The sources of variability in saccadic eye movements. Journal of Neuroscience 27, 8757–8770, https://doi.org/10.1523/JNEUROSCI.2311-07.2007 (2007).
26. Harris, C. M. & Wolpert, D. M. Signal-dependent noise determines motor planning. Nature 394, 780, http://search.proquest.com/openview/1e30f492c643b4e7da7d892f942c31f2/1?pq-origsite=gscholarcbl=40569 (1998).
27. Geisler, W. S. & Perry, J. S. Real-time foveated multiresolution system for low-bandwidth video communication. Human Vision and Electronic Imaging 3299, 294–305 (1998).
28. Harris, C. M. Does saccadic undershoot minimize saccadic flight-time? A Monte-Carlo study. Vision Research 35, 691–701 (1995).
29. Gillen, C., Weiler, J. & Heath, M. Stimulus-driven saccades are characterized by an invariant undershooting bias: no evidence for a range effect. Experimental Brain Research 230, 165–174 (2013).
30. Kass, R. E. & Raftery, A. E. Bayes factors. Journal of the American Statistical Association 90, 773–795 (1995).
31. Duchowski, A. T., Cournia, N. & Murphy, H. Gaze-contingent displays: A review. CyberPsychology & Behavior 7, 621–634 (2004).
32. Geisler, W. S., Perry, J. S. & Najemnik, J. Visual search: The role of peripheral information measured using gaze-contingent displays. Journal of Vision 6, 1–1 (2006).
33. Land, M. F. & Hayhoe, M. In what ways do eye movements contribute to everyday activities? Vision Research 41, 3559–3565 (2001).
34. Todorov, E. & Jordan, M. I. Optimal feedback control as a theory of motor coordination. Nature Neuroscience 5, 1226–1235 (2002).
35. Hayhoe, M. M. Vision and action. Annual Review of Vision Science 3, 389–413, https://doi.org/10.1146/annurev-vision-102016-061437. PMID: 28715958 (2017).
36. Verghese, P. Active search for multiple targets is inefficient. Vision Research 74, 61–71, http://linkinghub.elsevier.com/retrieve/pii/S0042698912002581 (2012).
37. Morvan, C. & Maloney, L. T. Human visual search does not maximize the post-saccadic probability of identifying targets. PLoS Computational Biology 8, e1002342 (2012).
38. Ackermann, J. F. & Landy, M. S. Choice of saccade endpoint under risk. Journal of Vision 13, 27–27, https://doi.org/10.1167/13.3.27 (2013).
39. Paulun, V. C., Schütz, A. C., Michel, M. M., Geisler, W. S. & Gegenfurtner, K. R. Visual search under scotopic lighting conditions. Vision Research 113, 155–168 (2015).
40. Gottlieb, J. Attention, learning, and the value of information. Neuron 76, 281–295, http://linkinghub.elsevier.com/retrieve/pii/S0896627312008884 (2012).
41. Yang, S. C.-H., Wolpert, D. M. & Lengyel, M. Theoretical perspectives on active sensing. Current Opinion in Behavioral Sciences 11, 100–108 (2016).
42. Geisler, W. S. Ideal observer analysis. The Visual Neurosciences 10, 12–12, https://pdfs.semanticscholar.org/94ce/fe9e1a6d368e7d18bff474e254e14231977f.pdf (2003).
43. Geisler, W. S. Contributions of ideal observer theory to vision research. Vision Research 51, 771–781, http://linkinghub.elsevier.com/retrieve/pii/S0042698910004724 (2011).
44. Knill, D. C. & Richards, W. Perception as Bayesian inference (Cambridge University Press, 1996).
45. Kersten, D., Mamassian, P. & Yuille, A. Object perception as Bayesian inference. Annu. Rev. Psychol. 55, 271–304 (2004).
46. Ernst, M. O. & Banks, M. S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433 (2002).
47. Körding, K. P. & Wolpert, D. M. Bayesian integration in sensorimotor learning. Nature 427, 244–247 (2004).
48. Oaksford, M. & Chater, N. Bayesian rationality: The probabilistic approach to human reasoning (Oxford University Press, 2007).
49. Gershman, S. J., Horvitz, E. J. & Tenenbaum, J. B. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 349, 273–278 (2015).
50. Tenenbaum, J. B., Griffiths, T. L. & Kemp, C. Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences 10, 309–318 (2006).
51. Daw, N. D., Niv, Y. & Dayan, P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience 8, 1704–1711 (2005).
52. Huys, Q. J. et al. Interplay of approximate planning strategies. Proceedings of the National Academy of Sciences 112, 3098–3103 (2015).
53. Ma, W. J., Beck, J. M., Latham, P. E. & Pouget, A. Bayesian inference with probabilistic population codes. Nature Neuroscience 9, 1432–1438 (2006).
54. Fiser, J., Berkes, P., Orbán, G. & Lengyel, M. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences 14, 119–130 (2010).
55. Sanborn, A. N. & Chater, N. The sampling brain. Trends in Cognitive Sciences 21, 492–493 (2017).
56. Kwisthout, J. & Van Rooij, I. Bridging the gap between theory and practice of approximate Bayesian inference. Cognitive Systems Research 24, 2–8 (2013).
57. Foley, N. C., Kelly, S. P., Mhatre, H., Lopes, M. & Gottlieb, J. Parietal neurons encode expected gains in instrumental information. Proceedings of the National Academy of Sciences 114, E3315–E3323 (2017).
58. Wang, Z. & Bovik, A. C. Embedded foveation image coding. IEEE Transactions on Image Processing 10, 1397–1410 (2001).
59. Bellman, R. A Markovian decision process. Journal of Mathematics and Mechanics 679–684 (1957).
60. Araujo, C., Kowler, E. & Pavel, M. Eye movements during visual search: The costs of choosing the optimal path. Vision Research 41, 3613–3625, http://www.sciencedirect.com/science/article/pii/S0042698901001961 (2001).
## Acknowledgements
This research was supported by the Deutsche Forschungsgemeinschaft, DFG (grant RO 4337/3-1). We acknowledge support by the German Research Foundation and the Open Access Publishing Fund of Technische Universität Darmstadt.
## Author information
### Affiliations
1. David Hoppe & Constantin A. Rothkopf
2. Centre for Cognitive Science, Technical University Darmstadt, Darmstadt, Hesse, Germany
   • David Hoppe
   • Constantin A. Rothkopf
3. Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Hesse, Germany
   • Constantin A. Rothkopf
### Contributions
D.H. and C.A.R. designed the research; D.H. performed the research; D.H. and C.A.R. analyzed the data and wrote the paper.
### Competing Interests
The authors declare no competing interests.
### Corresponding author
Correspondence to Constantin A. Rothkopf. | 2019-02-19 00:26:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5265311598777771, "perplexity": 1881.5672670115875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489282.7/warc/CC-MAIN-20190219000551-20190219022551-00218.warc.gz"} |
https://www.mysciencework.com/publication/show/metastability-generalized-hopfield-model-finitely-many-patterns-d9d9be79 | Metastability in the generalized Hopfield model with finitely many patterns
Type: Preprint
Publication Date: Apr 26, 2009
Submission Date: Mar 17, 2009
arXiv ID: 0903.3050
Source: arXiv
License: Yellow
Abstract
This paper continues the study of metastable behaviour in disordered mean field models initiated in [2], [3]. We consider the generalized Hopfield model with finitely many independent patterns $\xi_1,...,\xi_p$ where the patterns have i.i.d. components and follow discrete distributions on $[-1,1]$. We show that metastable behaviour occurs and provide sharp asymptotics on metastable exit times and the corresponding capacities. We apply the potential theoretic approach developed by Bovier et al. in the space of appropriate order parameters and use an analysis of the discrete Laplacian to obtain lower bounds on capacities. Moreover, we include the possibility of multiple saddle points with the same value of the rate function and the case that the energy surface is degenerate around critical points.
https://gateoverflow.in/tag/isi2018-dcg | # Recent questions tagged isi2018-dcg
1
The digit in the unit place of the number $7^{78}$ is $1$ $3$ $7$ $9$
2
If $P$ is an integer from $1$ to $50$, what is the probability that $P(P+1)$ is divisible by $4$? $0.25$ $0.50$ $0.48$ none of these
3
If the co-efficient of $p^{th}, (p+1)^{th}$ and $(p+2)^{th}$ terms in the expansion of $(1+x)^n$ are in Arithmetic Progression (A.P.), then which one of the following is true? $n^2+4(4p+1)+4p^2-2=0$ $n^2+4(4p+1)+4p^2+2=0$ $(n-2p)^2=n+2$ $(n+2p)^2=n+2$
4
The number of terms with integral coefficients in the expansion of $\left(17^\frac{1}{3}+19^\frac{1}{2}x\right)^{600}$ is $99$ $100$ $101$ $102$
5
Let $A$ be the set of all prime numbers, $B$ be the set of all even prime numbers, and $C$ be the set of all odd prime numbers. Consider the following three statements in this regard: $A=B\cup C$. $B$ ... the above statements is true. Exactly one of the above statements is true. Exactly two of the above statements are true. All the above three statements are true.
6
A die is thrown thrice. If the first throw is a $4$ then the probability of getting $15$ as the sum of three throws is $\frac{1}{108}$ $\frac{1}{6}$ $\frac{1}{18}$ none of these
7
You are given three sets $A,B,C$ in such a way that the set $B \cap C$ consists of $8$ elements, the set $A\cap B$ consists of $7$ elements, and the set $C\cap A$ consists of $7$ elements. The minimum number of elements in the set $A\cup B\cup C$ is $8$ $14$ $15$ $22$
8
A Pizza Shop offers $6$ different toppings, and they do not take an order without any topping. I can afford to have one pizza with a maximum of $3$ toppings. In how many ways can I order my pizza? $20$ $35$ $41$ $21$
9
Let $f(x)=1+x+\dfrac{x^2}{2}+\dfrac{x^3}{3}...+\dfrac{x^{2018}}{2018}.$ Then $f’(1)$ is equal to $0$ $2017$ $2018$ $2019$
10
Let $f’(x)=4x^3-3x^2+2x+k,$ $f(0)=1$ and $f(1)=4.$ Then $f(x)$ is equal to $4x^4-3x^3+2x^2+x+1$ $x^4-x^3+x^2+2x+1$ $x^4-x^3+x^2+2(x+1)$ none of these
11
The sum of $99^{th}$ power of all the roots of $x^7-1=0$ is equal to $1$ $2$ $-1$ $0$
12
Let $A=\{10,11,12,13, \dots ,99\}$. How many pairs of numbers $x$ and $y$ are possible so that $x+y\geq 100$ and $x$ and $y$ belong to $A$? $2405$ $2455$ $1200$ $1230$
13
In a certain town, $20\%$ families own a car, $90\%$ own a phone, $5 \%$ own neither a car nor a phone and $30, 000$ families own both a car and a phone. Consider the following statements in this regard: $10 \%$ families own both a car and a phone. $95 \%$ families own either a ... (i) & (iii) are correct and (ii) is wrong. (ii) & (iii) are correct and (i) is wrong. (i), (ii) & (iii) are correct.
14
In a room there are $8$ men, numbered $1,2, \dots ,8$. These men have to be divided into $4$ teams in such a way that every team has exactly $2$ ... total number of such $4$-team combinations is $\frac{8!}{2^4}$ $\frac{8!}{2^4(4!)}$ $\frac{8!}{4!}$ $\frac{8!}{(4!)^2}$
15
The number of parallelograms that can be formed from a set of four parallel lines intersecting another set of three parallel lines is $6$ $9$ $12$ $18$
16
Let $A=\begin{pmatrix} 1 & 1 & 0\\ 0 & a & b\\1 & 0 & 1 \end{pmatrix}$. Then $A^{-1}$ does not exist if $(a,b)$ is equal to $(1,-1)$ $(1,0)$ $(-1,-1)$ $(0,1)$
17
The value of $^{13}C_{3} + ^{13}C_{5} + ^{13}C_{7} +\dots + ^{13}C_{13}$ is $4096$ $4083$ $2^{13}-1$ $2^{12}-1$
18
If $x+y=\pi,$ the expression $\cot \dfrac{x}{2}+\cot\dfrac{y}{2}$ can be written as $2 \: \text{cosec} \: x$ $\text{cosec} \: x + \text{cosec} \: y$ $2 \: \sin x$ $\sin x+\sin y$
19
The area of the region formed by line segments joining the points of intersection of the circle $x^2+y^2-10x-6y+9=0$ with the two axes in succession in a definite order (clockwise or anticlockwise) is $16$ $9$ $3$ $12$
20
The value of $\tan \left(\sin^{-1}\left(\frac{3}{5}\right)+\cot^{-1}\left(\frac{3}{2}\right)\right)$ is $\frac{1}{18}$ $\frac{11}{6}$ $\frac{13}{6}$ $\frac{17}{6}$
21
A box with a square base of length $x$ and height $y$ has an open top and its volume is $32$ cubic centimetres, as shown in the figure below. The values of $x$ and $y$ that minimize the surface area of the box are $x=4$ cm $\&$ $y=2$ cm $x=3$ cm $\&$ $y=\frac{32}{9}$ cm $x=2$ cm $\&$ $y=8$ cm none of these.
22
Let the sides opposite to the angles $A,B,C$ in a triangle $ABC$ be represented by $a,b,c$ respectively. If $(c+a+b)(a+b-c)=ab,$ then the angle $C$ is $\frac{\pi}{6}$ $\frac{\pi}{3}$ $\frac{\pi}{2}$ $\frac{2\pi}{3}$
23
Let $A$ be the point of intersection of the lines $3x-y=1$ and $y=1$. Let $B$ be the point of reflection of the point $A$ with respect to the $y$-axis. Then the equation of the straight line through $B$ that produces a right angled triangle $ABC$ with $\angle ABC=90^{\circ}$, and $C$ lies on the line $3x-y=1$, is $3x-3y=2$ $2x+3=0$ $3x+2=0$ $3y-2=0$
24
Let $[x]$ denote the largest integer less than or equal to $x.$ The number of points in the open interval $(1,3)$ in which the function $f(x)=a^{[x^2]},a\gt1$ is not differentiable, is $0$ $3$ $5$ $7$
25
There are three circles of equal diameter ($10$ units each) as shown in the figure below. The straight line $PQ$ passes through the centres of all the three circles. The straight line $PR$ is a tangent to the third circle at $C$ and cuts the second circle at the points $A$ and $B$ as shown in the figure.Then the length of the line segment $AB$ is $6$ units $7$ units $8$ units $9$ units
26
The area of the region bounded by the curves $y=\sqrt x,$ $2y+3=x$ and $x$-axis in the first quadrant is $9$ $\frac{27}{4}$ $36$ $18$
27
$\sum_{n=1}^{\infty}\frac{1}{n(n+1)}$ is $2$ $1$ $\infty$ not a convergent series
Let $f(x)=e^{-\big( \frac{1}{x^2-3x+2} \big) };x\in \mathbb{R} \: \: \& x \notin \{1,2\}$. Let $a=\underset{x \to 1^+}{\lim}f(x)$ and $b=\underset{x \to 1^-}{\lim} f(x)$. Then $a=\infty, \: b=0$ $a=0, \: b=\infty$ $a=0, \: b=0$ $a=\infty, \: b=\infty$
Let $f(x)=(x-1)(x-2)(x-3)g(x); \: x\in \mathbb{R}$ where $g$ is twice differentiable function. Then there exists $y\in(1,3)$ such that $f’’(y)=0.$ there exists $y\in(1,2)$ such that $f’’(y)=0.$ there exists $y\in(2,3)$ such that $f’’(y)=0.$ none of the above is true.
Let $0.01^x+0.25^x=0.7$ . Then $x\geq1$ $0\lt x\lt1$ $x\leq0$ no such real number $x$ is possible. | 2020-09-29 17:02:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6159459948539734, "perplexity": 108.91756156382014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202418.22/warc/CC-MAIN-20200929154729-20200929184729-00561.warc.gz"} |
https://vedicmathsbyppk.wordpress.com/2017/07/06/what-is-the-area-of-a-circle/ | ## What is the Area of a Circle?
A circle is a simple closed shape in Euclidean geometry.
It is the set of all points in a plane that are at a given distance from a given point, the center. It is the curve traced out by a point that moves so that its distance from a given point is constant. The distance between any of the points and the center is called the radius.
A circle is a plane figure bounded by one line, and such that all right lines drawn from a certain point within it to the bounding line, are equal. The bounding line is called its circumference and the point, its centre. — Euclid, Elements, Book I
The ratio of a circle’s circumference to its diameter is π (pi), approximately equal to 3.141592654. Thus the length of the circumference C is related to the radius r and diameter d by $\displaystyle C=\pi d=2\pi r$.
We were taught from our school days that the area of a circle is found by the formula $\displaystyle A=\pi {{r}^{2}}$.
Have you ever given your students, or yourself, a chance to see how this formula is derived?
I think most of the readers here are aware of the area of a parallelogram. If yes, then you can proceed further (else go back and read about the area of a parallelogram and come back).
Here, we go
Let’s do a small activity to find the area of the circle.
First, begin by drawing a conveniently sized circle on a piece of cardboard. Now, divide the circle into 16 equal arcs. This may be done by marking off consecutive arcs of 22.5° or alternatively by consecutively dividing the circle into two parts, then four parts, then bisecting each of these quarter arcs, and so on.
It would look like the image shown below:
Fig. Circle divided into 16 equal parts
At last, the 16 parts or sectors, shown above, are then cut apart and placed in the manner shown in the figure below.
Fig. Sectors (16 parts ) are arranged in the form of a parallelogram
From the above figure, the base length of the parallelogram is $\displaystyle Base=\frac{C}{2}$
where $\displaystyle C=2\pi r$ and r is the radius of the circle.
We know that the area of a parallelogram is equal to the product of its base and altitude (which here is r),
i.e. $\displaystyle Area=\left( \frac{C}{2} \right)\cdot r$
$\displaystyle =\left( \frac{2\pi r}{2} \right)\cdot r$
$\displaystyle =\pi {{r}^{2}}$
Hurray!! You derived the formula to find the area of a circle.
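We can even check this numerically. Treating each of the n sectors as a triangle with its apex at the centre, the total area tends to πr² as n grows (a quick Python sketch; the function name is ours):

```python
import math

def sector_triangle_area(r, n):
    """Total area of n identical isoceles triangles with apex angle 2*pi/n
    and two sides of length r -- i.e. the circle's sectors flattened into
    triangles. This tends to pi*r^2 as n grows."""
    return n * 0.5 * r * r * math.sin(2 * math.pi / n)

r = 5.0
for n in (16, 64, 1024):
    print(n, sector_triangle_area(r, n))
print("pi*r^2 =", math.pi * r * r)
```

With only the 16 sectors of our activity the approximation is already within a few percent of πr².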
“Geometry is that branch of mathematical science which is devoted to consideration of form and size, and may be said to be the best and surest guide to study of all sciences in which ideas of dimension or space are involved. Almost all the knowledge required by navigators, architects, surveyors, engineers, and opticians, in their respective occupations, is deduced from geometry and branches of mathematics. All works of art are constructed according to the rules which geometry involves; and we find the same laws observed in the works of nature. The study of mathematics, generally, is also of great importance in cultivating habits of exact reasoning; and in this respect it forms a useful auxiliary to logic” – Robert Chambers
http://mathhelpforum.com/calculus/71375-prove-g-1-1-a.html | # Math Help - prove g is 1-1
1. ## prove g is 1-1
g(x) = x/(1-|x|)
I began by assuming not
so y/(1-|y|) = x/(1-|x|)
but came unstuck when dealing with the modulus signs - do I need to deal with the case where x<0, y>0? If so, how do I do this?
thanks
2. You must have copied it incorrectly.
As written it is not one-to-one.
$g\left( 3 \right) = g\left( \frac{-3}{5} \right) = \frac{-3}{2}$
3. ahh sorry - its defined from -1 to 1
4. In that case they cannot have different signs.
Note $\left| x \right| < 1\; \Rightarrow \;1 - \left| x \right| > 0$.
So the numerators must have the same sign.
5. of course - I missed the fact it was between -1 and 1
I am also asked to find g((-1,1))
what does this mean? is it g(-1) and g(1)?
I also need inverse of g.. do I need to do this with separate cases for x>0, x<0?
many thanks
6. Originally Posted by James0502
I am also asked to find g((-1,1))
what does this mean? is it g(-1) and g(1)?
First, did you graph the function?
$g[(-1,1)]$ is simply the range of the function.
7. yes, I did.. so the range is infinity, since the graph goes from -inf to + inf? | 2014-09-17 23:56:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896642923355103, "perplexity": 1376.0786411711908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124771.92/warc/CC-MAIN-20140914011204-00142-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
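For the inverse asked about earlier in the thread: a single formula, $g^{-1}(y)=\frac{y}{1+|y|}$, handles both signs, because y and g(y) always share a sign. A quick numeric check (illustrative Python, not from the original thread):

```python
def g(x):
    return x / (1 - abs(x))      # defined on (-1, 1)

def g_inv(y):
    return y / (1 + abs(y))      # one formula covers both signs

# round-trip check on sample points across the domain
for x in (-0.9, -0.5, 0.0, 0.3, 0.99):
    assert abs(g_inv(g(x)) - x) < 1e-12
```

Since g_inv maps all of the reals back into (-1, 1), this also supports the claim that the range of g is all of (-inf, inf).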
https://inst.eecs.berkeley.edu/~cs194-26/fa22/upload/files/proj1/cs194-26-act/ | Images of the Russian Empire: Colorizing the Prokudin-Gorskii photo collection
UC Berkeley
# Overview
Sergei Mikhailovich Prokudin-Gorskii (1863-1944) traveled across the Russian Empire and took many color photographs. His collection was purchased by the Library of Congress. Each color image is preserved in the form of three monochrome glass negatives, each representing a color channel. Modern techniques enable us to restore the color from such a collection.
# Methods
Our method consists of two steps: image alignment and color restoration. In the image alignment phase, we try to align the three channels together. This is non-trivial since many images have a large offset between channels. We consider image alignment as finding the solution of the following optimization problem $$\mathrm{argmin}_{x,y} \; \mathcal{L}(P_{+x,+y},Q)$$ where $$P,Q$$ are images of dimension $$h \times w$$, and $$P_{+x,+y}$$ is the image $$P$$ shifted by the vector $$(x,y)$$. We consider two losses, the SSD loss and the NCC loss, defined by $$SSD(P,Q)= ||P - Q || ^2$$ $$NCC(P,Q)= - \frac{Cov(P,Q)}{\sqrt{Var(P) Var(Q)}}$$ A naive solution is to iterate through the space $$[-15,+15]^2$$; however, this approach fails for cases where the image has a larger offset in pixels due to its high resolution. To address this issue, we implement an image pyramid search where we downscale the image by a factor of two until we reach a resolution no greater than 128. We then perform an exhaustive search at each level. When the image at a particular level becomes larger than $$512 \times 512$$, we take a center crop for the loss calculation to speed up the computation.
In our implementation, we use the blue channel as the base image and align the red and green channels to it. We found that while the NCC loss tends to give better results, it is marginally slower at runtime.
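The pyramid search described above can be sketched in NumPy. This is an illustrative re-implementation, not the project's actual code; only the SSD loss is shown (NCC is analogous), and `np.roll` wraps at the border, which is acceptable for small shifts:

```python
import numpy as np

def ssd(p, q):
    """Sum of squared differences between two equally sized images."""
    return float(np.sum((p - q) ** 2))

def align(channel, base, radius=15):
    """Exhaustively search shifts in [-radius, radius]^2 minimising SSD
    against the base channel. Returns the best (dy, dx)."""
    best, best_shift = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            score = ssd(np.roll(channel, (dy, dx), axis=(0, 1)), base)
            if score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

def pyramid_align(channel, base, min_size=128):
    """Coarse-to-fine search: halve the image until the smaller side reaches
    min_size, align there exhaustively, then double the estimate and refine
    it locally at each finer level."""
    if min(base.shape) <= min_size:
        return align(channel, base)
    dy, dx = pyramid_align(channel[::2, ::2], base[::2, ::2], min_size)
    dy, dx = 2 * dy, 2 * dx
    rolled = np.roll(channel, (dy, dx), axis=(0, 1))
    rdy, rdx = align(rolled, base, radius=2)   # small local refinement
    return dy + rdy, dx + rdx
```

A full pipeline would call `pyramid_align` twice, once for the red channel and once for the green channel, against the blue base.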
# Bells & Whistles (Enhancements)
#### Conv Filters For Feature Extraction
While the baseline method already achieves satisfactory results, it fails when the color is so saturated that brightness varies greatly across channels (see emir.jpg below). To address this issue, we take the average of the absolute values of the gradients along the two directions and use this response map, in lieu of the raw RGB intensities, as the input to our loss function. In practice, we obtain the gradients by applying two 3x3 conv filters to the image. In the visualization below, we show that while the brightness can differ drastically between two channels, the response maps remain highly correlated. Hence, the optimal point on the loss surface better reflects the ground-truth offset.
(Figure: brightness and gradient response maps of the R and B channels.)
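The response map can be sketched in a few lines. This assumes NumPy and uses simple central-difference gradients; the project's actual 3x3 filters may differ:

```python
import numpy as np

def gradient_response(img):
    # Central differences along x and y (interior pixels only).
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    # Average of absolute gradients: insensitive to a constant
    # brightness offset between channels.
    return (np.abs(gx) + np.abs(gy)) / 2
```

Because a constant brightness shift cancels in the differences, two channels with very different intensities can still produce nearly identical response maps.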
#### Auto-Cropping
We implement auto-cropping to further refine the output image. We observe that the borders have low variance along one direction (i.e., they are uniform vertical and horizontal strips). Additionally, they cannot contain orthogonal information (vertical stripes cannot have horizontal edges). Based on these observations, we make the following assumptions:
1. 96% of the edges are in the actual image.
2. 97% of the information (variance) is in the actual image.
For each image, we calculate these two metrics along one direction and perform the corresponding crops from the two sides. A visualization is shown below. We perform auto-cropping along both axes.
(Figure: distribution of edges and of variance along the x-axis.)
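One way to realize the cropping rule is a greedy trim that removes border columns until just under the allowed share of the total mass (edge count or variance) would be lost. This is our own sketch of the idea, assuming NumPy, and not necessarily the exact procedure used:

```python
import numpy as np

def crop_bounds(signal, keep=0.97):
    # signal: non-negative per-column score along one axis
    # (e.g. column-wise variance or edge count).
    # Greedily trim the weaker end while the removed mass stays
    # below (1 - keep) of the total; return the kept [lo, hi) range.
    total = float(signal.sum())
    lo, hi = 0, len(signal)
    removed = 0.0
    while hi - lo > 1:
        side = lo if signal[lo] <= signal[hi - 1] else hi - 1
        if removed + signal[side] > (1 - keep) * total:
            break
        removed += float(signal[side])
        if side == lo:
            lo += 1
        else:
            hi -= 1
    return lo, hi
```

Applying this once per axis (with the 96% threshold for edges and 97% for variance) yields the crop rectangle.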
#### Ablation
As shown in the visualization below, our refinements bring visible improvements to image quality.
(Figure: ablation — baseline, + filter, + auto-crop.)
# Results and Visualization
We report the offsets computed by our algorithm as follows.
| Image | Baseline | AutoCrop + Filter |
|---|---|---|
| melons.jpg | R[-179, -13] G[-83, -10] | R[-180, -13] G[-83, -10] |
| three_generations.jpg | R[-105, -14] G[-49, -15] | R[-108, -13] G[-50, -17] |
| train.jpg | R[-86, -32] G[-42, -6] | R[-86, -32] G[-42, -6] |
| cathedral.jpg | R[-12, -3] G[-5, -2] | R[-12, -3] G[-5, -2] |
| church.jpg | R[-58, 4] G[-24, -4] | R[-58, 4] G[-24, -4] |
| onion_church.jpg | R[-108, -37] G[-50, -26] | R[-108, -37] G[-49, -26] |
| harvesters.jpg | R[-124, -15] G[-59, -18] | R[-123, -15] G[-59, -18] |
| sculpture.jpg | R[-140, 27] G[-33, 11] | R[-140, 27] G[-33, 11] |
| — | R[-112, -9] G[-52, -7] | R[-114, -11] G[-53, -7] |
| icon.jpg | R[-90, -23] G[-41, -18] | R[-90, -23] G[-41, -18] |
| self_portrait.jpg | R[-175, -37] G[-77, -29] | R[-175, -37] G[-77, -29] |
| tobolsk.jpg | R[-6, -3] G[-3, -3] | R[-6, -3] G[-3, -3] |
| emir.jpg | R[-86, 316] G[-48, -24] | R[-107, -41] G[-49, -23] |
| monastery.jpg | R[-3, -2] G[3, -2] | R[-3, -2] G[3, -2] |
https://www.sanfoundry.com/probability-statistics-questions-answers-chi-squared-distribution/ | # Probability and Statistics Questions and Answers – Chi-Squared Distribution
This set of Probability and Statistics Multiple Choice Questions & Answers (MCQs) focuses on “Chi-Squared Distribution”.
1. A die is tossed 120 times with the following results:

| No. turned up | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Frequency | 30 | 25 | 18 | 10 | 22 | 15 |

Test the hypothesis that the die is unbiased (tabulated X2 = 11.07 at the 5% significance level with 5 degrees of freedom) and calculate the X2 value for the observed frequencies.
a) Dice is unbiased, 11.3
b) Dice is biased, 12.9
c) Dice is unbiased, 10.9
d) Dice is biased, 12.3
Explanation: Step 1: Null hypothesis: the die is unbiased.
Step 2: Calculation of the expected frequency:
Since the die is unbiased, P(r) = 1/6 for r = 1, 2, 3, 4, 5, 6.
Expected frequency f(r) = N*P(r) = 120 * 1/6 = 20.
Step 3: Calculation of X2:
$$X^2 = \sum\frac{(f_o-f_e)^2}{f_e}$$
Hence X2 = 12.90 > 11.07.
Thus the die is biased.
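The X2 value in the explanation above can be reproduced with a few lines of plain Python:

```python
# Observed frequencies from the question; for a fair die over 120 tosses
# the expected frequency is 120 * (1/6) = 20 per face.
observed = [30, 25, 18, 10, 22, 15]
expected = sum(observed) / 6
chi2 = sum((fo - expected) ** 2 / expected for fo in observed)
print(round(chi2, 2))  # 12.9
```

Since 12.9 exceeds the tabulated critical value, the null hypothesis is rejected.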
2. Consider a set of 18 samples from a standard normal distribution. We square each sample and sum all the squares. The number of degrees of freedom for a Chi Square distribution will be?
a) 17
b) 18
c) 19
d) 20
Explanation: In a Chi Square distribution, the number of squared standard normal deviates that are summed equals the number of degrees of freedom.
Here the total number of standard normal deviates = 18.
Hence the number of degrees of freedom for the Chi Square distribution = 18.
3. What is the mean of a Chi Square distribution with 6 degrees of freedom?
a) 4
b) 12
c) 6
d) 8
Explanation: By the property of Chi Square distribution, the mean corresponds to the number of degrees of freedom.
Degrees of freedom = 6.
Hence mean = 6.
4. Which Chi Square distribution looks the most like a normal distribution?
a) A Chi Square distribution with 4 degrees of freedom
b) A Chi Square distribution with 5 degrees of freedom
c) A Chi Square distribution with 6 degrees of freedom
d) A Chi Square distribution with 16 degrees of freedom
Explanation: As the number of degrees of freedom of a Chi Square distribution increases, it tends toward a normal distribution. Among the given options, the distribution with 16 degrees of freedom has the most, so it looks most like a normal distribution.
5. A bag contains 80 chocolates. This bag has 4 different colors of chocolates in it. If all four colors of chocolates were equally likely to be put in the bag, what would be the expected number of chocolates of each color?
a) 12
b) 11
c) 20
d) 9
Explanation: If all four colors were equally likely to be put in the bag, then the expected frequency for a given color would be 1/4th of the chocolates.
N = 80, r = 1/4
So, the expected frequency = N*r = (1/4)*(80) = 20.
6. Suppose a person has 8 red, 5 green, 12 orange, and 15 blue balls. Test the null hypothesis that the colors of the balls occur with equal frequency. What is the Chi Square value you get?
a) 5.6
b) 5.68
c) 5.86
d) 5.8
Explanation: By the Chi Square test,
Expected frequency fe = (8+5+12+15)/4 = 10
$$X^2 = \sum\frac{(f_o-f_e)^2}{f_e}$$
The sum of (expected – observed)^2/expected = (10-8)^2/10 + (10-5)^2/10 + (10-12)^2/10 + (10-15)^2/10 = 5.8.
7. A faculty member is interested in whether there is a relationship between gender and subject at his college. He surveyed some men and women on campus and asked them whether their subject was Mathematics (M), Geography (G), or Science (S). Based on this table, what would be the expected frequency of women in Geography?
|  | M | G | S | Total |
|---|---|---|---|---|
| Women | 10 | 14 | 10 | 34 |
| Men | 11 | 22 | 14 | 23 |
| Total | 21 | 36 | 24 | 57 |
a) 31.12
b) 11.32
c) 12.13
d) 13.12
Explanation: The expected frequency of women in Geography is the product of the total number of women and the number of Geography students, divided by the total number of participants: (22*34)/57 = 13.12.
8. In a sample survey of public opinion, answers to the questions:
1) Do you drink?
2) Are you in favor of the local option on the sale of liquor?

|  | Yes | No | Total |
|---|---|---|---|
| Yes | 56 | 31 | 87 |
| No | 18 | 6 | 24 |
| Total | 74 | 37 | 111 |

Infer whether or not the local option on the sale of liquor is dependent on the individual drinker. Find the value of X2 (the tabulated value at 1 degree of freedom and the 5% significance level is 3.841).
a) 0.957
b) 0.975
c) 0.759
d) 0.795
Explanation: Step 1: Null hypothesis: the opinion on the sale of liquor is independent of individual drinking.
Step 2: Calculation of the theoretical (expected) frequencies:
fe11 = 87*74/111 = 58
fe12 = 87*37/111 = 29
fe21 = 24*74/111 = 16
fe22 = 24*37/111 = 8
Step 3: Calculation of X2: we know that
$$X^2 = \sum\frac{(f_o-f_e)^2}{f_e}$$
X2 = 0.957 < 3.841.
Hence the null hypothesis is accepted.
Thus the sale of liquor does not depend on the individual drinker.
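The whole calculation can be reproduced in plain Python, including the expected frequencies from row and column totals:

```python
# 2x2 contingency table from the survey (rows: drink yes/no,
# columns: favor local option yes/no).
obs = [[56, 31],
       [18, 6]]
row_tot = [sum(r) for r in obs]             # [87, 24]
col_tot = [sum(c) for c in zip(*obs)]       # [74, 37]
n = sum(row_tot)                            # 111
chi2 = 0.0
for i in range(2):
    for j in range(2):
        fe = row_tot[i] * col_tot[j] / n    # expected frequency
        chi2 += (obs[i][j] - fe) ** 2 / fe
print(round(chi2, 3))  # 0.957
```

Since 0.957 < 3.841, the null hypothesis of independence is accepted.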
9. The Variance of Chi Squared distribution is given as k.
a) True
b) False
Explanation: The Mean of Chi Squared distribution is given as k. The Variance of Chi Squared distribution is given as 2k.
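Both facts can be checked empirically by simulating chi-square samples as sums of k squared standard normal deviates (plain Python; the seed and sample count are arbitrary choices):

```python
import random

random.seed(1)
k = 6        # degrees of freedom
n = 20000    # number of simulated chi-square samples
samples = [sum(random.gauss(0, 1) ** 2 for _ in range(k)) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(round(mean, 1), round(var, 1))  # close to k = 6 and 2k = 12
```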
10. Which of these distributions is used for testing a hypothesis?
a) Normal Distribution
b) Chi-Squared Distribution
c) Gamma Distribution
d) Poisson Distribution
Explanation: The Chi-Squared distribution is used for testing hypotheses. The value of X2 decides whether the hypothesis is accepted or not.
https://physics.stackexchange.com/questions/628656/beta-function-and-anomalous-dimension-in-on-model | # Beta Function and anomalous dimension in $O(N)$ model
I am currently studying Quantum Field Theory, but I cannot find any analytic calculation of the beta function in $$O(N)$$ models such as $$\phi^4$$ theory. Can anyone provide some hints on the calculation?
https://git.ps.informatik.uni-kiel.de/curry-packages/currycheck/-/blame/264273cf423483a98b476b2c77872df7be7fea2c/docs/manual.tex | manual.tex
\section{CurryCheck: A Tool for Testing Properties of Curry Programs}
\label{sec-currycheck}

CurryCheck\index{CurryCheck}\index{testing programs}\index{program!testing}
is a tool that supports the automation of testing Curry programs.
The tests to be executed can be unit tests as well as property tests
parameterized over some arguments.
The tests can be part of any Curry source program and, thus, they are also
useful to document the code.

CurryCheck is based on EasyCheck \cite{ChristiansenFischer08FLOPS}.
Actually, the properties to be tested are written with combinators proposed
for EasyCheck, which are influenced by QuickCheck \cite{ClaessenHughes00}
but extended to the demands of functional logic programming.

\subsection{Installation}

The current implementation of CurryCheck is a package managed by the
Curry Package Manager CPM.
Thus, to install the newest version of CurryCheck, use the following
commands:
%
\begin{curry}
> cypm update
> cypm install currycheck
\end{curry}
%
This downloads the newest package, compiles it, and places the executable
\code{curry-check} into the directory \code{\$HOME/.cpm/bin}.
Hence it is recommended to add this directory to your path in order to
execute CurryCheck as described below.
\subsection{Testing Properties}

To start with a concrete example, consider the following naive definition of
reversing a list:
\begin{curry}
rev :: [a] -> [a]
rev []     = []
rev (x:xs) = rev xs ++ [x]
\end{curry}
To get some confidence in the code, we add some unit tests, i.e., tests with
concrete test data:
\begin{curry}
revNull = rev []      -=- []
rev123  = rev [1,2,3] -=- [3,2,1]
\end{curry}
The operator \ccode{-=-} specifies a test where both sides must have a
single identical value.
Since this operator (as many more, see below) is defined in the library
\code{Test.Prop}\pindex{Test.Prop},\footnote{%
The library \code{Test.Prop} is a clone of the library
\code{Test.EasyCheck}\pindex{Test.EasyCheck} (see package \code{easycheck})
which defines only the interface but not the actual test implementations.
Thus, the library \code{Test.Prop} has fewer import dependencies.
When CurryCheck generates programs to execute the tests, it automatically
replaces references to \code{Test.Prop} by references to
\code{Test.EasyCheck} in the generated programs.}
we also have to import this library.

Apart from unit tests, which are often tedious to write, we can also write a
property, i.e., a test parameterized over some arguments.
For instance, an interesting property of reversing a list is the fact that
reversing a list two times yields the input list:
\begin{curry}
revRevIsId xs = rev (rev xs) -=- xs
\end{curry}
Note that each property is defined as a Curry operation where the arguments
are the parameters of the property.

Altogether, our program is as follows:
\begin{curry}
module Rev(rev) where
import Test.Prop
rev :: [a] -> [a]
rev []     = []
rev (x:xs) = rev xs ++ [x]

revNull = rev []      -=- []
rev123  = rev [1,2,3] -=- [3,2,1]

revRevIsId xs = rev (rev xs) -=- xs
\end{curry}
Now we can run all tests by invoking the CurryCheck tool.
If our program is stored in the file \code{Rev.curry}, we can execute the
tests as follows:
\begin{curry}
> curry-check Rev
...
Executing all tests...
revNull (module Rev, line 7): Passed 1 test.
rev123 (module Rev, line 8): Passed 1 test.
revRevIsId_ON_BASETYPE (module Rev, line 10): OK, passed 100 tests.
\end{curry}
Since the operation \code{rev} is polymorphic, the property
\code{revRevIsId} is also polymorphic in its argument.
In order to select concrete values to test this property, CurryCheck
replaces such polymorphic tests by defaulting the type variable to the
prelude type \code{Ordering} (the actual default type can also be set by a
command-line flag).
If we want to test this property on integers, we can explicitly provide a
type signature, where \code{Prop} denotes the type of a test:
\begin{curry}
revRevIsId :: [Int] -> Prop
revRevIsId xs = rev (rev xs) -=- xs
\end{curry}
The command \code{curry-check} has some options to influence the output,
like \ccode{-q} for a quiet execution (only errors and failed tests are
reported) or \ccode{-v} for a verbose execution where all generated test
cases are shown.
Moreover, the return code of \code{curry-check} is \code{0} in case of
successful tests, otherwise, it is \code{1}.
Hence, CurryCheck can be easily integrated in tool chains for automatic
testing.

In order to support the inclusion of properties in the source code, the
operations defining the properties do not have to be exported, as shown in
the module \code{Rev} above.
Hence, one can add properties to any library and export only
library-relevant operations.
To test these properties, CurryCheck creates a copy of the library where all
operations are public, i.e., CurryCheck requires write permission on the
directory where the source code is stored.

The library \code{Test.Prop} defines many combinators to construct
properties.
In particular, there are a couple of combinators for dealing with
non-deterministic operations (note that this list is incomplete):
\begin{itemize}
\item The combinator \ccode{<\char126>} is satisfied if the sets of values
of both sides are equal.
\item The property \code{$x$ \char126> $y$} is satisfied if $x$ evaluates to
every value of $y$. Thus, the set of values of $y$ must be a subset of the
set of values of $x$.
\item The property \code{$x$ <\char126 $y$} is satisfied if $y$ evaluates to
every value of $x$, i.e., the set of values of $x$ must be a subset of the
set of values of $y$.
\item The combinator \ccode{<\char126\char126>} is satisfied if the
multi-sets of values of both sides are equal.
Hence, this operator can be used to compare the number of computed solutions
of two expressions.
\item The property \code{always $x$} is satisfied if all values of $x$ are
true.
\item The property \code{eventually $x$} is satisfied if some value of $x$
is true.
\item The property \code{failing $x$} is satisfied if $x$ has no value,
i.e., its evaluation fails.
\item The property \code{$x$ \# $n$} is satisfied if $x$ has $n$ different
values.
\end{itemize}
%
For instance, consider the insertion of an element at an arbitrary position
in a list:
\begin{curry}
insert :: a -> [a] -> [a]
insert x xs     = x : xs
insert x (y:ys) = y : insert x ys
\end{curry}
The following property states that the element is inserted (at least) at the
beginning or the end of the list:
\begin{curry}
insertAsFirstOrLast :: Int -> [Int] -> Prop
insertAsFirstOrLast x xs = insert x xs ~> (x:xs ? xs++[x])
\end{curry}
%
A well-known application of \code{insert} is to use it to define a
permutation of a list:
\begin{curry}
perm :: [a] -> [a]
perm []     = []
perm (x:xs) = insert x (perm xs)
\end{curry}
We can check whether the length of a permuted list is unchanged:
\begin{curry}
permLength :: [Int] -> Prop
permLength xs = length (perm xs) <~> length xs
\end{curry}
Note that the use of \ccode{<\char126>} is relevant since we compare
non-deterministic values.
Actually, the left argument evaluates to many (identical) values.
One might also want to check whether \code{perm} computes the correct number
of solutions.
Since we know that a list of length $n$ has $n!$ permutations, we write the
following property:
\begin{curry}
permCount xs = perm xs # fac (length xs)
\end{curry}
where \code{fac} is the factorial function.
However, this test will be falsified with the argument \code{[1,1]}.
Actually, this list has only one permuted value since the two possible
permutations are identical and the combinator \ccode{\#} counts the number
of \emph{different} values.
The property would be correct if all elements in the input list \code{xs}
are different.
This can be expressed by a conditional property: the property
\code{$b$ ==> $p$} is satisfied if $p$ is satisfied for all values where $b$
evaluates to \code{True}.
Therefore, if we define a predicate \code{allDifferent} by
\begin{curry}
allDifferent []     = True
allDifferent (x:xs) = x `notElem` xs && allDifferent xs
\end{curry}
then we can reformulate our property as follows:
\begin{curry}
permCount xs = allDifferent xs ==> perm xs # fac (length xs)
\end{curry}
%
Now consider a predicate to check whether a list is sorted:
\begin{curry}
sorted :: [Int] -> Bool
sorted []       = True
sorted [_]      = True
sorted (x:y:zs) = x<=y && sorted (y:zs)
\end{curry}
This predicate is useful to test whether there are also sorted permutations:
\begin{curry}
permIsEventuallySorted :: [Int] -> Prop
permIsEventuallySorted xs = eventually $\code{\$}$ sorted (perm xs)
\end{curry}
%
The previous operations can be exploited to provide a high-level
specification of sorting a list:
\begin{curry}
psort :: [Int] -> [Int]
psort xs | sorted ys = ys
 where ys = perm xs
\end{curry}
Again, we can write some properties:
\begin{curry}
psortIsAlwaysSorted xs = always $\code{\$}$ sorted (psort xs)
$\listline$
psortKeepsLength xs = length (psort xs) <~> length xs
\end{curry}
Of course, the sort specification via permutations is not useful in
practice.
However, it can be used as an oracle to test more efficient sorting
algorithms like quicksort:
\begin{curry}
qsort :: [Int] -> [Int]
qsort []    = []
qsort (x:l) = qsort (filter (<x) l) ++ [x] ++ qsort (filter (>x) l)
\end{curry}
The following property specifies the correctness of quicksort:
\begin{curry}
qsortIsSorting xs = qsort xs <~> psort xs
\end{curry}
Actually, if we test this property, we obtain a failure:
%
\begin{curry}
> curry-check ExampleTests
...
qsortIsSorting (module ExampleTests, line 53) failed
Falsified by third test.
Arguments: [1,1]
Results: [1]
\end{curry}
%
The result shows that, for the given argument \code{[1,1]}, an element has
been dropped in the result.
Hence, we correct our implementation, e.g., by replacing \code{(>x)} with
\code{(>=x)}, and obtain a successful test execution.

For I/O operations, it is difficult to execute them with random data.
Hence, CurryCheck only supports specific I/O unit tests:
\begin{itemize}
\item \code{$a$ returns $x$} is satisfied if the I/O action $a$ returns the
value $x$.
\item \code{$a$ sameReturns $b$} is satisfied if the I/O actions $a$ and $b$
return identical values.
\end{itemize}
%
Since CurryCheck executes the tests written in a source program in their
textual order, one can write several I/O tests that are executed in a
well-defined order.

\subsection{Generating Test Data}

CurryCheck tests properties by enumerating test data and checking a given
property with these values.
Since these values are generated in a systematic way, one can even prove a
property if the number of test cases is finite.
For instance, consider the following property from Boolean logic:
\begin{curry}
neg_or b1 b2 = not (b1 || b2) -=- not b1 && not b2
\end{curry}
This property is validated by checking it with all possible values:
%
\begin{curry}
> curry-check -v ExampleTests
...
0: False False
1: False True
2: True False
3: True True
neg_or (module ExampleTests, line 67): Passed 4 tests.
\end{curry}
%
However, if the test data is infinite, like lists of integers, CurryCheck
stops checking after a given limit for all tests.
As a default, the limit is 100 tests but it can be changed by the
command-line flag \ccode{-m}.
For instance, to test each property with 200 tests, CurryCheck can be
invoked by
%
\begin{curry}
> curry-check -m 200 ExampleTests
\end{curry}
%
For a given type, CurryCheck automatically enumerates all values of this
type (except for function types).
In KiCS2, this is done by exploiting the functional logic features of Curry,
i.e., by simply collecting all values of a free variable.
For instance, the library \code{Test.EasyCheck}\pindex{Test.EasyCheck}
defines an operation
\begin{curry}
valuesOf :: a -> [a]
\end{curry}
which computes the list of all values of the given argument according to a
fixed strategy (in the current implementation: randomized level
diagonalization \cite{ChristiansenFischer08FLOPS}).
For instance, we can get 20 values for a list of integers by
%
\begin{curry}
Test.EasyCheck> take 20 (valuesOf (_::[Int]))
[[],[-1],[-3],[0],[1],[-1,0],[-2],[0,0],[3],[-1,1],[-3,0],[0,1],[2],
[-1,-1],[-5],[0,-1],[5],[-1,2],[-9],[0,2]]
\end{curry}
%
Since the features of PAKCS for search space exploration are more limited,
PAKCS uses in CurryCheck explicit generators for search tree structures
which are defined in the module \code{SearchTreeGenerators}.
For instance, the operations
%
\begin{curry}
genInt :: SearchTree Int
$\listline$
genList :: SearchTree a -> SearchTree [a]
\end{curry}
generate (infinite) trees of integer and list values.
To extract all values in a search tree, the library \code{Test.EasyCheck}
also defines an operation
\begin{curry}
valuesOfSearchTree :: SearchTree a -> [a]
\end{curry}
so that we obtain 20 values for a list of integers in PAKCS by
%
\begin{curry}
...> take 20 (valuesOfSearchTree (genList genInt))
[[],[1],[1,1],[1,-1],[2],[6],[3],[5],[0],[0,1],[0,0],[-1],[-1,0],[-2],
[-3],[1,5],[1,0],[2,-1],[4],[3,-1]]
\end{curry}
%
Apart from the different implementations, CurryCheck can test properties on
predefined types, as already shown, as well as on user-defined types.
For instance, we can define our own Peano representation of natural numbers
with an addition operation and two properties as follows:
%
\begin{curry}
data Nat = Z | S Nat
$\listline$
add :: Nat -> Nat -> Nat
add Z     n = n
add (S m) n = S (add m n)
$\listline$
addIsCommutative x y = add x y -=- add y x
$\listline$
addIsAssociative x y z = add (add x y) z -=- add x (add y z)
\end{curry}
%
Properties can also be defined for polymorphic types.
For instance, we can define general polymorphic trees, operations to compute
the leaves of a tree, and mirroring a tree as follows:
\begin{curry}
data Tree a = Leaf a | Node [Tree a]
$\listline$
leaves (Leaf x)  = [x]
leaves (Node ts) = concatMap leaves ts
$\listline$
mirror (Leaf x)  = Leaf x
mirror (Node ts) = Node (reverse (map mirror ts))
\end{curry}
Then we can state and check two properties on mirroring:
\begin{curry}
doubleMirror t = mirror (mirror t) -=- t
$\listline$
leavesOfMirrorAreReversed t = leaves t -=- reverse (leaves (mirror t))
\end{curry}
%
In some cases, it might be desirable to define our own test data since the
generated structures are not appropriate for testing (e.g., balanced trees
to check algorithms that require work on balanced trees).
Of course, one could drop undesired values by an explicit condition.
For instance, consider the following operation that adds all numbers from 0
to a given limit:
%
\begin{curry}
sumUp n = if n==0 then 0 else n + sumUp (n-1)
\end{curry}
%
Since there is also a simple formula to compute this sum, we can check it:
%
\begin{curry}
sumUpIsCorrect n = n>=0 ==> sumUp n -=- n * (n+1) `div` 2
\end{curry}
Note that the condition is important since \code{sumUp} diverges on negative
numbers.
CurryCheck tests this property by enumerating integers, i.e., also many
negative numbers which are dropped for the tests.
In order to generate only valid test data, we define our own generator for a
search tree containing only valid data:
%
\begin{curry}
genInt = genCons0 0 ||| genCons1 (+1) genInt
\end{curry}
%
The combinator \code{genCons0} constructs a search tree containing only this
value, whereas \code{genCons1} constructs from a given search tree a new
tree where the function given in the first argument is applied to all
values.
Similarly, there are also combinators \code{genCons2}, \code{genCons3} etc.\
for more than one argument.
The combinator \ccode{|||} combines two search trees.

If the Curry program containing properties defines a generator operation
with the name \code{gen$\tau$}, then CurryCheck uses this generator to test
properties with argument type $\tau$.
Hence, if we put the definition of \code{genInt} in the Curry program where
\code{sumUpIsCorrect} is defined, the values to check this property are only
non-negative integers.
Since these integers are slowly increasing, i.e., the search tree is
actually degenerated to a list, we can also use the following definition to
obtain a more balanced search tree:
%
\begin{curry}
genInt = genCons0 0 ||| genCons1 (\n -> 2*(n+1)) genInt
                    ||| genCons1 (\n -> 2*n+1) genInt
\end{curry}
The library \code{SearchTree} defines the structure of search trees as well
as operations on search trees, like limiting the depth of a search tree
(\code{limitSearchTree}) or showing a search tree (\code{showSearchTree}).
For instance, the structure of the generated search tree up to some depth
can be visualized as follows:
\begin{curry}
...SearchTree> putStr (showSearchTree (limitSearchTree 6 genInt))
\end{curry}
%
If we want to use our own generator only for specific properties, we can do
so by introducing a new data type and defining a generator for this data
type.
For instance, to test only the operation \code{sumUpIsCorrect} with
non-negative integers, we do not define a generator \code{genInt} as above,
but define a wrapper type for non-negative integers and a generator for this
type:
%
\begin{curry}
data NonNeg = NonNeg { nonNeg :: Int }
$\listline$
genNonNeg = genCons1 NonNeg genNN
 where
  genNN = genCons0 0 ||| genCons1 (\n -> 2*(n+1)) genNN
                     ||| genCons1 (\n -> 2*n+1) genNN
\end{curry}
Now we can either redefine \code{sumUpIsCorrect} on this type
\begin{curry}
sumUpIsCorrectOnNonNeg (NonNeg n) = sumUp n -=- n * (n+1) `div` 2
\end{curry}
or we simply reuse the old definition by
\begin{curry}
sumUpIsCorrectOnNonNeg = sumUpIsCorrect . nonNeg
\end{curry}

\subsection{Checking Equivalence of Operations}

CurryCheck also supports equivalence tests for operations.
Two operations are considered as \emph{equivalent} if they can be replaced
by each other in any possible context without changing the computed values
(this is also called \emph{contextual equivalence} and is precisely defined
in \cite{AntoyHanus12PADL} for functional logic programs).
For instance, the Boolean operations
\begin{curry}
f1 :: Bool -> Bool
f1 x = not (not x)
$\listline$
f2 :: Bool -> Bool
f2 x = x
\end{curry}
are equivalent, whereas
\begin{curry}
g1 :: Bool -> Bool
g1 False = True
g1 True  = True
$\listline$
g2 :: Bool -> Bool
g2 x = True
\end{curry}
are not equivalent: \code{g1 failed} has no value but \code{g2 failed}
evaluates to \code{True}.
To check the equivalence of operations, one can use the property combinator
\code{<=>}:
\begin{curry}
f1_equiv_f2 = f1 <=> f2
g1_equiv_g2 = g1 <=> g2
\end{curry}
The left and right argument of this combinator must be a defined operation
or a defined operation with a type annotation in order to specify the
argument types used for checking this property.
CurryCheck transforms such properties into properties where both operations
are compared w.r.t.\ all partial values and partial results.
The details are described in \cite{AntoyHanus18FLOPS}.
It should be noted that CurryCheck can test the equivalence of
non-terminating operations provided that they are \emph{productive},
i.e., always generate (outermost) constructors after a finite number of
steps (otherwise, the test of CurryCheck might not terminate).
For instance, CurryCheck reports a counter-example to the equivalence
of the following non-terminating operations:
\begin{curry}
ints1 n = n : ints1 (n+1)
$\listline$
ints2 n = n : ints2 (n+2)

-- This property will be falsified by CurryCheck:
ints1_equiv_ints2 = ints1 <=> ints2
\end{curry}
This is done by iteratively guessing depth-bounds, computing both
operations up to these depth-bounds, and comparing the computed results.
Since this might be a long process, CurryCheck supports a faster
comparison of operations when it is known that they are terminating.
If the name of a test contains the suffix \code{'TERMINATE},
CurryCheck assumes that the operations to be tested are terminating,
i.e., they always yield a result when applied to ground terms.
In this case, CurryCheck does not iterate over depth-bounds
but evaluates operations completely.
For instance, consider the following definition of permutation sort
(the operations \code{perm} and \code{sorted} are defined above):
\begin{curry}
psort :: Ord a => [a] -> [a]
psort xs | sorted ys = ys
 where ys = perm xs
\end{curry}
A different definition can be obtained by defining a partial identity
on sorted lists:
\begin{curry}
isort :: Ord a => [a] -> [a]
isort xs = idSorted (perm xs)
 where idSorted []              = []
       idSorted [x]             = [x]
       idSorted (x:y:ys) | x<=y = x : idSorted (y:ys)
\end{curry}
We can test the equivalence of both operations by specializing both
operations on some ground type (otherwise, the type checker reports an
error due to an unspecified type \code{Ord} context):
\begin{curry}
psort_equiv_isort = psort <=> (isort :: [Int] -> [Int])
\end{curry}
CurryCheck reports a counter example by the 274th test.
Since both operations are terminating, we can also check the following
property:
\begin{curry}
psort_equiv_isort'TERMINATE = psort <=> (isort :: [Int] -> [Int])
\end{curry}
Now a counter example is found by the 21st test.
Instead of annotating the property name to use more efficient equivalence
tests for terminating operations, one can also ask CurryCheck to analyze
the operations in order to safely approximate termination or productivity
properties.
For this purpose, one can call CurryCheck with the option
\ccode{--equivalence=$equiv$} or \ccode{-e $equiv$}.
The parameter $equiv$ determines the mode for equivalence checking
which must have one of the following values (or a prefix of them):
%
\begin{description}
\item[\code{manual}:] This is the default mode.
In this mode, all equivalence tests are executed with the first technique
described above, unless the name of the test has the suffix
\code{'TERMINATE}.
\item[\code{autoselect}:] This mode automatically selects the improved
transformation for terminating operations by a program analysis,
i.e., if it can be proved that both operations are terminating,
then the equivalence test for terminating operations is used.
It is also used when the name of the test has the suffix
\code{'TERMINATE}.
\item[\code{safe}:] This mode analyzes the productivity behavior of
operations.
If it can be proved that both operations are terminating or the test name
has the suffix \code{'TERMINATE}, then the more efficient equivalence test
for terminating operations is used.
If it can be proved that both operations are productive or the test name
has the suffix \code{'PRODUCTIVE}, then the first general test technique
is used.
Otherwise, the equivalence property is \emph{not} tested.
Thus, this mode is useful if one wants to ensure that all equivalence
tests always terminate (provided that the additional user annotations
are correct).
\item[\code{ground}:] In this mode, only ground equivalence is tested,
i.e., each equivalence property
\begin{curry}
g1_equiv_g2 = g1 <=> g2
\end{curry}
is transformed into a property which states that both operations must
deliver the same values on same input values, i.e.,
\begin{curry}
g1_equiv_g2 x1 ... xn = g1 x1 ... xn <~> g2 x1 ... xn
\end{curry}
Note that this property is more restrictive than contextual equivalence.
For instance, the non-equivalence of \code{g1} and \code{g2} as shown
above cannot be detected by testing ground equivalence only.
\end{description}

\subsection{Checking Contracts and Specifications}
\label{sec:currycheck:contracts}

The expressive power of Curry supports writing high-level specifications
as well as efficient implementations for a given problem in the same
programming language, as discussed in \cite{AntoyHanus12PADL}.
If a specification or contract is provided for some function, then
CurryCheck automatically generates properties to test this specification
or contract.
Following the notation proposed in \cite{AntoyHanus12PADL},
a \emph{specification}\index{specification} for an operation $f$
is an operation \code{$f$'spec} of the same type as $f$.
A \emph{contract}\index{contract} consists of a pre- and a postcondition,
where the precondition could be omitted.
A \emph{precondition}\index{precondition} for an operation $f$
of type $\tau \to \tau'$ is an operation
\begin{curry}
$f$'pre :: $\tau$ -> Bool
\end{curry}
whereas a \emph{postcondition}\index{postcondition} for $f$
is an operation
\begin{curry}
$f$'post :: $\tau$ -> $\tau'$ -> Bool
\end{curry}
which relates input and output values (the generalization to operations
with more than one argument is straightforward).
As a concrete example, consider again the problem of sorting a list.
We can write a postcondition and a specification for a sort operation
\code{sort} and an implementation via quicksort as follows
(where \code{sorted} and \code{perm} are defined as above;
the quicksort rule below reconstructs the two \code{filter} calls that
were garbled in the extracted text):
\begin{curry}
-- Postcondition: input and output lists should have the same length
sort'post xs ys = length xs == length ys

-- Specification:
-- A correct result is a permutation of the input which is sorted.
sort'spec :: [Int] -> [Int]
sort'spec xs | sorted ys = ys  where ys = perm xs

-- An implementation of sort with quicksort:
sort :: [Int] -> [Int]
sort []     = []
sort (x:xs) = sort (filter (<x) xs) ++ [x] ++ sort (filter (>=x) xs)
\end{curry}
%
If we process this program with CurryCheck, properties to check the
specification and postcondition are automatically generated.
For instance, a specification is satisfied if it is equivalent to its
implementation, and a postcondition is satisfied if each value computed
for some input satisfies the postcondition relation between input and
output.
For our example, CurryCheck generates the following properties
(if there are also preconditions for some operation, these preconditions
are used to restrict the test cases via the condition operator
\ccode{==>}):
\begin{curry}
sortSatisfiesPostCondition :: [Int] -> Prop
sortSatisfiesPostCondition x = always (sort'post x (sort x))

sortSatisfiesSpecification :: Prop
sortSatisfiesSpecification = sort <=> sort'spec
\end{curry}

\subsection{Combining Testing and Verification}

Usually, CurryCheck tests all user-defined properties as well as
postconditions or specifications, as described in
Section~\ref{sec:currycheck:contracts}.
If a programmer uses some other tool to verify such properties,
it is not necessary to check such properties with test data.
In order to advise CurryCheck to do so, it is sufficient to store
the proofs in specific files.
Since the proof might be constructed by some tool unknown to CurryCheck
or even manually, CurryCheck does not check the proof file but trusts
the programmer and uses a naming convention for files containing proofs.
If there is a property \code{p} in a module \code{M} for which a proof
is stored in a file \code{proof-M-p.*} (the name is case independent),
then CurryCheck assumes that this file contains a valid proof for this
property.
For instance, the following property states that sorting a list does not
change its length:
%
\begin{curry}
sortlength xs = length (sort xs) <~> length xs
\end{curry}
%
If this property is contained in module \code{Sort} and there is a file
\code{proof-Sort-sortlength.txt} containing a proof for this property,
CurryCheck considers this property as valid and does not check it.
Moreover, it uses this information to simplify other properties to be
tested.
For instance, consider the property \code{sortSatisfiesPostCondition}
of Section~\ref{sec:currycheck:contracts}.
This can be simplified to \code{always$\;$True} so that it does not need
to be tested.
One can also provide proofs for generated properties, e.g., determinism,
postconditions, specifications, so that they are not tested:
\begin{itemize}
\item
If there is a proof file \code{proof-$M$-$f$-IsDeterministic.*},
a determinism annotation for operation $M.f$ is not tested.
\item
If there is a proof file \code{proof-$M$-$f$-SatisfiesPostCondition.*},
a postcondition for operation $M.f$ is not tested.
\item
If there is a proof file \code{proof-$M$-$f$-SatisfiesSpecification.*},
a specification for operation $M.f$ is not tested.
\end{itemize}
Note that the file suffix and all non-alphanumeric characters in the name
of the proof file are ignored.
Furthermore, the name is case independent.
This should provide enough flexibility when other verification tools
require specific naming conventions.
For instance, a proof for the property \code{Sort.sortlength} could be
stored in the following files in order to be considered by CurryCheck:
\begin{curry}
proof-Sort-sortlength.tex
PROOF_Sort_sortlength.agda
Proof-Sort_sortlength.smt
ProofSortSortlength.smt
\end{curry}

\subsection{Checking Usage of Specific Operations}

In addition to testing dynamic properties of programs,
CurryCheck also examines the source code of the given program
for unintended uses of specific operations
(these checks can be omitted via the option \ccode{--nosource}).
Currently, the following source code checks are performed:
\begin{itemize}
\item
The prelude operation \ccode{=:<=} is used to implement functional
patterns \cite{AntoyHanus05LOPSTR}.
It should not be used in source programs to avoid unintended uses.
Hence, CurryCheck reports such unintended uses.
\item
Set functions \cite{AntoyHanus09} are used to encapsulate all
non-deterministic results of some function in a set structure.
Hence, for each top-level function $f$ of arity $n$, the corresponding
set function can be expressed in Curry (via operations defined in the
library \code{SetFunctions}) by the application \ccode{set$n$ $f$}
(this application is used in order to extend the syntax of Curry with a
specific notation for set functions).
However, it is not intended to apply the operator \ccode{set$n$}
to lambda abstractions, locally defined operations,
or operations with an arity different from $n$.
Hence, CurryCheck reports such unintended uses of set functions.
\end{itemize}

% LocalWords: CurryCheck

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End:
https://andrewhooker.github.io/PopED/reference/inv.html

Function computes the inverse of a matrix.
inv(mat, method = 1, tol = .Machine$double.eps, pseudo_on_fail = TRUE, ...)
## Arguments
mat: A matrix.

method: Which method to use. 1 is Cholesky-based (chol2inv(chol(mat))), 2 uses solve(mat), and 3 is the Moore-Penrose generalized inverse (pseudoinverse).

tol: The tolerance at which we should identify a singular value as zero (used in the pseudoinverse calculation).

pseudo_on_fail: If another method fails, should the Moore-Penrose generalized inverse (pseudoinverse) be used?

...: Not used.
## Value
The inverse matrix
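For intuition, the three methods can be sketched in Python with NumPy. This is a rough analogue, not PopED's actual R implementation; the function and argument names simply mirror the R signature:

```python
import numpy as np

def inv(mat, method=1, tol=None):
    """Invert mat: 1 = Cholesky-based, 2 = direct solve, 3 = pseudoinverse."""
    if method == 1:
        # Cholesky route, mirroring R's chol2inv(chol(mat));
        # only valid for symmetric positive-definite matrices.
        L = np.linalg.cholesky(mat)          # mat = L @ L.T
        Linv = np.linalg.inv(L)
        return Linv.T @ Linv                 # mat^-1 = L^-T @ L^-1
    elif method == 2:
        # Solve mat @ X = I for X, mirroring R's solve(mat).
        return np.linalg.solve(mat, np.eye(mat.shape[0]))
    else:
        # Moore-Penrose pseudoinverse; rcond plays the role of tol.
        return np.linalg.pinv(mat, rcond=tol if tol is not None else 1e-15)
```

For a well-conditioned symmetric positive-definite matrix all three routes agree; the pseudoinverse is the only one that also handles singular matrices.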
http://ncatlab.org/nlab/show/syntax

# Contents
## Idea
Syntax is the formal specification of a theory, as opposed to semantics.
Revised on December 20, 2012 01:38:17 by Urs Schreiber (82.169.65.155)
https://www.ias.ac.in/listing/bibliography/pram/R_KHENATA

• R KHENATA
Articles written in Pramana – Journal of Physics
• First-principle calculations of structural, electronic, optical, elastic and thermal properties of $\rm{MgXAs_{2} (X = Si, Ge)}$ compounds
First-principle calculations on the structural, electronic, optical, elastic and thermal properties of the chalcopyrite $\rm{MgXAs_{2} (X = Si, Ge)}$ have been performed within the density functional theory (DFT) using the full potential linearized augmented plane wave (FP-LAPW) method. The obtained equilibrium structural parameters are in good agreement with the available experimental data and theoretical results. The calculated band structures reveal a direct energy band gap for the compounds of interest. The predicted band gaps using the modified Becke–Johnson (mBJ) exchange approximation are in fairly good agreement with the experimental data. The optical constants such as the dielectric function, refractive index, and the extinction coefficient are calculated and analysed. The independent elastic parameters, namely $C_{11}, C_{12}, C_{13}, C_{33}, C_{44}$ and $C_{66}$, are evaluated. The effects of temperature and pressure on some macroscopic properties of $\rm{MgSiAs_{2}}$ and $\rm{MgGeAs_{2}}$ are predicted using the quasiharmonic Debye model in which the lattice vibrations are taken into account.
• Electronic, optical, magnetic and thermoelectric properties of CsNiO$_2$ and CsCuO$_2$: Insights from DFT-based computer simulation
In this paper, we present the results of a detailed computational study of the structural, electronic, optical, magnetic and thermoelectric properties of the CsNiO$_2$ and CsCuO$_2$ Heusler alloys, by using the full potential-linearised augmented plane wave (FP-LAPW) method. The calculated structural parameters of the title compounds are in excellent agreement with the available theoretical data. The equilibrium ground-state properties were calculated and it was shown that the studied compounds are energetically stable in the AlCu$_2$Mn phase within the ferromagnetic state. In order to evaluate the stability of our compounds, the cohesion energies and formation energies have been evaluated. The optoelectronic and magnetic properties revealed that these compounds exhibit half-metallic ferromagnetic behaviour with large semiconductor and half-metallic gaps. This behaviour is confirmed by the integer values of total magnetic moments, but these compounds do not satisfy the Slater–Pauling rule. Furthermore, the thermoelectric parameters are computed in a large temperature range of 300–800 K to explore the potential of these compounds for high-performance technological applications.
• Prediction study of magnetic stability, structural and electronic properties of Heusler compounds Mn2PtZ (Z = V, Co): DFT+U+TB-mBJ calculation
In this work, first-principles calculations were utilised to study the structural, magnetic and electronic properties of Mn$_2$PtZ (Z = V, Co) based on density functional theory (DFT). These compounds are predicted to be more stable in the Cu$_2$MnAl structure and the FM ground state is energetically favourable, with magnetic moments of 4.93 and 9.04 $\mu_B$/f.u. for the Mn$_2$PtV and Mn$_2$PtCo compounds, respectively. Additionally, the computed total magnetic moments of both compounds agree well with the Slater–Pauling rule, $M_T = N_V - 24$, and the main contribution to these magnetic moments emanates from Mn atoms for both materials. Through the results on the spin-polarised electronic properties (band structures and densities of states), it is found that both alloys reveal a complete half-metallic (HM) character with a half-metallic gap using the GGA+U+TB-mBJ approximation.
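The Slater–Pauling count quoted in the abstract can be checked by hand. Assuming the standard valence-electron counts (Mn: 7, V: 5, Co: 9, Pt: 10; these counts are not stated in the abstract and are taken from the usual periodic-table bookkeeping), the rule $M_T = N_V - 24$ for full-Heusler compounds reproduces the reported moments within the calculated deviations:

```python
# Slater-Pauling rule for full-Heusler compounds: M_T = N_V - 24 (in mu_B per formula unit)
valence = {"Mn": 7, "Pt": 10, "V": 5, "Co": 9}  # assumed standard valence-electron counts

def slater_pauling(formula):
    """formula: list of (element, count) pairs for one formula unit."""
    n_v = sum(valence[el] * n for el, n in formula)
    return n_v - 24

print(slater_pauling([("Mn", 2), ("Pt", 1), ("V", 1)]))   # 5, close to the reported 4.93
print(slater_pauling([("Mn", 2), ("Pt", 1), ("Co", 1)]))  # 9, close to the reported 9.04
```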
• # Pramana – Journal of Physics
Volume 96, 2022
http://math.stackexchange.com/questions/269413/phase-space-area-preservation

# Phase space area preservation
What is wrong with the following argument?
Suppose the initial configuration $(x,p)$ of a system of many non-interacting particles each of mass $m$ in phase space is given by a rectangle $x_0\in[-a,a]$ and $p_0\in[-b,b]$.
Then they are subjected to a constant acceleration in the $x$-direction, $c$.
I wish to find the image of this rectangle in phase space after time $t$ (under such acceleration).
Hint: The answer should have the same area -- $4ab$ by Liouville Theorem.
By the SUVAT equations, $$x=x_0+{p_0\over m} t+{1\over 2}ct^2\\ p=p_0+mct$$
So the image should be $$x\in \left[-a-{b\over m}t+{1\over 2}ct^2\,\,\,\,\,\,,\,\,\,\,\,\,a+{b\over m}t+{1\over 2}ct^2\right]\\ p\in \left[-b+mct\,\,\,,\,\,\,b+mct\right]$$
So the area would be $$\left(2a+2{b\over m}t\right)\left(2b\right)\neq 4ab$$
What is wrong here?
Thank you.
Are you sure the image remains a rectangle? – Rahul Jan 2 '13 at 21:58
@RahulNarain: Ah thank you. My assumption was totally unjustifiable! – Kurt Jan 2 '13 at 22:30
The system does not remain a rectangle for $t\ne0$. The transformation from $x_0,p_0$ to $x,p$ is affine and can be written as $$\begin{bmatrix}x \\ p\end{bmatrix} = \begin{bmatrix}1 & \frac tm \\ 0 & 1\end{bmatrix} \begin{bmatrix}x_0 \\ p_0\end{bmatrix} + \begin{bmatrix}\frac12ct^2 \\ mct\end{bmatrix}.$$ The matrix in there has determinant $1$, so it preserves area. But it is not a scalar multiple of an orthogonal matrix, so it may not preserve rectangularity. (Nor is it diagonal, which would preserve axis-aligned rectangles.)
Alternatively, you can compute the images of the vertices of the rectangle, and find that they form a parallelogram with one pair of edges of length $2a$ and corresponding perpendicular height $2b$.
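A quick numeric check (not part of the original answer; the constants are chosen arbitrarily) confirms both points: the map's matrix has determinant $1$, and the image of the rectangle's corners is a parallelogram whose area is still $4ab$:

```python
import numpy as np

# Arbitrary illustration values for a, b, m, c, t
a, b, m, c, t = 2.0, 1.0, 1.5, 0.7, 3.0

# Affine phase-space map: (x0, p0) -> M @ (x0, p0) + shift
M = np.array([[1.0, t / m],
              [0.0, 1.0]])
shift = np.array([0.5 * c * t**2, m * c * t])

# Corners of the initial rectangle, in counterclockwise order
corners = np.array([[-a, -b], [a, -b], [a, b], [-a, b]])
image = corners @ M.T + shift

# Shoelace formula for the area of the image polygon
x, y = image[:, 0], image[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

print(np.linalg.det(M))  # 1.0 (the shear is area-preserving)
print(area)              # 4*a*b = 8.0, up to rounding
```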
http://vaspkit.cn/index.php/221.html

# VASP errors
If you do not find your error listed here, or the solution does not resolve your problem, you can also check out the error-wiki.
VERY BAD NEWS! internal error in subroutine SGRCON:
Found some non-integer element in rotation matrix 2
This error is symmetry related. In the VASP forum the suggested solution is to increase the SYMPREC parameter. An alternative solution which also works but may be safer from the accuracy point of view (and increases the computational cost) is to switch off symmetry by setting ISYM = 0.
WARNING: Sub-Space-Matrix is not hermitian in DAV
combined with one of the following:
Error FEXCP: supplied exchange-correlation table is too small

BRMIX: very serious problems, the old and the new charge density differ

This error occurs at the start of a “new” calculation and keeps popping up no matter how you modify your INCAR. It seems to be linked to “small” systems as well: if you divide the number of bands used by the number of cores, you often get a rather small value (e.g. <5). As a result, reducing the number of cores on which the calculation is performed seems to resolve the problem, stabilizing the calculation (e.g. mympirun -h 14 vasp on a system with 28 cores/node, or increase KPAR such that fewer cores work on a single k-point).
VERY BAD NEWS! internal error in subroutine PRICEL (probably precision problem, try to change SYMPREC in INCAR ?):
Sorry, number of cells and number of vectors did not agree. 2
This error is also symmetry related. According to the VASP masters it can pop up for a supercell with high-symmetry atomic positions. For some reason part of the symmetry algorithm cannot recognize the cell as being primitive. The suggested solution is to move one such highly symmetric atom slightly off its position, breaking the symmetry of the cell. However, when you are optimizing a geometry this may not be the way you want to go (as it feels rather counterproductive). Two other suggested solutions (both of which worked in my problem case with a MOF) are: (1) increase SYMPREC = 1.0E-8 (default is 1.0E-5), or (2) switch off symmetry altogether by setting ISYM = 0. As option 2 switches off the use of symmetry altogether, I believe this option should always work, as it may just skip the problematic part of the program.
VASP refuses to read (some) parameters from the INCAR file.
VASP is rather sturdy when it comes to input. Every parameter has a default value, and as such even an empty INCAR file is sufficient to start a calculation (albeit not a good idea). However, sometimes it looks like VASP just plainly refuses to read parameter value from the INCAR file. There are a few possible reasons for this:
1. You made a typo in the parameter name (e.g. ICHARGE instead of ICHARG).
2. You have multiple instances of the same parameter. This can easily happen if you have a large INCAR file. In such a case VASP will use the value of the first instance of the parameter.
3. You used a double tab to start the line. This is a rather interesting feature a student ran into during a project (he noticed his ENCUT scan gave the exact same result for all ENCUT values). Apparently, if you use two tabs to start a line, VASP ignores the parameter. One tab seems not to be a problem, and adding additional spaces after the double tab doesn’t resolve the problem either. How can you distinguish between a double tab and a set of spaces? Open your INCAR file using midnight commander (and F4), and you will find double tabs indicated as “<—–>” symbols. Note: this feature was found in VASP 5.4 and 5.3.
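The duplicate-tag and double-tab pitfalls are easy to screen for before submitting a job. The script below is a quick sketch, not an official VASP tool, and it only catches the simple one-tag-per-line case:

```python
import re
from collections import Counter

def lint_incar(text):
    """Return a list of warnings for duplicate tags and double-tab lines in an INCAR."""
    warnings = []
    tags = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if line.startswith("\t\t"):
            warnings.append(f"line {lineno}: starts with a double tab (VASP may ignore it)")
        stripped = line.strip()
        if not stripped or stripped.startswith(("#", "!")):
            continue
        # Only matches the first 'TAG =' on a line; multi-tag lines are not handled.
        m = re.match(r"([A-Za-z_]+)\s*=", stripped)
        if m:
            tags.append(m.group(1).upper())
    for tag, n in Counter(tags).items():
        if n > 1:
            warnings.append(f"tag {tag} appears {n} times (VASP uses the first value)")
    return warnings

incar = "ENCUT = 400\n\t\tISMEAR = 0\nENCUT = 520\n"
for w in lint_incar(incar):
    print(w)
```

Running it on the small example flags both the double-tab line and the duplicated ENCUT tag.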
A single atom calculation refuses to converge in energy.
You can help VASP by setting (or guessing) the electronic structure more accurately:
NUPDOWN parameter: set the difference between the number of up and down electrons.

FERWE and FERDO parameters: more stringent than the option above; you can tell VASP what the specific occupations of the up and down orbitals should be. You may even want to fix these partial occupancies using ISMEAR = -2.

ALGO = All: instead of the usual Davidson algorithm you could use a conjugate-gradient algorithm to optimise the electronic structure. Use a step smaller than the default by setting TIME = 0.1. However, chances are you will end up with convergence problems in the rotation steps (only shown in the standard output, not in the OSZICAR file). A way to avoid this is by switching off subspace rotations altogether: LSUBROT = .FALSE.. This may start to sound fishy, but normally you should have crossed the sign “desperate” already a few times before you arrive at this point.
Internal error in SETUP_DEG_CLUSTERS: NB_TOT exceeds NMAX_DEG
increase NMAX_DEG to 107
This is a rather unpleasant error you may encounter when performing phonon calculations (IBRION = 7 or 8). It is one of those cases where you extend beyond the range of a hard-coded array size. The first thing to try is the modification suggested in the message: go into the VASP source and increase the hard-coded limit. (source)
http://www.nxn.se/valent/more-interactive-python-plotting-with-clojure-and

# More interactive Python plotting with Clojure and Quil
The Python plotting library matplotlib is one of the libraries I read about a lot, and often, to try to keep track of everything possible to do with it. But I keep running in to situations where I don’t find it very useful.
Here I will write about one issue I had at a point, where I found it annoying when I wanted to sequentially build a graph, by adding scatter points to it one set at a time.
This summer I started playing with Clojure, and I enjoyed it quite a lot. Through looking at presentations for this I found Quil. I’ve only at some points played with Processing before, and I feel like quil better fits my way of thinking.
So, what I want, is to to my regular things with point data in IPython Notebook, and when I want to add it to a graph, I want to just send it to a quil sketch which is just running to the side of my IPython window. This way I can build my graphs over time.
I enjoy working with MongoDB; it has a tendency to just work where I need it, and it is easy to get started with. So I used it as a backend, and connected my Quil sketch to the database using the monger library, which is extremely simple to use.
Here follows the entire code for the Quil sketch that does what I want:
(ns square-vis.core
  (:gen-class)
  (:require [monger.core :as mg])
  (:require [monger.collection :as mc])
  (:use monger.operators)
  (:use quil.core))

(mg/connect!)
(mg/set-db! (mg/get-db "test"))

(defn draw-square
  [doc]
  (let [size (get doc :size)
        x (get (get doc :coord) 0)
        y (get (get doc :coord) 1)
        c (get doc :c)]
    (no-stroke)
    (fill c)
    (rect x y size size)))

(defn -main
  [& args])

(defn setup []
  (smooth)
  (frame-rate 30)
  (background 200))

(defn draw []
  (background 200)
  (doseq [item (mc/find-maps "squares")]
    (draw-square item)))

(defsketch example
  :title "Squaaaares!"
  :setup setup
  :draw draw
  :size [323 200])
This is only for making scatter plots with squares, in grayscale, which I like a lot. This piece of code connects to a database called "test", searches for documents in the collection named "squares", and gives them back as Clojure maps, so they are easy to use in the rest of the sketch. The documents in the collection represent squares, with the following schema.
square = {'coord': [10, 20], 'size': 8, 'c': 155}
Some small examples:
from pymongo import Connection  # older pymongo API; newer versions use MongoClient
import numpy as np
import random

conn = Connection()
coll = conn['test']['squares']
for i in range(28):
    for j in range(22):
        square = {'coord': [10 + 10 * i, 20 + 10 * j],
                  'size': 5, 'c': (i * j) ** 2 % 251}
        coll.insert(square)
coll.drop()  # clear the collection before the next example
for k in np.linspace(0.0, 2.0):
    x = np.linspace(1, 350)
    y = k * x
    for xi, yi in zip(x, y):
        doc = {'coord': [xi, yi], 'size': 5 * k + 1, 'c': 100 * k}
        coll.save(doc)
print('running')
while True:
    try:
        for doc in coll.find():
            doc['coord'][0] += 1 * (random.random() - 0.5)
            doc['coord'][1] += 1 * (random.random() - 0.5)
            coll.save(doc)
    except KeyboardInterrupt:
        print('done')
        break
https://dsp.stackexchange.com/questions/62115/2d-convolution-of-image-with-filter-as-successive-1d-convolutions

# 2D convolution of image with filter as successive 1D convolutions
I want to prove (or more precisely, experiment with) the idea that a 2D convolution, as produced by the Matlab conv2() function, between an image I (a 2D matrix) and a kernel (a smaller 2D matrix) can be implemented as successive 1D convolutions, i.e. with the Matlab conv() function and NOT conv2(). Of course some reshapes and matrix multiplications might be needed, but no conv2().

And to make it clear, I am NOT referring to this kind of thing:
s1=[1,0,-1]'
s2=[1 2 1]
diff=conv2(x,y)-conv2(conv2(x,s1),s2)
diff is 0 everywhere
Rather, I want to do something like
conv(conv(x(:), filter1), filter2) ...
• please? Maybe @Fat32 ? – Machupicchu Nov 23 '19 at 15:26
• Anyone please?. – Machupicchu Nov 23 '19 at 17:25
• Your code was not implicit. Care should be taken on borders – Laurent Duval Nov 23 '19 at 17:48
• Math stackexchange may be better. In mathematical terms it is sums of kronecker products of matrices. – mathreadler Nov 23 '19 at 23:00
When a 2D filter $$h[n,m]$$ is separable; i.e., $$h[n,m] = f[n]g[m]$$, then the 2D convolution of an image $$I[n,m]$$ with that filter can be decomposed into 1D convolutions between rows and columns of the image and the 1D filters $$f[n]$$ and $$g[m]$$ respectively.
Let me give you the MATLAB / OCTAVE code; I hope this is what you wanted to show:
clc; clear all; close all;
N1 = 8; % input x[n1,n2] row-count
N2 = 5; % input x[n1,n2] clm-count
M1 = 4; % impulse response h[n1,n2] row-count
M2 = 3; % impulse response h[n1,n2] clm-count
L1 = N1+M1-1; % output row-count
L2 = N2+M2-1; % output clm-count
x = rand(N1,N2); % input signal
f = rand(1,M2); % f[n1] = row vector
g = rand(M1,1); % g[n1] = column vector
h = g*f; % h[n1,n2] = f[n1]*g[n2]
y = zeros(L1,L2); % output signal
% S1 - Implement Separable Convolution
% ------------------------------------
for k1 = 1:N2 % I - Convolve COLUMNS of x[:,k] with g[k]
y(:,k1) = conv(x(:,k1),g); % intermediate output
end
for k2 = 1:L1 % II- Convolve ROWS of yi[k,:] with f[k]
y(k2,:) = conv(y(k2,1:N2),f);
end
% S2 - Matlab conv2() :
% ---------------------
y2 = conv2(x,h); % check for matlab conv2()
% S3 - Display the Results
% ------------------------
figure; imagesc(y - y2); colorbar;
title('The Difference y[n,m] - y2[n,m]');
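The same separability check can be sketched outside MATLAB as well. Here is a small NumPy/SciPy version (the array sizes mirror the script above; the variable names are mine):

```python
import numpy as np
from scipy.signal import convolve, convolve2d

rng = np.random.default_rng(0)
x = rng.random((8, 5))   # input image, N1 x N2
g = rng.random(4)        # column filter g[n1]
f = rng.random(3)        # row filter f[n2]
h = np.outer(g, f)       # separable 2D kernel h[n1, n2] = g[n1] * f[n2]

# full 2D convolution, output (8+4-1) x (5+3-1) = 11 x 7
y2 = convolve2d(x, h)

# two passes of 1D convolution: columns with g, then rows with f
y1 = np.apply_along_axis(lambda col: convolve(col, g), 0, x)
y1 = np.apply_along_axis(lambda row: convolve(row, f), 1, y1)

assert np.allclose(y1, y2)
```

The two results agree to machine precision, which is exactly the point of the separable decomposition: two cheap 1D passes replace one 2D pass.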
• thanks for taking time to answer, great! – Machupicchu Nov 23 '19 at 18:44
• @Machupicchu ok. But it seems to late :-) – Fat32 Nov 23 '19 at 18:45
• no no its always good to have multiple answers – Machupicchu Nov 23 '19 at 18:48
• Yes indeed by Ray Charles – Laurent Duval Nov 23 '19 at 18:55
• I added a comment here since there is no way to directly send messages to members (which I regret), Dear Fat32, I would be very happy I you had a look at this new question of mine: datascience.stackexchange.com/questions/81923/… – Machupicchu Sep 18 at 19:01
If a 2D filter kernel $$K_2$$ is of rank $$0$$ or $$1$$, it can be written as a separable product of two 1D kernels, $$K_1^r$$ and $$K_1^c$$, acting on rows and columns. As such, it can be implemented by 1D convolutions, as long as one properly reshapes the 2D matrices into 1D ones and takes care of "out-of-range" values to avoid wrap-around. For instance, you can pad in every direction by the size of the filter, and make sure the convolution does not add unwanted information.
Assuming that you know that you have a separable 2D filter, the following code does the job. A one-liner would be:
xRowFull = reshape(conv(reshape(reshape( conv(x(:),s1,'same'),nRow,nCol)',nRow*nCol,1),s2,'same'),nRow,nCol)';
And the code is:
% https://dsp.stackexchange.com/questions/62115/2d-convolution-of-image-with-filter-as-successive-1d-convolutions
%% Initialization
clear all
nRow = 16;
nCol = 16;
HalfSizeCentralImageKernel = 1;
x = zeros(nRow,nCol);
x(nRow/2-HalfSizeCentralImageKernel:nRow/2+HalfSizeCentralImageKernel,nCol/2-HalfSizeCentralImageKernel:nCol/2+HalfSizeCentralImageKernel)=rand(2*HalfSizeCentralImageKernel+1);
%% Original 2D version
s1=[1,0,-1]';
s2=[1 2 1];
y = s1*s2;
%% Step by step 2x1D version
xRowFlat1 = x(:);
xRowFlat1FiltCol = conv(xRowFlat1,s1,'same');
xRowFlat2 = (reshape(xRowFlat1FiltCol,nRow,nCol))';
xRowFlat2 = xRowFlat2(:);
xRowFlat2FiltRowFlat = conv(xRowFlat2,s2,'same');
xRowFlatFilt2Row = reshape(xRowFlat2FiltRowFlat,nRow,nCol)';
%% Compact vectorized 1D version
xRowFull = reshape(conv(reshape(reshape( conv(x(:),s1,'same'),nRow,nCol)',nRow*nCol,1),s2,'same'),nRow,nCol)';
%% Display
figure(1);
imagesc(x);
figure(2);
subplot(1,3,1)
imagesc([conv2(x,y,'same')]); xlabel('Original')
subplot(1,3,2)
imagesc(xRowFlatFilt2Row); xlabel('Separable, step by step')
subplot(1,3,3)
imagesc(xRowFull); xlabel('Separable, one-liner')
diff1=conv2(x,y,'same')-conv2(conv2(x,s1,'same'),s2,'same');
disp(['Max error 1: ',num2str(max(abs(diff1(:))))]);
diff2=conv2(x,y,'same')-xRowFlatFilt2Row;
disp(['Max error 2: ',num2str(max(abs(diff2(:))))]);
Here is a crude Matlab code. Can you test it? If OK, I'll send a one-liner (if I can).
nRow = 8;
nCol = 8;
HalfSizeCentralKernel = 1;
x = zeros(nRow,nCol);
x(nRow/2-HalfSizeCentralKernel:nRow/2+HalfSizeCentralKernel,nCol/2-HalfSizeCentralKernel:nCol/2+HalfSizeCentralKernel)=rand(2*HalfSizeCentralKernel+1);
figure(1);
imagesc(x);
% 2D version
s1=[1,0,-1]';
s2=[1 2 1];
y = s1*s2;
diff1=conv2(x,y,'same')-conv2(conv2(x,s1,'same'),s2,'same');
disp(['Max error 1: ',num2str(max(abs(diff1(:))))]);
% 1D version
xRowFlat1 = x(:);
xRowFlat1FiltCol = conv(xRowFlat1,s1,'same');
xRowFlat2 = (reshape(xRowFlat1FiltCol,nRow,nCol))';
xRowFlat2 = xRowFlat2(:);
xRowFlat2FiltRow = conv(xRowFlat2,s2,'same');
xRowFlatFilt2Row = reshape(xRowFlat2FiltRow,nRow,nCol)';
figure(2);
subplot(1,2,1)
imagesc([conv2(x,y,'same')])
subplot(1,2,2)
imagesc(xRowFlatFilt2Row)
• thanks it seems to work, fantastic ! Im now trying to understand why you are using the 'same' conv and how you pad etc – Machupicchu Nov 23 '19 at 18:42
• fantastic! great – Machupicchu Nov 23 '19 at 18:43
• Updated with more details – Laurent Duval Nov 23 '19 at 20:22
• thanks for the great answers! If you can, maybe have a look to a new question, related to Normalized Cross Correlation (NCC). -> dsp.stackexchange.com/questions/62259/… – Machupicchu Nov 29 '19 at 10:03
http://mathhelpforum.com/algebra/108071-parellel-vectors.html

1. ## Parallel Vectors?
c = 5i + 2j
d = -2i - 4j

find u if c + ud is parallel to vector i.

I know the parallel bit is important, but I don't know how to use it. Would it be possible for you to explain it to me?

Also, it's my first time here. Hi!
2. Originally Posted by alibond07
c = 5i + 2j
d = -2i - 4j

find u if c + ud is parallel to vector i.

I know the parallel bit is important, but I don't know how to use it. Would it be possible for you to explain it to me?

Also, it's my first time here. Hi!
A vector parallel to the unit vector i will have zero for its j component.
(5i + 2j) + u(-2i - 4j) = ai + bj
what value of the scalar u will yield the vector ai + bj such that b = 0?
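To make this concrete, the condition can be checked numerically; u = 1/2 below is the value that makes the j component 2 - 4u vanish (a quick illustrative sketch, not from the original thread):

```python
import numpy as np

c = np.array([5, 2])    # c = 5i + 2j
d = np.array([-2, -4])  # d = -2i - 4j
u = 0.5                 # chosen so that the j component 2 - 4u = 0

v = c + u * d           # v = (5 - 2u)i + (2 - 4u)j
assert v[1] == 0        # zero j component: v is parallel to i
# v comes out as [4, 0], i.e. 4i
```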
http://etheses.bham.ac.uk/4273/

eTheses Repository
# The role of oxygen-dependent substances in exercise
Davies, Christopher S. (2013)
Ph.D. thesis, University of Birmingham.
PDF (2749 KB), Accepted Version
## Abstract
This thesis investigated the role of O$$_2$$-dependent substances in mediating the vasodilatation seen following exercise (post-exercise hyperaemia) and in fatigue development. Additionally, we compared young and old subjects to investigate the effects of ageing on both of these phenomena.
Breathing supplementary 40% O$$_2$$ during handgrip exercise at 50% of maximum voluntary contraction had no effect on the magnitude of post-exercise hyperaemia compared to an air-breathing control. Furthermore, aspirin administration did not alter the magnitude of post-exercise hyperaemia or the levels of prostaglandin E metabolites assayed from the forearm venous efflux. Similarly, the magnitude of post-exercise hyperaemia was not affected by aminophylline administration. Collectively, these findings suggest that prostaglandins and adenosine are not obligatory mediators of post-exercise hyperaemia.
Supplementary O$$_2$$ breathed during recovery had no effect on fatigue in a second bout of exercise, or on any of the substances proposed to mediate fatigue, in young subjects. We demonstrated that older subjects showed no change in the magnitude of post-exercise hyperaemia, but they were more fatigue resistant. There was no O$$_2$$-dependence of either post-exercise hyperaemia or fatigue in older subjects.
In conclusion, we have found no evidence of O$$_2$$-dependent mediators in either post-exercise hyperaemia or fatigue.
Type of Work: Ph.D. thesis
Supervisor: Marshall, Janice
College: Colleges (2008 onwards) > College of Medical & Dental Sciences, School of Clinical and Experimental Medicine
Subjects: R Medicine (General); RC1200 Sports Medicine
Institution: University of Birmingham
ID: 4273
This unpublished thesis/dissertation is copyright of the author and/or third parties. The intellectual property rights of the author or third parties in respect of this work are as defined by The Copyright Designs and Patents Act 1988 or as modified by any successor legislation. Any use made of information contained in this thesis/dissertation must be in accordance with that legislation and must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the permission of the copyright holder.
https://exponentiations.com/3-to-the-1st-power

# 3 to the 1st Power
Welcome to 3 to the 1st power, our post about the mathematical operation exponentiation of 3 to the power of 1. If you have been looking for 3 to the first power, or if you have been wondering about 3 exponent 1, then you also have come to the right place. The number 3 is called the base, and the number 1 is called the exponent. In this post we are going to answer the question what is 3 to the 1st power. Keep reading to learn everything about three to the first power.
## What is 3 to the 1st Power
3 to the 1st power is conventionally written as 3¹, with a superscript for the exponent, but the notation using the caret symbol ^ can also be seen frequently: 3^1.

3¹ stands for the mathematical operation of exponentiation of three by the power of one. As the exponent is a positive integer, exponentiation means a repeated multiplication:
3 to the 1st power = $\underbrace{ {\rm 3\hspace{3px} \times\hspace{7px} …\hspace{5px} \times\hspace{3px} 3} }_{\rm 1\hspace{3px} times}$
The exponent of the number 3, here 1, also called index or power, denotes how many times to multiply the base (3).
Thus, we can answer what is 3 to the 1st power as
3 to the power of 1 = 3¹ = 3.
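The "repeated multiplication" definition above is easy to spell out in code (a toy sketch):

```python
# exponentiation by a positive integer exponent is repeated multiplication
base, exponent = 3, 1
result = 1
for _ in range(exponent):
    result *= base

assert result == base ** exponent == 3
```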
3 to the 1st power is an exponentiation which belongs to the category of powers of 3; similar exponentiations on our site in this category include other small powers of 3.
Ahead is more info related to 3 to the power of 1, along with instructions on how to use the search form, located in the sidebar or at the bottom, to look up a number like 3^1.
## 3 to the Power of 1
Reading all of the above, you already know most of what there is to know about 3 to the power of 1, except for its inverse, which is discussed a bit further below in this section.

Using the aforementioned search form you can look up many numbers, including, for instance, 3 to the power of 1, and you will be taken to a result page with relevant posts.

Now, we would like to show you the inverse operation of 3 to the 1st power, (3¹)⁻¹. The inverse is the 1st root of 3¹, and the math goes as follows:
(3¹)⁻¹
= $\sqrt[1]{3^{1}}$
= $3^{1/1}$
= $3^{1}$
= 3
Because the index 1 is not a multiple of 2, i.e. odd, the operation produces only one value: (3¹)⁻¹ = 3.
Make sure to understand that exponentiation is not commutative, which means that 3¹ ≠ 1³, and also note that (3¹)⁻¹ ≠ 3⁻¹, the inverse and the reciprocal of 3¹, respectively.
You already know what 3 to the power of 1 equals, but you may also be interested in learning what 3 to the negative 1st power stands for. Next is the summary of our content.
## Three to the First Power
You have reached the concluding section of three to the first power = 3¹. Three to the first power is, for example, the same as 3 to the power 1 or 3 to the 1 power.
Exponentiations like 3¹ make it easier to write multiplications and to conduct math operations as numbers get either big or small, such as decimal fractions with lots of zeroes.
If you have been looking for 3 power 1, what is 3 to the 1 power, 3 exponent 1 or 1 power of 3, then it’s safe to assume that you have found your answer as well.
If our explanations have been useful to you, then please hit the like button to let your friends know about our site and this post 3 to the 1st power. And don’t forget to bookmark us.
If you would like to learn more about exponentiation, the mathematical operation conducted in 3¹, then check out the articles which you can locate in the header menu of our site.
We appreciate all comments on 3^1, and if you have a question, don't hesitate to fill in the form at the bottom or to send us an email with the subject "what is 3 to the 1st power".
Thanks for visiting 3 to the 1st power.
Posted in Powers of 3
https://math.stackexchange.com/questions/3690609/how-to-translate-this-statement-into-a-mathematical-oneusing-appropriate-quanti/3699299#3699299

# How to translate this statement into a mathematical one (using appropriate quantifiers)?
The statement I'd like to translate into a mathematical one is
"Every American has a dream".
Let $$A$$ and $$D$$ denote the set of all Americans and the set of all dreams, respectively, and $$P(a,d)$$ denote the proposition "American $$a$$ has a dream $$d$$". The mathematically equivalent statement I've deduced is $$\forall a\in A.\exists d\in D.P(a, d)$$
However, I suspect the above statement implies that there exists a single common dream $$d$$ shared by every American such that $$P(a,d)$$ holds true. I would like to know how to rectify this error (if there is one).
• Correct: "forall a there is a d..." does not mean that the d is the same for all a. To state that the d is the same, you have to write "there is a d for all a...". Compare $\forall n \exists m (n < m)$ and $\exists m \forall n (n < m)$. May 25, 2020 at 10:01
1. The verifier of a sentence of the form "$$∀a{∈}A\ ( P(a) )$$" must let the refuter first choose any arbitrary $$a∈A$$ and then verify $$P(a)$$ no matter what $$a∈A$$ was chosen.
2. The verifier of a sentence of the form "$$∃d{∈}D\ ( Q(d) )$$" must first choose some $$d∈D$$ and then verify $$Q(d)$$ for that chosen $$d∈D$$.
In your example, the verifier of "$$∀a{∈}A\ ∃d{∈}D\ ( P(a,d) )$$" must let the refuter make the first move in choosing an $$a∈A$$, and then verify "$$∃d{∈}D\ ( P(a,d) )$$" no matter what $$a$$ was chosen. But since the verifier makes the second move in choosing some $$d∈D$$, the verifier can choose this $$d$$ based on the refuter's first move (i.e. based on $$a$$). That is why "Every American has a dream." corresponds to this sentence.
In contrast, the verifier of "$$∃d{∈}D\ ∀a{∈}A\ ( P(a,d) )$$" must make the first move in choosing some $$d∈D$$, before the refuter makes the second move in choosing an $$a∈A$$. You can easily see that the verifier can win only if there is a single choice of $$d∈D$$ that defeats every possible choice of $$a∈A$$. That is why "All Americans have a common dream." corresponds to this sentence and not the other one.
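The difference between the two quantifier orders can also be checked on a finite toy model (the sets and the relation below are made up for illustration):

```python
# A: Americans, D: dreams, P: the "a has dream d" relation
A = ["alice", "bob"]
D = ["d1", "d2"]
P = {("alice", "d1"), ("bob", "d2")}  # each person has a *different* dream

# forall a in A, exists d in D, P(a,d): each person may pick their own d
forall_exists = all(any((a, d) in P for d in D) for a in A)

# exists d in D, forall a in A, P(a,d): one d must work for everyone
exists_forall = any(all((a, d) in P for a in A) for d in D)

assert forall_exists is True   # every American has a dream
assert exists_forall is False  # but there is no common dream
```

Swapping the quantifiers changes which player moves first in the verification game, and the toy model shows the two sentences really do have different truth values.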
https://www.gerad.ca/fr/events/1935
# Exponential convergence towards consensus for non-symmetric linear first-order systems in finite and infinite dimensions
### Emmanuel Trélat – Sorbonne Université, France
I will first recall some results on how consensus is achieved for well-known classes of systems, like the celebrated Cucker-Smale or Hegselmann-Krause models. When the systems are symmetric, convergence to consensus is classically established by proving, for instance, that the usual variance is an exponentially decreasing Lyapunov function: this is an "$$L^2$$ theory". When the systems are not symmetric, no $$L^2$$ theory existed until now and convergence was proved by means of an "$$L^\infty$$ theory". In this talk I will show how to develop an $$L^2$$ theory by designing an adequately weighted variance, and how to obtain the sharp rate of exponential convergence to consensus for general finite- and infinite-dimensional linear first-order consensus systems. If time allows, I will show applications in which one is interested in controlling voting behaviour in an opinion model.
Biography: Emmanuel Trélat is a full professor at Sorbonne Université in Paris and the director of the Laboratoire Jacques-Louis Lions. His interests range over control theory in finite and infinite dimension, optimal control, stabilization, geometry, and numerical issues. He has been awarded several prizes, among which the Felix Klein Prize of the EMS in 2012 for his achievements on the optimal guidance of Ariane launchers, and he was an invited speaker at the ICM in 2018. He is the current editor-in-chief of the journal ESAIM: COCV (Control, Optimisation and Calculus of Variations).
Organizer: Peter E. Caines
http://www.aanda.org/articles/aa/full_html/2010/07/aa13686-09/aa13686-09.html
A&A, Volume 515, June 2010, article number A6 (15 pp.), section: Astronomical instrumentation
DOI: http://dx.doi.org/10.1051/0004-6361/200913686
Published online: 28 May 2010
A&A 515, A6 (2010)
Example of Gruis, calibrator for VLTI-AMBER
P. Cruzalèbes1 - A. Jorissen2 - S. Sacuto3 - D. Bonneau1
1 - UMR CNRS 6525 H. Fizeau, Univ. de Nice-Sophia Antipolis, Observatoire de la Côte d'Azur, Av. Copernic, 06130 Grasse, France
2 - Institut d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine C.P. 226, Bd du Triomphe, 1050 Bruxelles, Belgium
3 - Institute of Astronomy, University of Vienna, Türkenschanzstrasse 17, 1180 Wien, Austria
Received 17 November 2009 / Accepted 5 February 2010
Abstract
Context. Accurate long-baseline interferometric measurements require careful calibration with reference stars. Small calibrators with accurately known angular diameters ensure that the uncertainty of the true visibility is dominated by the measurement errors.
Aims. We review some indirect methods for estimating angular diameters, using various types of input data. Each diameter estimate, obtained for the test-case calibrator star Gruis, is compared with the value of 2.71 mas found in the Bordé calibrator catalogue published in 2002.
Methods. Angular size estimation from the spectral type, spectral index, in-band magnitude, broadband photometry, and spectrophotometry gives close estimates of the angular diameter, with slightly variable uncertainties. Fits to photometry and spectrophotometry need physical atmosphere models with "plausible" stellar parameters. Angular diameter uncertainties were estimated by means of residual bootstrapping confidence intervals. All numerical results and graphical outputs presented in this paper were obtained using routines developed under PV-WAVE, which compose the modular software suite SPIDAST, created to calibrate and interpret spectroscopic and interferometric measurements, particularly those obtained with VLTI-AMBER.
Results. The final angular diameter estimate of 2.70 mas for Gru, with a 68% confidence interval of 2.65-2.81 mas, is obtained by fitting the MARCS model to the ISO-SWS 2.38-27.5 μm spectrum, with the stellar parameters Teff = 4250 K and z = 0.0 dex.
Key words: stars: fundamental parameters - stars: individual: Gru - techniques: interferometric - instrumentation: interferometers
1 Introduction
Recent improvements in optical long-baseline interferometers require good knowledge of the fundamental parameters and brightness distributions of calibrators. In this paper, we review different methods of angular diameter estimation for a test-case calibrator star and compare the results obtained with the corresponding value found in the calibrator catalogue usually considered as the reference for optical interferometry.
In Sect. 2, we recall some basics of interferometric calibration, and in Sect. 3 we study the influence of the angular diameter uncertainty on the visibility, applied to the case of the uniform-disk model. In Sect. 4, we review the criteria to be fulfilled by a potential calibrator and introduce the calibrator star Gru. In Sect. 5, we recall the distinction between the direct and indirect approaches to angular diameter estimation and present various calibrator catalogues presently available for optical interferometry. In Sect. 6, we give the main characteristics of the most widely used stellar atmosphere models for our study, particularly those of MARCS. In Sect. 7, we apply several methods of angular diameter estimation to the case of Gru, based on: the Morgan-Keenan-Kellman spectral type (Sect. 7.1), the colour index (Sect. 7.2), the in-band magnitude (Sect. 7.3), broadband photometry (Sect. 7.4), and spectrophotometry (Sect. 7.5). In Sect. 8, we discuss the results in terms of diameter uncertainty (Sect. 8.1), fundamental stellar parameters (Sect. 8.2), and atmosphere model parameters (Sect. 8.3). We conclude in Sect. 9, and present the main functionalities of the software tool that we have developed to process, calibrate, and interpret the VLTI-AMBER measurements. The method used to compute the uncertainties is described in Appendix A, the de-reddening process in Appendix B, and the residual bootstrap method in Appendix C.
2 Interferometric calibration
Absolute calibration of long-baseline spectro-interferometric observations of scientific targets (fluxes, visibilities, differential and closure phases) needs simultaneous measurements of calibrator targets, allowing determination of the instrumental response during the observing run (Mozurkewich et al. 1991; van Belle & van Belle 2005; Boden 2003). The true (i.e. calibrated) visibility function is $$V^{\rm true}_{\rm sci} = V^{\rm meas}_{\rm sci} / R_V$$, where $$V^{\rm meas}_{\rm sci}$$ is the measured visibility of the scientific target, and $$R_V$$ the instrumental response (in visibility).
In principle, when we consider the instrument as a linear optical system, observation of a point-like calibrator gives the system response. Thus, the visibility response R_V is simply equal to the measured visibility of the calibrator. Unfortunately, instrumental and atmospheric limitations make the instrument unstable and contribute to destroying this linearity. To get a reliable estimate of the instrumental response during the observing run, scientific and calibrator targets must be observed under similar conditions. With the VLTI-AMBER instrument described by Petrov et al. (2007), it has been shown that the estimator used to measure the fringe visibility also depends on the signal-to-noise ratio (Millour et al. 2008; Tatulli et al. 2007), so that calibrators as bright as their corresponding scientific targets must be found. Most of the time, it is difficult to find unresolved and bright enough calibrators in directions close to a given bright scientific target. To determine the system response, it is preferable to use bright, but non-point-like, calibrators, observed under instrumental conditions similar to those of the scientific targets, rather than dimmer point-like sources observed under different conditions. The price to pay for this choice is the need for an independent estimation of the calibrator brightness distribution (Boden 2007).
If the system response in visibility is given by R_V = V_meas(cal)/V_mod(cal), where V_mod(cal) is the calibrator model visibility, then the true visibility becomes V_cal = V_meas(sci) V_mod(cal)/V_meas(cal). Considering a calibrator with a circularly-symmetric brightness distribution, with angular diameter θ, the model visibility function at wavelength λ, for the sky-projected baselength B, is given by the normalized Hankel transform (of order 0) of the radial brightness distribution, according to the Van Cittert-Zernike theorem (Goodman 1985)
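In practice, this calibration transfer reduces to one division per observable; a minimal sketch (function and variable names are ours, not taken from any instrument pipeline):

```python
def calibrated_visibility(v_sci_meas, v_cal_meas, v_cal_model):
    """True science visibility: the measured science visibility divided by
    the instrumental response R_V = V_meas(cal) / V_mod(cal)."""
    response = v_cal_meas / v_cal_model
    return v_sci_meas / response

# Example: a partially resolved calibrator (model visibility 0.95)
# measured at 0.85, science target measured at 0.40:
print(calibrated_visibility(0.40, 0.85, 0.95))  # 0.447...
```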
V(B, λ) = (2π/M_λ) ∫₀¹ L_λ(r) J₀(π θ B r/λ) r dr,    (1)
where r is the distance from the star centre expressed in radius units (r = 0 towards the disk centre, r = 1 towards the limb), J0 the zeroth-order Bessel function of the first kind, L_λ the monochromatic brightness distribution, hereafter called spectral radiance, i.e. the monochromatic emitted luminous intensity (in W m⁻² µm⁻¹ sr⁻¹), and M_λ the spectral radiant exitance, i.e. the monochromatic emitted luminous flux (in W m⁻² µm⁻¹), obtained by integrating the spectral radiance over the full solid angle of a hemisphere around the emitting area (Malacara & Thompson 2001).
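Numerically, the normalized Hankel transform is a one-dimensional integral; the sketch below (assuming SciPy is available) evaluates it with `scipy.integrate.quad` and checks that a uniform disk (constant L_λ) reduces to the familiar 2J1(x)/x pattern:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

def visibility_hankel(x, radiance=lambda r: 1.0):
    """Normalized Hankel transform (order 0) of a circularly symmetric
    radial brightness profile; x = pi * theta * B / lambda."""
    num, _ = quad(lambda r: radiance(r) * j0(x * r) * r, 0.0, 1.0)
    den, _ = quad(lambda r: radiance(r) * r, 0.0, 1.0)
    return num / den

# For a uniform disk the integral reduces to 2*J1(x)/x:
x = 2.5
print(visibility_hankel(x), 2.0 * j1(x) / x)  # the two values agree
```

The same routine accepts any limb-darkened radial profile through the `radiance` argument.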
3 Effect of the diameter uncertainty
Poor knowledge of the calibrator angular diameter can skew the true visibility estimate. For a small angular-diameter uncertainty σ_θ, the model-visibility absolute uncertainty σ_V is usually computed with the first-order Taylor series expansion of the visibility function, an approximation that becomes increasingly inaccurate for non-linear relations,
σ_V = |∂V/∂θ| σ_θ.    (2)
In the case of the uniform-disk (ud) model, the monochromatic visibility function is V_ud(x) = 2 J1(x)/x, where J1 is the first-order Bessel function of the first kind, and x = π θ B/λ is a dimensionless argument, which can also be expressed as
x ≈ 1.523 × 10⁻² θ[mas] B[m]/λ[µm].    (3)
The first partial derivative of the visibility with respect to the angular diameter transforms Eq. (2) into
σ_V = 2 |J2(x)| σ_θ/θ,    (4)
where J2 is the second-order Bessel function of the first kind.
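Both quantities are straightforward to evaluate; the sketch below (SciPy assumed, reading Eq. (4) as σ_V = 2|J2(x)| σ_θ/θ) computes the uniform-disk visibility and its first-order uncertainty, and locates the maximum of 2|J2(x)|:

```python
import numpy as np
from scipy.special import j1, jv

def v_ud(x):
    """Uniform-disk visibility V = 2*J1(x)/x, with x = pi*theta*B/lambda."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)                 # V(0) = 1 by continuity
    nz = x != 0.0
    out[nz] = 2.0 * j1(x[nz]) / x[nz]
    return out if out.size > 1 else float(out[0])

def sigma_v_ud(x, rel_sigma_theta):
    """First-order error propagation: sigma_V = 2*|J2(x)|*(sigma_theta/theta)."""
    return 2.0 * np.abs(jv(2, x)) * rel_sigma_theta

x = np.linspace(0.0, 20.0, 100001)
print(v_ud(3.8317))                   # ~0: first null of the visibility
print(sigma_v_ud(x, 1.0).max())       # ~0.973, i.e. 2 * 0.4865
```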
Figure 1: Left panel: plot of the uniform-disk visibility function against the argument x. Right panel: plot of the ratio of the model visibility uncertainty to the angular diameter relative uncertainty against x, given by the first-order Taylor series expansion of the visibility.
The left-hand panel of Fig. 1 shows the variation in V_ud against x, while the right-hand panel shows the variation in the ratio σ_V/(σ_θ/θ), deduced from the first-order expansion of the visibility. Evidence that the first-order approximation of the standard deviation is inaccurate can be found particularly at the extrema of the visibility function (the zeros of J2), where this ratio drops to zero although the true uncertainty does not vanish. One can notice that the second-order Taylor expansion deduced from Eq. (A.2),
σ_V² ≈ (∂V/∂θ)² σ_θ² − ¼ (∂²V/∂θ²)² σ_θ⁴,    (5)
gives negative values of the variance at the same points, which is a clear indication that higher-order Taylor expansions would be needed.
Knowing that the amplitude of the first maximum of J2(x) reaches 0.4865 for x ≈ 3.054 (Andrews 1981), we can infer that the visibility uncertainty of the ud-model due to the calibrator diameter uncertainty never exceeds
σ_V^max = 2 × 0.4865 σ_θ/θ ≈ 0.973 σ_θ/θ,    (6)
a maximum value that only depends on the relative uncertainty of the angular diameter. It follows that, if one wants the absolute uncertainty of the science true visibility to be dominated by a given measured visibility error σ_V for any calibrator angular diameter, the relative precision of the estimation of this diameter must be better than σ_V/0.973 ≈ 1.03 σ_V. For example, calibrator angular diameters estimated with relative uncertainties lower than 1% ensure that the science true visibilities are dominated by experimental visibility errors greater than 0.01.
If the relative uncertainty of the model diameter is higher than this limit, one can still find values of the calibrator diameter for which the absolute uncertainty of the science true visibility is dominated by the measurement errors. This can be achieved by numerical inversion of Eq. (4), finding the values of the argument x corresponding to model visibility absolute uncertainties lower than a given value of the measurement error σ_V, for a given diameter relative uncertainty σ_θ/θ. Because of the quasi-periodic behaviour of J2(x), as shown in the right panel of Fig. 1, many sets of diameter values, enclosing each zero of the function, can fulfil this condition. We can then define the value x0, below which any ud-calibrator diameter contributes to the global visibility uncertainty less than the experimental errors, thanks to the inversion of the relation 2 J2(x0) = k, where k = σ_V/(σ_θ/θ). To obtain the corresponding value of the angular diameter θ0, we can use Eq. (3)
θ0 ≈ 65.66 x0 λ[µm]/B[m]  mas.    (7)
For example, if the calibrator angular diameters are estimated with 10% relative uncertainties, while the experimental visibility errors are 0.01 (i.e. k = 0.1), a model visibility error lower than 0.01 is obtained for x < x0 ≈ 0.64. Using Eq. (7), we find that this is achieved for any ud-calibrator smaller than 0.93 mas, with a 100-m baselength interferometer operating at 2.2 µm. Table 1 gives some typical values of θ0 under which σ_V < 0.01, for various diameter relative uncertainties, with various baselengths, also at 2.2 µm.
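This inversion can be sketched with a standard root finder (SciPy's `brentq`; we assume the first-order criterion 2|J2(x0)| σ_θ/θ = σ_V and search below the first maximum of J2), reproducing the 0.93 mas limit quoted above:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

def x0_max(sigma_v, rel_sigma_theta):
    """Largest x below which 2*|J2(x)|*(sigma_theta/theta) < sigma_v,
    found by inverting the first-order error relation on (0, 3.054],
    where J2 rises monotonically to its first maximum."""
    f = lambda x: 2.0 * jv(2, x) * rel_sigma_theta - sigma_v
    return brentq(f, 1e-9, 3.054)

def theta0_mas(x0, baseline_m, wavelength_um):
    """Eq. (7): angular diameter (mas) corresponding to the argument x0."""
    rad_per_mas = np.pi / (180.0 * 3600.0 * 1000.0)
    return x0 * wavelength_um * 1e-6 / (np.pi * baseline_m) / rad_per_mas

x0 = x0_max(sigma_v=0.01, rel_sigma_theta=0.10)
print(theta0_mas(x0, 100.0, 2.2))  # ~0.93 mas
```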
Table 1: Values of the angular diameter θ0 (in mas), under which σ_V < 0.01, at λ = 2.2 µm.
With B = 330 m, λ = 2.2 µm, and σ_θ/θ = 10%, we find that any calibrator smaller than 0.42 mas gives σ_V < 0.02, a result very close to that of van Belle & van Belle (2005), who give 0.45 mas for the same relative uncertainty. Figure 2 shows the baselength dependency of θ0 at wavelengths of 10 (upper line), 2.2, 1.25, and 0.7 µm (lower line) on a log-log scale, with σ_V = 0.01 and σ_θ/θ = 10%.
We can conclude from this short study that the choice of suitable ud-calibrators for long-baseline optical interferometry depends on the ratio k of the absolute measurement error of the visibility σ_V to the calibrator angular size prediction error σ_θ/θ. If k ≥ 0.973, any ud-calibrator is suitable, i.e. contributes less than the measurement error to the global error budget in visibility, whatever its angular diameter uncertainty. If k < 0.973, we find that any ud-calibrator with an angular diameter smaller than the value θ0, such that 2 J2(x0) = k, is suitable.
4 Choosing the calibrators
Choosing calibration targets for a specific scientific programme in a given instrumental configuration is a critical point of the absolute calibration of interferometric measurements. If one wants to determine the visibility of scientific targets with a high degree of accuracy, not only the angular diameters of the calibrators need to be carefully estimated, but also their brightness distributions, which are known to deviate slightly from simple ud-profiles. This implies that suitable calibrators belong to well-known, intensively-studied, and easily-modelled object classes.
As said in Sect. 2, point-like calibrators as bright as their associated scientific targets are ideal interferometric calibrators, which are unfortunately rarely available. Partially resolved sources may also be considered as suitable calibrators, provided that their brightness distribution can be accurately modelled. This excludes irregular and rapid variables, evolved stars, or stellar objects embedded in a complex and varying circumstellar environment involving disks, shells, etc., which are potentially revealed by an infrared excess in the spectral energy distribution (SED).
Figure 2: Log-log plot of the angular diameter θ0 against the sky-projected baselength, for λ = 10 (upper line), 2.2, 1.25, and 0.7 µm (lower line), with σ_V = 0.01 and σ_θ/θ = 10%. θ0 is such that the lack of precision in the size of any uniform-disk calibrator smaller than θ0 has no significant impact upon the final errors.
Since we are concerned with studies of the circumstellar environment and of brightness asymmetries on the surface of evolved giants and supergiants, observed at high angular resolution with VLTI-AMBER in the near infrared (NIR), we consider as "good" calibrators the celestial targets fulfilling the following criteria:
1.
small angular distance to the scientific targets;
2.
spectral type not later than K, with luminosity class III at the most (no supergiant nor intrinsically bright giant);
3.
NIR apparent magnitudes as close as possible to those of the scientific target (Δm ≤ 3, i.e. flux ratio below 16);
4.
angular diameter as small as possible but at least smaller than the scientific target;
5.
no near-infrared excess observed in the spectral energy distribution (SED consistent with a blackbody radiator in the NIR domain);
6.
no evidence for variability identified in the CDS-SIMBAD database;
7.
preferably source unicity, possibly multiplicity with sufficiently distant and/or faint companion(s) not seen in the observation field of the instrument, thus not affecting interferometric measurements;
8.
no evidence for non centro-symmetric geometry.
To choose calibrators during the observation preparation phase, we usually cross-compare the output lists given by many calibrator selector tools: JMMC-SearchCal (Bonneau et al. 2006), MSC-getCal (NASA Exoplanet Science Institute 2008), and ESO-CalVin. The 2MASS catalogue (Skrutskie et al. 2006) gives the NIR magnitudes. The infrared SEDs are extracted from the ISO-SWS database of spectra (Leech et al. 2003), or the IRAS-LRS database (Volk & Cohen 1989), if no ISO-SWS spectrum is available. If multiplicity is suspected, the associated parameters can be found in the CCDM Catalogue (Dommanget & Nys 2002).
The present paper uses the reference giant star Gru as a test case, selected to calibrate the interferometric measurements of the scientific target Gruis, which we observed in Oct. 2007 with the VLTI-AMBER instrument. The target Gru has been used several times as a calibrator for interferometry (Kervella et al. 2004; Wittkowski et al. 2006; Di Folco et al. 2004). The following set of basic information can be found for this star:
• equatorial coordinates (J2000): 22h 06m 06.885s, and ;
• galactic coordinates (J2000): l = , and b = ;
• parallax: 13.20(78) mas (Perryman et al. 1997);
• spectral type: initially classified as M3III by Buscombe (1962), then as K3III since Houk (1978);
• apparent broadband magnitudes gathered in Table 2;
• infrared fluxes at 12, 25, and 60 µm from IRAS (in Jy) (Beichman et al. 1988);
• infrared spectrophotometry: from 2.38 to 45.21 µm with ISO-SWS01 (Sloan et al. 2003);
• angular diameter: 2.71(3) mas (limb-darkened) in the catalogue of calibrator stars for LBSI (Bordé et al. 2002), revised to 2.75(3) mas by Di Folco et al. (2004) from observations with VLTI/VINCI.
Note the use of a concise notation for the uncertainties, e.g. 2.71(3) for the angular diameter, instead of the standard notation 2.71 ± 0.03. It must be understood that the number in parentheses in the concise notation is the numerical value of the standard uncertainty referred to the corresponding last digits of the quoted result. By extension, a value like 2.71 +0.04/−0.02 in the standard notation becomes 2.71(+4/−2) in the concise notation. Unless otherwise stated, we use the concise notation to report the uncertainties throughout the present paper. Let us also add that symmetric and nonsymmetric uncertainties are computed from confidence intervals with 68% confidence level, corresponding to 1σ errors for the normal distribution, as stated in Appendix A. Figures 3 and 4 respectively show the broadband absolute photometry and the ISO-SWS spectrophotometry of Gru deduced from the magnitude and flux measurements, compared with a 4250-K blackbody radiator.
Figure 3: Log-log plot of the broadband absolute photometry (in W m⁻² µm⁻¹) of Gru, deduced from the JP11-UBVRI and the 2MASS-JHK magnitudes, and from the IRAS flux measurements at 12, 25, and 60 µm. The thin curve is the spectrum of a 4250-K blackbody radiator with an angular diameter of 2.7 mas, given for comparison. The lengths of the short vertical bars are the values of the actual flux errors.
Figure 4: Log-log plot of the high-resolution processed SWS01 spectrum (in W m⁻² µm⁻¹) of Gru from the NASA/IPAC Infrared Data Archive. The thick curve is the spectrum of a 4250-K blackbody radiator with an angular diameter of 2.7 mas, given for comparison.
5 Direct and indirect approaches
To determine the stellar angular diameters, two different methods are commonly used, classified as direct and indirect by Fracassini et al. (1981). The direct method consists in linking high angular- or spectral-resolution observations of some physical phenomena directly with the stellar disk geometry. Unless the instrumental response is known with extreme accuracy, which is extremely difficult to achieve in the presence of (terrestrial) atmospheric turbulence, accurate estimation of the calibrator angular diameters with the direct method needs very careful calibration against other unresolved or extremely well-known calibrators. In this case, the problem can be solved thanks to global calibrating strategies (Meisner 2008; Richichi et al. 2009).
The indirect method is based on the luminosity formula L = 4π R² σ T_eff⁴, where R is the linear radius. High-fidelity SED templates (Boden 2007) or stellar atmosphere models can be used to provide homogeneous diameter estimates. Because of the very small number of existing absolute primary standards (Cohen et al. 1992), indirect diameter estimation needs to beware of the calibration of the absolute flux, hence of the effective temperature.
The calibrator catalogues of Bordé et al. (2002, hereafter B02), Mérand et al. (2005), and van Belle et al. (2008) use this method, the first two based on the previous absolute spectral calibration works of Cohen et al. (1999), the latter based on the works of Pickles (1998). The angular diameter estimates contained in the calibrator catalogue for VLTI-MIDI (MCC) are also inferred from the indirect method, fitting global photometric measurements with stellar atmosphere models, giving diameter uncertainties within 5% (Verhoelst 2005).
The compilation of all stellar diameter values published in the literature has been carried out to build the CADARS (Pasinetti Fracassini et al. 2001; Fracassini et al. 1981) and the CHARM/CHARM2 (Richichi & Percheron 2002; Richichi et al. 2005) catalogues. Although this approach seems attractive, because it gives the impression of providing "reliable" and well-controlled diameters, a sharper analysis of the data shows that these catalogues are intrinsically heterogeneous, with a precision rarely reaching 5%.
The studies presented in the present paper follow the indirect method of estimating the angular diameters of the interferometric calibrators, comparing the results obtained with various observations: diameter from the spectral type, from the colour index, from the broadband infrared magnitude, from the Johnson photometry, and from the spectral energy distribution. We especially focus attention on determining diameter uncertainties.
6 Model atmospheres
Thanks to the considerable progress made in modelling the stellar atmospheres, extensive grids of synthetic fluxes and spectra are now available. To get a summary of the existing synthetic spectra, one can look, for example, at Carrasco's web page. Among all the stellar atmosphere grids available, we should particularly mention: the ATLAS models (Castelli & Kurucz 2003; Kurucz 1979), the PHOENIX stellar and planetary atmosphere code (Hauschildt 1992; Brott & Hauschildt 2005), and the MARCS stellar atmosphere models (Gustafsson et al. 1975, 2008). These codes have been compared by Kucinskas et al. (2005) for the late-type giants, and Meléndez et al. (2008) have shown the very good agreement between them. Concerning MARCS, Decin et al. (2000) have studied the influence of various stellar parameters on the synthetic spectra.
Because the MARCS code is particularly suitable for cool stars (Gustafsson et al. 2003), we naturally opt to use it to model the atmosphere of Gru. Detailed information about the models can be found on the MARCS web site. The library supplies high-resolution energy fluxes for a wide grid of spherical atmospheric models, obtained with effective temperatures (step 100 K or 250 K), surface gravities (step 0.5 in log g), metallicities (with variable step from 1.0 to 0.25 dex), stellar masses of 0.5, 1.0, 2.0, and 5.0 M_sun, and two possible microturbulent velocities. Figure 5 shows the high-resolution synthetic spectral radiant exitance of a typical K3III star, with the set of parameters given in the caption of Fig. 5, computed with the spherical MARCS model.
Figure 5: High-resolution (R = 20 000) MARCS synthetic spectral radiant exitance (in W m⁻² µm⁻¹), in the K band (2.0 to 2.35 µm), obtained with T_eff = 4250 K and z = 0.0 dex.
Figure 6: Median-resolution (R = 1000) TURBOSPECTRUM synthetic radiance data, obtained with the same model parameters as for Fig. 5. Upper left panel: spectral distribution of the central radiance. Upper right panel: spectral distribution of the radiance normalized to the centre, for r = 0.345 (upper curve), 0.515, 0.631, 0.720, 0.791, 0.848, 0.883, 0.922, 0.952, 0.974, 0.990, and 0.998 (lower curve). Lower left panel: normalized radiance profiles, for the wavelengths 2.502 (upper curve), 2.364, 2.226, 2.088, 1.950, 1.812, 1.674, 1.536, and 1.4 µm (lower curve). Lower right panel: partial derivatives of the normalized radiance profiles with respect to r. The dashed vertical line gives the median value (0.991) of the inflexion-point position (see text for details).
Figure 6 shows the synthetic spectral radiance, obtained with the same set of physical parameters using the TURBOSPECTRUM code (Alvarez & Plez 1998), with a spectral step of 20 Å. In the upper left panel, the radiance spectral distribution at the disk centre is shown. The upper right panel shows the radiance normalized to the centre, for various values of the distance r from the star centre (expressed in photospheric radius units). The model reproduces the change from absorption (on the disk) to emission (just beyond the continuum limb) of the first-overtone ro-vibrational CO band at 2.3 µm, also seen in near-infrared solar observations (Prasad 1998). The lower left panel shows the normalized radiance profiles for various wavelengths. The position of the inflexion point gives the wavelength-independent Rosseland-to-limb-darkened conversion factor C = D(τ_Ross = 1)/D_out, where D_out is the model outermost linear diameter (Wittkowski et al. 2004). For a discussion of the different definitions of the stellar radius, one can refer to Baschek et al. (1991). The lower right panel shows the first partial derivatives with respect to r of the normalized radiance, against the viewing-angle cosine µ. The median value, 0.991, of the inflexion-point position is very close to the value predicted by the MARCS code.
For comparison purposes, we use the Planck and the Engelke (Engelke 1992; Marengo 2000) formulae. Representing the simplest way to model a stellar flux, the Planck function describes the exitance of a blackbody radiator with temperature T. Improving upon the blackbody description of the cool-star infrared emission by incorporating empirical corrections for the main atmospheric effects, the Engelke function is obtained by substituting T with a wavelength-dependent brightness temperature T_b(λ, T_eff) in the expression of the Planck formula (T in K and λ in µm). Being an analytical approximation of the 2-60 µm continuum spectrum for giants and dwarfs, the Engelke function is based on the scaling of a semi-empirical plane-parallel solar atmospheric exitance profile for various effective temperatures (Decin & Eriksson 2007).
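For illustration, the sketch below compares the two continua, assuming the commonly quoted Engelke brightness-temperature form T_b = 0.738 T_eff (1 + 79450/(λ T_eff))^0.182, with λ in µm; the exact constants should be checked against Engelke (1992):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_exitance(wl_um, t_k):
    """Blackbody spectral radiant exitance pi*B_lambda (W m^-2 m^-1)."""
    wl = wl_um * 1e-6
    return np.pi * 2.0 * H * C**2 / wl**5 / np.expm1(H * C / (wl * KB * t_k))

def engelke_exitance(wl_um, t_eff):
    """Engelke-style approximation: the Planck function evaluated at an
    empirical, wavelength-dependent brightness temperature (assumed form)."""
    t_b = 0.738 * t_eff * (1.0 + 79450.0 / (wl_um * t_eff)) ** 0.182
    return planck_exitance(wl_um, t_b)

# Ratio of the Engelke to the Planck continuum at 2.2 um for a 4250 K giant:
print(engelke_exitance(2.2, 4250.0) / planck_exitance(2.2, 4250.0))
```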
7 Diameter estimation
Among all the indirect approaches used to estimate the angular diameter, we compare now some of the most widely used methods.
7.1 From the spectral type
The stellar fundamental parameters mass M, linear radius R, and absolute luminosity L are directly related to the stellar atmospheric parameters effective temperature T_eff and surface gravity g, according to the logarithmic formulae (Smalley 2005; Straizys & Kuriliene 1981):
log g = log(M/M_sun) − 2 log(R/R_sun) + log g_sun,    (8)
log(L/L_sun) = 2 log(R/R_sun) + 4 log(T_eff/T_eff,sun),    (9)
M_bol = M_bol,sun − 2.5 log(L/L_sun).    (10)
This uses the solar parameter values T_eff,sun = 5777 K (Smalley 2005), log g_sun = 4.4374(5) (Gray 2005), and M_bol,sun = 4.74, deduced from the solar luminosity, used as the zero point of the absolute bolometric magnitude scale (Amsler et al. 2008).
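Assuming the standard forms of Eqs. (8)-(10), i.e. log g = log(M/M_sun) − 2 log(R/R_sun) + log g_sun, log(L/L_sun) = 2 log(R/R_sun) + 4 log(T_eff/T_eff,sun), and M_bol = M_bol,sun − 2.5 log(L/L_sun), the chain from (M, R, T_eff) to (log g, L, M_bol) can be sketched as:

```python
import math

LOG_G_SUN = 4.4374   # cgs, Gray (2005)
TEFF_SUN = 5777.0    # K, Smalley (2005)
MBOL_SUN = 4.74      # zero point of the absolute bolometric magnitude scale

def log_g(mass_msun, radius_rsun):
    """log g = log(M/Msun) - 2 log(R/Rsun) + log g_sun."""
    return math.log10(mass_msun) - 2.0 * math.log10(radius_rsun) + LOG_G_SUN

def log_luminosity(radius_rsun, teff_k):
    """log(L/Lsun) = 2 log(R/Rsun) + 4 log(Teff/Teff_sun)."""
    return 2.0 * math.log10(radius_rsun) + 4.0 * math.log10(teff_k / TEFF_SUN)

def m_bol(radius_rsun, teff_k):
    """Absolute bolometric magnitude from the luminosity."""
    return MBOL_SUN - 2.5 * log_luminosity(radius_rsun, teff_k)

# Sanity check with solar values:
print(log_g(1.0, 1.0), m_bol(1.0, TEFF_SUN))  # 4.4374 and 4.74
```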
To estimate the effective temperature and the luminosity from the Morgan-Keenan-Kellman (MKK) type, de Jager & Nieuwenhuijzen (1987) introduce the continuous variables s (linked to the spectral class) and b (linked to the luminosity class) and derive mathematical expressions of log T_eff and log(L/L_sun) against s and b, with 1σ uncertainties of 0.021 and 0.164, respectively.
For a K3III star, with s = 5.8 and b = 3.0, the two-dimensional B-spline interpolation of the tables of de Jager & Nieuwenhuijzen (1987) gives the effective temperature and the luminosity, so that Eq. (9) gives the linear radius. To avoid the bias on the distance appearing from the inversion of the parallax (Luri & Arenou 1997; Brown et al. 1997), we deduce the angular diameter θ from the linear radius R and from the parallax π (θ and π in the same angular units), according to the relation (Allende Prieto 2001)
θ = 9.3009 × 10⁻³ (R/R_sun) π,    (11)
based on the latest value of the angular diameter of the Sun seen at 1 pc, 9.3009 mas (Amsler et al. 2008). Combining Eqs. (11) and (9) leads to a logarithmic variant of the formula giving the effective temperature (in K)
log T_eff = 2.7461 + 0.25 log(L/L_sun) + 0.5 log(π/θ).    (12)
If the relative uncertainty on the radius is higher than 20%, we follow the confidence interval transformation principle (CITP), described in Appendix A, to get a rough estimate of the uncertainty on the diameter. Using the radius derived above and the parallax (13.20(78) mas), we obtain the corresponding angular diameter from Eq. (11).
The angular diameter estimate given by this method clearly underestimates the B02 value (2.71 mas). An incorrect parallax value given by Hipparcos cannot be suspected, considering the relative proximity of Gru, located at 76(4) pc. Slight errors in determining the luminosity class could be a more likely cause of bias in diameter estimation. For Gru, we find that a somewhat brighter luminosity class would be more adequate, giving a diameter estimate closer to the B02 value.
7.2 From the colour index
Because accurate stellar classification is a very difficult challenge leading to potential misclassifications, other parameters must be used to investigate the relation between the stellar temperature and the angular size. Being relatively independent of stellar gravities and abundances, the NIR colours are very good temperature indicators (Bell & Gustafsson 1989). For cool stars, the V−K colour index is also known to be the most appropriate parameter for representing the apparent bolometric flux, almost independently of their luminosity class (di Benedetto 1993; Johnson 1966). The empirical derivation of the angular sizes from the colour indices has been studied by many authors (van Belle 1999; di Benedetto 1998; Groenewegen 2004), leading to different relations. For our study, we use the following relations proposed by van Belle et al. (1999), particularly suitable for late-type giants and supergiants
(13) (14)
The average standard deviations are 250 K on T_eff and 30% on R. One of the major difficulties with this method is the correction of the colour index for the interstellar absorption. Appendix B briefly describes the de-reddening process used in our study. Table 2 gives the results of the correction for the interstellar extinction in the Johnson and in the 2MASS bands. The UBVRI magnitudes come from the JP11 Catalogue (Morel & Magnenat 1978). Because data precision may vary significantly (Nagy & Hill 1980), a conservative value of 0.05 mag has been arbitrarily chosen as the uncertainty on each magnitude. The JHK magnitudes and uncertainties are taken from the 2MASS Catalogue (Cutri et al. 2003).
Table 2: Broadband photometry of Gru and reddening.
Using the corrected (intrinsic) V−K colour index 3.16(29) deduced from Table 2, we can infer from Eqs. (13) and (14) that the effective temperature of Gru is 4247(250) K, and the linear radius is 26.6(80) R_sun. Since the linear radius relative uncertainty is 30%, we estimate the angular diameter uncertainty range according to the CITP.
As a result, with the parallax 13.20(78) mas, the V−K angular diameter is 3.3(10) mas, slightly greater than the B02 value of 2.71 mas. A V−K value of 2.92 would give an angular diameter estimate closer to the B02 one. Unfortunately, the high level of the final uncertainties prevents identifying the most likely source of bias: errors on the input magnitudes, the de-reddening process, or the diameter estimation method itself.
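The conversion from linear radius and parallax to angular diameter (Eq. (11), θ = 9.3009×10⁻³ R π, with θ and π in mas and R in solar radii) can be sketched as follows; the interval propagation is only a crude stand-in for the CITP of Appendix A:

```python
THETA_SUN_1PC_MAS = 9.3009  # angular diameter of the Sun seen at 1 pc (mas)

def theta_mas(radius_rsun, parallax_mas):
    """Eq. (11): theta = 9.3009e-3 * R * parallax (theta, parallax in mas)."""
    return 1e-3 * THETA_SUN_1PC_MAS * radius_rsun * parallax_mas

def theta_interval(r, sig_r, plx, sig_plx):
    """Crude 68% interval: evaluate Eq. (11) at the bounds of both inputs."""
    corners = [theta_mas(r + sr, plx + sp)
               for sr in (-sig_r, sig_r) for sp in (-sig_plx, sig_plx)]
    return min(corners), theta_mas(r, plx), max(corners)

# V-K estimate: R = 26.6(80) Rsun, parallax = 13.20(78) mas
lo, mid, hi = theta_interval(26.6, 8.0, 13.20, 0.78)
print(round(mid, 2))  # 3.27 mas, consistent with the quoted 3.3(10) mas
```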
The angular diameter estimation given by the JMMC-SearchCal tool is, for bright objects (Bonneau et al. 2006), based on the study undertaken by Delfosse (2004), where a least-square polynomial fit of a distance-independent diameter estimator against each de-reddened colour index CI is achieved. Introducing the distance modulus in Eq. (11), where m_V and M_V are the apparent and the absolute stellar magnitudes in V, one can define the estimator by
log θ₀ = log θ + 0.2 m_V,    (15)
for θ in mas. Among the empirical relations between θ₀ and each colour index, the highest accuracies on the angular diameter given by Eq. (15) are obtained with the three colour indices B−V, V−R (10%), and V−K (7%), each within its own validity domain. Unlike the classical methods of angular diameter estimation from the colour index, such as the method of van Belle et al. (1999), which needs a parallax estimate in addition to magnitude measurements in two bands, Bonneau's method only needs photometric data, more precisely the apparent V magnitude and the colour indices. With B−V = 1.36(25), V−R = 0.99(19), and V−K = 3.16(21), the corresponding angular diameter estimates of Gru are 3.32, 2.97, and 3.01 mas, respectively. Although this method gives coherent diameter estimates within 11%, which confirms that SearchCal considers Gru as a suitable calibrator for interferometry, it also overestimates the B02 value of 2.71 mas, especially using the B−V colour index. For a K3 giant, the fiducial Johnson colours taken from the spectral type-luminosity class-colour relations given by Bonneau et al. (2006) would be: B−V = 1.27, V−R = 0.98, and V−K = 3.01. With these colour indices, the angular diameter estimates would be close to the B02 value. At least 3 causes of bias may be suspected:
1.
Decreasing the corrected B magnitude from 5.79 to 5.70 would be sufficient to lower the B−V angular diameter estimate to 2.94 mas, so that the diameter estimates in B−V, V−R, and V−K would stay within 2%. Thus, a slight overestimate of the B magnitude of Gru in the JP11 catalogue may be suspected.
2.
On the other hand, tests of the de-reddening procedure show that, even if we artificially increase the visual extinction from 0.04 to an unrealistically high value of 2.0 mag, the angular diameter derived from B−V would decrease from 3.32 to barely 3.30 mas, while the V−R and the V−K diameters would get closer to the B02 value, respectively from 2.97 to 2.65 mas, and from 3.01 to 2.84 mas.
3.
Since the intrinsic B-V colour index of Gru (1.36) is slightly larger than the upper limit (1.30) of the validity domain of the polynomial fit, it is finally not surprising that the method gives an incorrect diameter estimate from B-V in this case.
7.3 From the in-band magnitude
The two methods for estimating the stellar diameter presented above are based on statistical relations and do not use any photospheric model. On the contrary, the methods presented in the following sections explicitly need photospheric models. As first shown by Blackwell & Shallis (1977), the photometric angular diameter in a spectral band can be estimated thanks to the relation θ = 2 (f/M)^1/2 (θ in radians), where f and M are the received and emergent mean fluxes in the considered band (both in W m⁻² µm⁻¹). The angular diameters derived with this method, known as the infra-red flux method (IRFM), are generally accurate to between 2 and 3% (Blackwell et al. 1990), depending not only on the fidelity of the atmospheric models used in the calibration, but also on the uncertainty in the absolute flux determination.
The last column of Table 2 lists the received absolute fluxes deduced from the measured de-reddened in-band magnitudes m₀, through f = F₀ 10^(−0.4 m₀), where F₀ is the zero-magnitude flux taken from Bessell et al. (1998) for UBVRI, with 2% uncertainties (Colina et al. 1996), and from Cohen et al. (2003) for JHK. For in-band corrected magnitudes with relative uncertainties exceeding 20%, absolute flux uncertainties are computed according to the CITP (see Eq. (A.3)).
If T(λ) is the transmission profile of the considered filter, normalized to 1.0 at its maximum, one can define the in-band effective wavelength and width as (Fiorucci & Munari 2003)
λ_e = ∫ λ T(λ) M_λ dλ / ∫ T(λ) M_λ dλ,    (16)
W_e = ∫ T(λ) M_λ dλ / M_λe,    (17)
so that the band emergent mean flux can be written as
M = (1/W₀) ∫ T(λ) M_λ dλ,    (18)
where W₀ = ∫ T(λ) dλ is the equivalent width of the band transmission profile. The W₀ values are enclosed in square brackets in Table 2. Introducing the band received mean flux
f = (1/W₀) ∫ T(λ) f_λ dλ,    (19)
the in-band angular diameter is given by θ = 2 (f/M)^1/2. Depending on the model used, the effective band parameters λ_e and W_e, presented in Table 3 for a K2-type spectrum (4380 K) and for a 4250 K blackbody spectrum, are extracted from the Asiago Database on Photometric Systems. Table 4 gives the in-band angular diameters using the Planck, the Engelke, and the MARCS synthetic spectra with the same temperature of 4250 K, integrated in the 2MASS J, H, and Ks spectral bands. When the absolute flux uncertainties are greater than 20%, we compute the angular diameter uncertainties according to the CITP.
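A self-consistency sketch of the in-band relation θ = 2 (f/M)^1/2, assuming a blackbody emergent flux instead of the MARCS spectrum and hypothetical numbers rather than the Table 2 measurements:

```python
import math

H = 6.62607015e-34; C = 2.99792458e8; KB = 1.380649e-23
MAS_PER_RAD = 180.0 * 3600.0 * 1000.0 / math.pi

def planck_exitance(wl_um, t_k):
    """Blackbody spectral radiant exitance pi*B_lambda (W m^-2 m^-1)."""
    wl = wl_um * 1e-6
    return math.pi * 2.0 * H * C**2 / wl**5 / math.expm1(H * C / (wl * KB * t_k))

def theta_irfm_mas(f_received, m_emergent):
    """Photometric angular diameter: theta = 2*sqrt(f/M), converted to mas."""
    return 2.0 * math.sqrt(f_received / m_emergent) * MAS_PER_RAD

# Self-consistency: a 4250 K blackbody of 2.7 mas observed around 2.16 um
theta_true_rad = 2.7 / MAS_PER_RAD
f = (theta_true_rad / 2.0) ** 2 * planck_exitance(2.16, 4250.0)
print(theta_irfm_mas(f, planck_exitance(2.16, 4250.0)))  # recovers 2.7 mas
```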
Table 3: Effective wavelength and bandwidth (in square brackets) in each band (both in µm), for a K2 spectral-type spectrum and a 4250 K blackbody spectrum.
Table 4: Photometric angular diameters (in mas) obtained with various synthetic spectra (with T = 4250 K) in the 2MASS near-infrared bands.
Given for comparison, the overestimated angular diameters obtained with the Planck spectrum confirm that the stellar photospheres may deviate noticeably from simple blackbodies. Similarly, the underestimated J-band angular diameter derived from the Engelke spectrum confirms that the Engelke analytic approximation is only valid for wavelengths longer than 2 µm. Finally, the MARCS synthetic spectrum with the parameters quoted above yields angular diameters in J, H, and Ks that are close to the B02 value of 2.71 mas, and with less dispersion.
7.4 From the broadband photometry
The IRFM method, described in Sect. 7.3, gives a different angular-diameter estimate for each spectral band in which the model spectrum is integrated. To get a unique estimate of the angular diameter, taking the global broadband photometry into account (as shown for example in Fig. 3), the use of fitting techniques is necessary. The most widely used is based on χ² minimization.
If σᵢ is the measurement error of the mean flux Fᵢ received in spectral band i, and Mᵢ the emergent mean flux in the same band, the best-fit angular diameter corresponds to the minimum of the one-parameter (θ) nonlinear χ² function defined by
χ²(θ) = Σᵢ₌₁ᴺ [Fᵢ − (θ/2)² Mᵢ]²/σᵢ²  (θ in radians),    (20)
where N is the total number of spectral bands used to build the global photometry. To find the minimum of the χ² function, we use the gradient-expansion algorithm, which combines the features of the gradient search with the method of linearizing the fitting function (Bevington & Robinson 1992), very similar to the classical Levenberg-Marquardt algorithm (Levenberg 1944; Marquardt 1963). Figure 7 shows an example of χ² against the angular diameter obtained when fitting the MARCS model on the ISO SWS data, as described in Sect. 7.5, using the gradient-expansion algorithm. First used by Cohen et al. (1992) for the absolute calibration of broad- and narrow-band infrared photometry, based upon the Kurucz stellar models of Vega and Sirius, this method has led to the construction of a self-consistent, all-sky network of over 430 infrared radiometric calibrators (Cohen et al. 1999), upon which the further works of B02 and Mérand et al. (2005) are based. Estimating the stellar angular diameters through photometric modelling is also used in the MSC-getCal Interferometric Observation Planning Tool Suite, which relies on the Planck blackbody SED, parameterized by its effective temperature and bolometric flux (see the fbol routine in the reference manual available online).
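Since, for a fixed emergent spectrum, θ enters the model fluxes only through the (θ/2)² scaling, the one-parameter minimization can be sketched with a bounded scalar optimizer instead of the gradient-expansion algorithm (SciPy assumed; synthetic fluxes, not the actual photometry of the star):

```python
import numpy as np
from scipy.optimize import minimize_scalar

MAS_PER_RAD = 180.0 * 3600.0 * 1000.0 / np.pi

def chi2(theta_mas, f_obs, sig_obs, m_model):
    """Eq. (20)-style one-parameter chi^2 between received fluxes and the
    model emergent fluxes scaled by the solid-angle factor (theta/2)^2."""
    scale = (theta_mas / MAS_PER_RAD / 2.0) ** 2
    return np.sum(((f_obs - scale * m_model) / sig_obs) ** 2)

# Synthetic test: generate band fluxes for theta = 2.7 mas and refit
rng = np.random.default_rng(0)
m_model = np.array([3.1e7, 2.4e7, 1.6e7])          # emergent band fluxes (arbitrary)
scale_true = (2.7 / MAS_PER_RAD / 2.0) ** 2
f_obs = scale_true * m_model * (1.0 + 0.01 * rng.standard_normal(3))
sig = 0.01 * f_obs
fit = minimize_scalar(chi2, bounds=(0.1, 10.0), method="bounded",
                      args=(f_obs, sig, m_model))
print(fit.x)  # best-fit angular diameter, close to 2.7 mas
```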
Figure 7: Plot of χ² against the angular diameter (in mas) obtained by fitting the appropriate MARCS model on the SWS data (see Sect. 7.5), using the gradient-expansion algorithm. The vertical dotted line gives the position of the best-fit parameter.
Table 5: Angular diameters obtained by fitting various models (with Teff = 4250 K) to visible and/or NIR photometric data.
Table 5 gives the results of the fitting process for the visible and/or NIR photometry (given in Table 2), with the Planck, the Engelke (suitable for infrared wavelengths only), and the MARCS models. Since the blackbody model ignores line-blanketing effects in the near-UV (Allende Prieto & Lambert 2000), as seen in Fig. 3, the Johnson-U flux is not considered when fitting the Planck model. The best-fit angular diameters correspond to the minimum values of the χ² function. To compare the values obtained for data samples with different sizes, it is convenient to use the F2 goodness-of-fit parameter defined as (Kovalevsky & Seidelmann 2004)
F2 = \sqrt{\frac{9\nu}{2}} \left[ \left( \frac{\chi^2}{\nu} \right)^{1/3} + \frac{2}{9\nu} - 1 \right] \quad (21)
where ν is the number of degrees of freedom, equal to N−1 in our case (1 parameter). When ν gets larger than 20, F2 tends to be normally distributed with zero mean and unit standard deviation. Bad fits correspond to F2 values higher than 3 (especially after removing outliers), while abnormally good fits correspond to high negative values (Jancart et al. 2005). To identify the extreme outliers, we use the upper and lower outer fences, defined as Q3 + 3·IQR and Q1 − 3·IQR, where IQR = Q3 − Q1 is the interquartile range, and Q1 and Q3 are the first and third quartiles, respectively (Zhang et al. 2004).
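The F2 computation and the outer-fence outlier test can be sketched as follows; the factor 3 on the IQR is the conventional choice for outer fences and is assumed here:

```python
import numpy as np

def goodness_of_fit_f2(chi2, nu):
    """F2 indicator: a Wilson-Hilferty-type transform of chi^2 that is
    approximately standard-normal for large nu; F2 > 3 flags a bad fit."""
    return np.sqrt(9.0 * nu / 2.0) * ((chi2 / nu) ** (1.0 / 3.0)
                                      + 2.0 / (9.0 * nu) - 1.0)

def outer_fences(x):
    """Tukey outer fences Q1 - 3*IQR and Q3 + 3*IQR (conventional factor 3
    assumed); points outside them are extreme outliers."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return q1 - 3.0 * iqr, q3 + 3.0 * iqr
```

A fit with χ² close to ν yields F2 near zero, while χ² much larger than ν pushes F2 well above 3.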
As underlined by Press et al. (2007), although χ² minimization is a useful way of estimating the parameters, the formal covariance matrix of the output parameters has a clear quantitative interpretation only if the measurement uncertainties are normally distributed. To derive robust estimates of the model parameter uncertainties, we used the confidence limits of the fitted parameters given by the bootstrap method (Efron 1979, 1982). Rather than resampling the individual observations with replacement, we use the method of residual resampling, more relevant for regression, as described in Appendix C.
In Table 5, the 68% confidence interval limits of the best-fit angular diameter are determined by bootstrap, with 1000 resampling loops, only for data sets containing more than 5 photometric bands. Although the angular diameters listed in Table 5 (obtained by fitting model fluxes on photometric measurements) are very close to those obtained from the IRFM method, we did not consider the former results as very robust, considering the small number of photometric bands used for the fits.
7.5 From the spectrophotometry
Fitting atmospheric models on sparse photometric data may result in large uncertainties on the angular diameter. To decrease them significantly, larger data sets are needed. The observational data for this section consist of spectrophotometric measurements obtained with ISO-SWS (de Graauw et al. 1996). The Gru spectrum shown in Fig. 4, extracted from the NASA/IPAC Infrared Science Archive, was obtained in the SWS01 observing mode (low-resolution full grating scan, on-target time = 1140 s), which covers the entire 2.4-45.4 μm SWS spectral range, with a variable spectral resolution (Table 6). The SWS AOT-1 spectra, subdivided into wavelength segments (Leech et al. 2003), have been processed and renormalized with the post-pipeline algorithm referred to as the swsmake code (Sloan et al. 2003). The spectral characteristics of the SWS AOT-1 bands and their 1σ photometric accuracies given in Table 6 were deduced from Leech et al. (2003) and Lorente (1998). Since the spectrum of Gru is very noisy at wavelengths longer than 27.5 μm, as shown in Fig. 4, probably because of calibration problems, we did not use the bands 3E and 4 for fitting the models on the ISO-SWS data.
Table 6: Spectral characteristics of the SWS AOT-1 bands and their photometric accuracy.
Table 7: Angular diameters obtained by fitting various models (with Teff = 4250 K) to the 2.38-27.5 μm SWS spectrum.
Table 7 gives the results of the fitting process of the SWS 2.38-27.5 μm spectrum with the Planck and Engelke models (both with a temperature of 4250 K) and with the K3III MARCS model, presented in Sect. 6. The confidence interval limits were estimated using residual-bootstrap resampling for the 68% confidence level. The agreement between the angular diameter obtained by spectrophotometry fitting with the MARCS model and the B02 value, obtained by fitting with the Kurucz model, reflects the excellent agreement between the two models (Meléndez et al. 2008). Deriving the angular diameters from the fit of the Engelke function on the SWS spectra (extended to 45.2 μm), Heras et al. (2002) give an overestimated value of the angular diameter (2.82(21) mas, compared to 2.71(3) from B02). Figure 8 shows a typical example of the histogram of the residual-bootstrap estimates of the best-fit angular diameter, obtained with the MARCS model on the SWS AOT-1 data, where one can see that the resulting distribution of angular diameters is notably asymmetric.
Figure 8: Histogram of the angular diameters estimated by residual bootstrapping when fitting the MARCS model on the SWS spectrum.
8 Discussion
8.1 Diameter uncertainty
Figure 9: Estimated sizes of Gru with each method. The horizontal dotted line is the B02 value (2.71 mas). Estimates #1 and 2 are from the K3III spectral type using 20 and 40 terms of the polynomial expansion of de Jager & Nieuwenhuijzen (1987), #3 from V-K with van Belle, #4 to 6 from B-V, V-R, and V-K with Bonneau, #7 to 9 from the J magnitude, #10 to 12 from H, and #13 to 15 from Ks, using the IRFM with the Planck, Engelke, or MARCS models, respectively. #16 and 17 are from the broadband photometry with the Planck or MARCS models, #18 to 20 from the NIR photometry with the Planck, Engelke, or MARCS models; similar for #21 to 23 but from the SWS spectrum.
Figure 9 summarizes the angular diameter estimates of the test-case calibrator Gru, obtained in the present study using various data types and methods. The B02 value of 2.71 mas is given for comparison. The last method (fitting the SWS spectrum with a MARCS model) gives, as expected, the most reliable estimate of the angular diameter, very close to the B02 estimate, with an uncertainty of 2.7%. It is also noticeable that the weighted average of the 23 angular diameter estimates is 2.73 mas.
We must underline that the uncertainty of the limb-darkened angular diameter given by B02 is deduced from the formal standard error associated with the best-fit value of the multiplicative factor, scaling the appropriate Kurucz model on the infrared fluxes (Cohen et al. 1996). Called biases by Cohen et al. (1995), scale factor uncertainties rarely reach 1% with their method, independent of the spectral type and luminosity class. If we use, in the same manner, the formal fit errors as uncertainty estimators, the diameter uncertainty only amounts to 0.15%. Since we consider this extremely low value as unrealistic, we prefer to estimate the angular-diameter uncertainty from the statistically-significant confidence intervals, given by the residual bootstrapping (Appendix C). This amounts to 2.4% with the fit of the MARCS model.
8.2 Fundamental stellar parameters
From this accurate angular-diameter estimate and the parallax, we can infer a set of fundamental stellar parameters for Gru, presented in Table 8, using the following procedure.
1.
Calculate the linear radius from the angular diameter and the parallax, according to Eq. (11).
2.
Fix the value of the spectral class variable s from the spectral type. For a K3 star, s=5.80.
3.
Find the value of the luminosity class variable b which gives the same angular diameter value. One can easily demonstrate from Eq. (12) that b is the solution of the equation
(22)
where t(s,b) and l(s,b) are the two-dimensional B-spline interpolation functions of the effective-temperature and luminosity tables published by de Jager & Nieuwenhuijzen (1987). For the adopted angular diameter and parallax, we find b = 2.76.
4.
From s=5.80 and b=2.76, deduce the interpolated effective temperature and absolute luminosity, and calculate the bolometric magnitude.
5.
Interpolate the surface gravity in the corresponding table of Straizys & Kuriliene (1981) for the same couple of values (s,b).
6.
Using Eq. (10), deduce the stellar mass from the surface gravity and the linear radius.
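The root-finding step of this procedure can be sketched numerically. The interpolation functions below are hypothetical smooth stand-ins for the de Jager & Nieuwenhuijzen (1987) tables (the real procedure uses two-dimensional B-splines on the published grids), so the numerical values are illustrative only:

```python
import math

T_SUN = 5772.0         # K
R_SUN_AU = 0.00465047  # solar radius in astronomical units

# Hypothetical stand-ins for t(s,b) = log10 Teff and l(s,b) = log10 L/Lsun.
def t_interp(s, b):
    return 3.75 - 0.02 * s + 0.01 * b

def l_interp(s, b):
    return 4.5 - 0.3 * s - 0.6 * b

def angular_diameter_mas(s, b, dist_pc):
    """theta = 2R/d, with R/Rsun = sqrt(L/Lsun) * (Teff/Tsun)^-2."""
    teff = 10.0 ** t_interp(s, b)
    lum = 10.0 ** l_interp(s, b)
    r_sun = math.sqrt(lum) * (teff / T_SUN) ** -2
    return 2.0 * r_sun * R_SUN_AU / dist_pc * 1000.0  # AU/pc -> arcsec -> mas

def solve_b(s, dist_pc, theta_target, b_lo=0.0, b_hi=5.0, tol=1e-10):
    """Bisection for the luminosity-class variable b matching the measured
    diameter (theta decreases monotonically with b for these toy tables)."""
    f = lambda b: angular_diameter_mas(s, b, dist_pc) - theta_target
    for _ in range(200):
        b_mid = 0.5 * (b_lo + b_hi)
        if f(b_lo) * f(b_mid) <= 0.0:
            b_hi = b_mid
        else:
            b_lo = b_mid
        if b_hi - b_lo < tol:
            break
    return 0.5 * (b_lo + b_hi)
```

A round trip (compute a diameter at a chosen b, then recover b from it) checks the inversion.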
Table 8: Fundamental parameter estimates (with uncertainties) of Gru reevaluated from our study.
Using this method, one can see from Table 8 that the bolometric magnitude and especially the stellar mass are determined with very low accuracies. The fundamental parameter accuracies are computed using the 1σ accuracies given by de Jager & Nieuwenhuijzen (1987) for the interpolated quantities: 0.021, 0.164, and 0.4, respectively. The uncertainties on the fundamental parameters deduced by our study validate a posteriori the choice of the input parameter values for the MARCS model used to fit the flux measurements.
8.3 Model parameters
One critical point of our method is the preliminary choice of a single set of photospheric model parameters, used to infer the angular diameter. To determine them accurately, we refer the reader to the papers by Decin et al. (2000) and Decin et al. (2004). In our study, we use the MARCS model with the fiducial parameter set of a K3III star, i.e. Teff = 4250 K, z = 0.0 dex, and the associated surface gravity, mass, and microturbulence. The very good agreement between the angular diameter estimates deduced from the fit of the Planck, the Engelke, and the MARCS models on the ISO-SWS 2.38-27.5 μm spectrum of Gru, as shown in Table 7, justifies the choice of these model parameters.
9 Conclusion
In our paper, we have compared different methods for angular-diameter estimation of the interferometric calibrators. The spectral-type angular diameters only need distances as extra input. The colour-index diameters need a good interstellar correction. The photometric and the spectrophotometric diameters need explicit synthetic spectra. As expected, the results are highly dependent on the number and quality of the input data.
As a test case, we used the giant cool star Gru that we observed to calibrate the VLTI-AMBER low-resolution observations of the scientific target Gru (Sacuto et al. 2008), which will be the subject of our forthcoming paper.
Each diameter estimate is compared to the B02 value (2.71 mas) found in the Catalogue of Calibrator Stars for Interferometry. The most reliable estimate of the angular diameter we find is 2.70 mas, with a 68% confidence interval of 2.65-2.81 mas, obtained by fitting the ISO/SWS spectrum (2.38-27.5 μm) with a MARCS atmospheric model with the fiducial K3III parameters. One original contribution of our study is the estimation of statistically-significant uncertainties by means of unbiased confidence intervals, determined by residual bootstrapping.
All numerical results and graphical outputs presented in the paper were obtained using the routines developed under PV-WAVE, which compose the modular software suite SPIDAST, created to calibrate and interpret spectroscopic and interferometric measurements, particularly those obtained with VLTI-AMBER (Cruzalèbes et al. 2008). The main functionalities of the SPIDAST code, intended to be available to the community, are
1.
estimate the calibrator angular diameter by any of the methods described in this paper;
2.
create calibrator synthetic measurements, for the instrumental configuration (spectral fluxes, visibilities, and closure phases);
3.
estimate the instrumental response from the observational and the synthetic measurements of the calibrator;
4.
calibrate the observational measurements of the scientific target with the instrumental response;
5.
determine the science parameters by fitting the chromatic analytic model on the true science measurements, with the confidence intervals given by residual bootstrapping.
Acknowledgements
P.C. thanks A. Spang, Y. Rabbia, O. Chesneau, and A. Mérand for helpful discussions. A.J. is grateful to B. Plez and T. Masseron for their ongoing support with the MARCS code. S.S. acknowledges funding by the Austrian Science Fund FWF under the project P19503-N13. We also thank the anonymous referee whose comments helped us to improve the clarity of this paper. This research has made use of the Jean-Marie Mariotti Centre SearchCal service, co-developed by FIZEAU and LAOG, of the CDS Astronomical Databases SIMBAD and VIZIER, and of the NASA Astrophysics Data System Abstract Service.
Appendix A: Computing the uncertainties
As defined in the Guide to the Expression of Uncertainty in Measurement (JCGM/WG 1 2008), the uncertainty associated with the "best" estimate x̂ of a given random variable x, usually given by the sample average, characterizes the dispersion of x about x̂. When it is associated with the confidence level 1−α, it can be interpreted as defining the interval around x̂ which encompasses 100(1−α)% of the estimates X of x. By analogy with the 1σ dispersion in the normal case, one can define the standard uncertainty of x̂ by the interval that encompasses 68.3% of the distribution of x around x̂.
If Δ+ and Δ− are the right and left deviations of x varying in the 100(1−α)% confidence interval (CI) about x̂, they can be defined as Δ+ = x_sup − x̂ and Δ− = x̂ − x_inf, where x_sup and x_inf are the upper and lower bounds of the CI, respectively given by the 100(1−α/2)% and 100(α/2)% quantiles of the distribution of x.
To propagate the uncertainties with a monotonic transformation function f of the input variable x into the output variable y = f(x), one can follow the confidence interval transformation principle (CITP) (Kelley 2007; Smithson 2002)
(A.1)
where y_sup and y_inf are the upper and lower bounds of the 100(1−α)% CI about the output best estimate ŷ = f(x̂). If f is an increasing function of x between x_inf and x_sup, they can be defined by y_sup = f(x_sup) and y_inf = f(x_inf). If f is decreasing in the same range, we get y_sup = f(x_inf) and y_inf = f(x_sup).
The upper and lower output uncertainties can be defined by the left and right deviations of y about its best estimate ŷ, so that Δ+(y) = y_sup − ŷ and Δ−(y) = ŷ − y_inf.
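These definitions are simple to implement. The following sketch computes quantile-based deviations from a sample and transforms a CI through a monotonic function; the sample and functions are illustrative choices:

```python
import numpy as np

def ci_deviations(samples, best, conf=0.683):
    """Right/left deviations of x about its best estimate, from the
    central 100*conf % interval of the sample distribution."""
    alpha = 0.5 * (1.0 - conf)
    x_inf, x_sup = np.quantile(samples, [alpha, 1.0 - alpha])
    return x_sup - best, best - x_inf   # (Delta+, Delta-)

def propagate_monotonic(f, best, x_inf, x_sup):
    """CI transformation principle for a monotonic f: transform the CI
    bounds and reorder them if f is decreasing."""
    y_a, y_b = f(x_inf), f(x_sup)
    return f(best), min(y_a, y_b), max(y_a, y_b)

# Example: 68.3% deviations of a standard normal sample are close to +/-1.
rng = np.random.default_rng(42)
d_plus, d_minus = ci_deviations(rng.normal(0.0, 1.0, 200_000), 0.0)
```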
For small input uncertainties Δ±(x), one can apply the approximation of a second-order Taylor series expansion to compute the upper/lower output uncertainties Δ±(y). Omitting the terms leading to moments higher than the second one, Winzer (2000) gives
\sigma_y^2 \simeq \left[ f'(\hat{x}) \right]^2 \sigma_x^2 + \tfrac{1}{2} \left[ f''(\hat{x}) \right]^2 \sigma_x^4 \quad (A.2)
where f′ and f″ respectively denote the first and second partial derivatives of f with respect to x. If f″ = 0, Eq. (A.2) is the general law of uncertainty propagation. Throughout our study, we apply the second-order approximation as long as the input uncertainties are less than the arbitrary value 20%. For larger uncertainties, the second-order approximation can introduce bias in the error estimate because of the use of a truncated series expansion, and we compute the output uncertainties thanks to the transformed bounds of the confidence interval.
For practical reasons, it is often more convenient to associate a single uncertainty value, hereafter denoted Δ in order to simplify the notations, rather than dealing with asymmetric uncertainties, hereafter denoted Δ±. Most people remove the asymmetry by taking the highest of the two values Δ+ and Δ−, or by averaging them, arithmetically or geometrically. Although the arithmetic mean gives the correct uncertainty in most cases of practical interest and small uncertainties (D'Agostini & Raso 2000), we can follow a statistical approach based on asymmetrical probability density functions (pdf), also applicable with large uncertainties.
In the general case where the 100(1−α)% CI does not encompass the whole distribution of the estimates X of x, asymmetric uncertainties need careful handling with known likelihood functions (Barlow 2003). If the CI bounds x_inf and x_sup are close to the extremal values of the distribution, and if there is no specific knowledge about the distribution itself, one can use the standard deviation of an asymmetric distribution as estimator for the symmetric uncertainty. When only the value of the best estimate x̂ is known, in addition to the upper and lower bounds of the CI, it is reasonable to assume that the probability of obtaining values near the bounds is lower than near x̂. In this case, a simple approximation of the pdf is given by the asymmetric triangular distribution, with mode x̂, width Δ+ + Δ−, and variance
\sigma^2 = \frac{(\Delta^+)^2 + (\Delta^-)^2 + \Delta^+ \Delta^-}{18} \quad (A.3)
Kotz & Van Dorp (2004) give the analytic relations
(A.4) (A.5)
where q is solution of the equation
(A.6)
Throughout our study, we use the standard deviation of the asymmetric triangular distribution as estimator for the symmetric uncertainty.
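As a short sketch, the standard deviation of this asymmetric triangular pdf (mode at the best estimate, support from −Δ− to +Δ+ around it, as described above) follows from the textbook variance of a triangular distribution on [a, b] with mode c, (a² + b² + c² − ab − ac − bc)/18, taking a = −Δ−, b = Δ+, c = 0:

```python
def triangular_sigma(d_plus, d_minus):
    """Standard deviation of an asymmetric triangular pdf with mode at the
    best estimate and support [-d_minus, +d_plus] around it."""
    return ((d_plus ** 2 + d_minus ** 2 + d_plus * d_minus) / 18.0) ** 0.5
```

For symmetric deviations Δ+ = Δ− = Δ this reduces to Δ/√6.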
Appendix B: De-reddening
If m_λ is the observed broadband magnitude, the dereddened magnitude is m_λ,0 = m_λ − A_λ, where A_λ is the interstellar extinction in the band. To calculate the value of the extinction at any wavelength, we use the relation A_λ = R_λ E(B−V), where R_λ is the ratio of total to selective extinction at any wavelength (Seaton 1979), and V and B stand for the visible and the blue wavelengths (Williams 1992; Cardelli et al. 1989).
To get the wavelength dependence of the extinction in the IR/optical region, we use the tabular data of the Asiago Database of Photometric Systems available online, following Fitzpatrick (1999), for the case RV = 3.1.
The visual interstellar extinction AV is calculated thanks to the numerical algorithm of Hakkila et al. (1997), including the studies of Fitzgerald (1968), Neckel & Klare (1980), Berdnikov & Pavlovskaya (1991), Arenou et al. (1992), Chen et al. (1998), and Drimmel & Spergel (2001), plus a sample of studies of high-galactic latitude clouds. The algorithm calculates the three-dimensional visual extinction from inputs of distance, Galactic longitude, and latitude. The final estimate of the visual extinction is given by weighted averaging of the individual study values. Since the datasets used in the analyses are not statistically independent, Hakkila suggests using the simple average of the individual study uncertainties as the formal extinction uncertainty.
Table B.1: Total visual extinction of Gru as obtained from the relevant studies.
The total visual extinctions of Gru are shown in Table B.1 for the relevant studies, with the distance 76 pc and the Galactic coordinates of the star. Because the estimate from Arenou et al. (1992) does not agree with the 4 other estimates, we do not use it for averaging. The weighted average estimate of AV is 0.03, with a mean uncertainty of 0.15.
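The averaging scheme can be sketched as follows; the input values below are hypothetical placeholders, not the Table B.1 entries:

```python
import numpy as np

def combine_extinctions(av, sigma):
    """Inverse-variance weighted mean of the individual A_V estimates; the
    quoted uncertainty is the simple average of the individual uncertainties,
    as suggested by Hakkila et al. (1997) for non-independent studies."""
    av = np.asarray(av, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    w = 1.0 / sigma ** 2
    return float(np.sum(w * av) / np.sum(w)), float(np.mean(sigma))

# Hypothetical A_V estimates (mag) from four studies.
a_mean, a_unc = combine_extinctions([0.00, 0.05, 0.02, 0.05],
                                    [0.1, 0.2, 0.1, 0.2])
```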
Appendix C: Residual bootstrapping
The bootstrap process is based on the idea that if the original data population is a good approximation of the unknown distribution, each sample of the data population closely resembles the original data (Babu & Feigelson 1996). The bootstrap process can be summarized as follows (Palm 2002; Dogan 2007): fabricate many "new" data sets by resampling the original data set, then estimate the angular diameter value for each of these "new" data sets to generate a distribution of the angular diameter estimates, and finally use the resulting empirical distribution of the angular diameters to estimate the confidence intervals.
Figure C.1: Left panel: plot of the spectral distribution of the original centred Pearson residuals, obtained by fitting the MARCS model on the SWS spectrum. The residual values located above and below the two horizontal dashed lines are identified as extreme outliers. Right panel: histogram of the Pearson residual distribution. The thin curve within the histogram is the normal probability density function shown for comparison. The two vertical dashed lines give the positions of the upper and lower outer fences identifying the extreme outliers.
Figure C.2: Left panel: cumulative histogram of the χ² values given by residual bootstrapping. The thin curve shows the χ² cumulative distribution function with 1 degree of freedom, for comparison. Central panel: corresponding theoretical QQ plot, where the cube roots of the ordered χ² values are plotted against the cube roots of the order statistics medians. The straight line is added for reference. Right panel: histogram of the residual-bootstrap χ² values, and the χ² probability density function (1 degree of freedom) for comparison (thin curve).
In the direct method, the resampling with replacement is based on the experimental distribution function of the original data. For regression purpose, it is recommended to instead use the residual-based method, implemented as follows.
1.
Fit the model to the original measurements Fi, with their standard uncertainties σi, by minimizing the χ² function. Call θ0 the best-fit angular diameter, and χ²_min the associated minimum value.
2.
Compute the residuals ri = Fi − Mi(θ0), where Mi(θ0) is the projection of the best-fit model on the ith spectral channel.
Because the amplitude of each error term is correlated with the wavelength, we prefer to use the unscaled Pearson residuals ri/σi instead of the error terms themselves.
3.
Center the residuals by subtracting the mean of the original residual terms (Friedmann 1981). Figure C.1 shows an example of the spectral distribution of the centred Pearson residuals and of the corresponding histogram, obtained by fitting the MARCS-model spectrum on the SWS spectrum as described in Sect. 7.5.
4.
Resample the centred residuals by drawing randomly with replacement, so that a new residual value is obtained for each measurement (nonparametric bootstrap). Denote r*_{k,i} the resampled normalized residual term for the kth data set at the wavelength λi. We have introduced the subscript k because this step and the next two will be repeated many times.
5.
Build new data sets F*_{k,i} = Mi(θ0) + σi r*_{k,i} from the resampled residuals.
6.
Estimate the model angular diameter by χ² minimization for each fabricated data set.
Repeat steps 3-6 many times to obtain a sufficiently large bootstrap sample (e.g. 1000 resamplings). At the end of the process, we have a set of best-fit angular diameter estimates, as well as the associated χ² values. Figure C.2 shows an example of an empirical distribution of the χ² values given by bootstrapping, compared to the χ² distribution with 1 degree of freedom. The left and right panels respectively show the cumulative and the ordinary histograms of the residual-bootstrap χ² values. The central panel shows the theoretical quantile-quantile (QQ) plot of the χ² values, where the cube root scaling has been applied for both the ordered response values (as ordinates) and the order statistics medians (as abscissas), as suggested by Chambers et al. (1983). Used as quantile estimators, the order statistics medians are computed according to NIST/SEMATECH. Weak departures from straightness observed on theoretical QQ plots are an indication of the good agreement between the theoretical and the empirical distributions.
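Steps 1-6 can be sketched as follows; the toy linear model, its closed-form weighted-least-squares fit, and the data values are illustrative assumptions, not the MARCS/SWS setup:

```python
import numpy as np

def residual_bootstrap(F, sigma, model, fit, n_boot=1000, seed=0):
    """Fit the original data, form centred Pearson residuals, resample
    them with replacement, rebuild synthetic data sets, refit each one."""
    rng = np.random.default_rng(seed)
    theta0 = fit(F, sigma)                                  # step 1
    r = (F - model(theta0)) / sigma                         # step 2
    r = r - r.mean()                                        # step 3: centring
    thetas = np.empty(n_boot)
    for k in range(n_boot):
        r_star = rng.choice(r, size=r.size, replace=True)   # step 4
        F_star = model(theta0) + sigma * r_star             # step 5
        thetas[k] = fit(F_star, sigma)                      # step 6
    return theta0, thetas

# Toy linear model F = theta * M with a closed-form weighted-LSQ fit.
M = np.array([1.0, 0.8, 0.5, 0.3])
sigma = np.array([0.02, 0.02, 0.01, 0.01])
model = lambda theta: theta * M

def fit(F, sig):
    w = 1.0 / sig ** 2
    return float(np.sum(w * F * M) / np.sum(w * M * M))

F_obs = model(2.7) + np.array([0.01, -0.02, 0.005, -0.005])
theta0, thetas = residual_bootstrap(F_obs, sigma, model, fit, n_boot=500)
```

Because the residuals are centred before resampling, the bootstrap distribution of diameters scatters around the original best fit.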
Once the χ² minimization procedures are terminated, we can estimate the angular diameter confidence interval from the bootstrap distribution thanks to the simple percentile confidence interval method, easy to implement (Efron & Tibshirani 1983): for the 1−α confidence level, calculate the 100(α/2)% and 100(1−α/2)% percentiles of the residual-bootstrap χ² distribution; then, among all the bootstrap angular diameters with associated χ² between these two percentiles, find the smallest and the greatest bootstrap angular diameter estimates, corresponding to the lower and the upper bounds of the confidence interval.
Because the mean of the distribution of the bootstrap χ² values is not equal to the minimum χ² obtained with the original data set, we instead use the bias-corrected percentile confidence interval method (Efron & Tibshirani 1993). In this method, the probabilities α/2 and 1−α/2 are replaced by the values of the standard normal cdf taken at the points 2up + uα/2 and 2up + u1−α/2, where p is the proportion of negative deviations (bootstrap estimates below the original best-fit value), and up is the (100p)th percentile relative to the standard normal cdf.
Formally, if Φ is the standard normal cdf, up = Φ⁻¹(p), α1 = Φ(2up + Φ⁻¹(α/2)), and α2 = Φ(2up + Φ⁻¹(1−α/2)).
Finally, the lower and upper bounds of the confidence interval are the smallest and the greatest bootstrap angular diameter estimates with χ² values between the 100α1% and 100α2% percentiles.
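A sketch of the bias-corrected percentile method, in the standard Efron-Tibshirani form (assumed here to match the variant used in the paper), using only the standard library's normal distribution:

```python
import numpy as np
from statistics import NormalDist

def bc_percentile_ci(boot, best, conf=0.683):
    """Bias-corrected percentile CI: the nominal percentile levels alpha/2
    and 1-alpha/2 are shifted according to the proportion p of bootstrap
    estimates falling below the original best-fit value."""
    nd = NormalDist()
    boot = np.asarray(boot, dtype=float)
    p = float(np.mean(boot < best))
    u_p = nd.inv_cdf(p)                  # (100p)th standard-normal percentile
    alpha = 0.5 * (1.0 - conf)
    a_lo = nd.cdf(2.0 * u_p + nd.inv_cdf(alpha))
    a_hi = nd.cdf(2.0 * u_p + nd.inv_cdf(1.0 - alpha))
    return float(np.quantile(boot, a_lo)), float(np.quantile(boot, a_hi))

# For a bootstrap distribution symmetric about the original fit, the
# correction vanishes and the bounds reduce to the plain percentiles.
boot = np.linspace(-1.0, 1.0, 2001)
lo, hi = bc_percentile_ci(boot, 0.0)
```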
Footnotes
... database
... JMMC-SearchCal
www.jmmc.fr/searchcal_page.htm
... MSC-getCal
nexsciweb.ipac.caltech.edu/gcWeb/gcWeb.jsp
... ESO-CalVin
www.eso.org/observing/etc/
... 2MASS catalogue
www.ipac.caltech.edu/2mass/
... ISO-SWS
irsa.ipac.caltech.edu/data/SWS/
... IRAS-LRS
www.iras.ucalgary.ca/satellites/Iras/getlrs.html
... LBSI
cdsarc.u-strasbg.fr/viz-bin/Cat?J/A+A/393/183
... MCC
ster.kuleuven.ac.be/~tijl/MIDI_calibration/mcc.txt
... page
www.am.ub.es/~carrasco/models/synthetic.html
... models
kurucz.harvard.edu/
... code
www.hs.uni-hamburg.de/EN/For/ThA/phoenix/
... models
marcs.astro.uu.se/
... Systems
... online
nexsciweb.ipac.caltech.edu/gcWeb/doc/getCal/ gcManual.html
... observations
ESO programme ID 080.D-0076A (AMBER GTO).
... SPIDAST
SPectro-Interferometric Data Analysis Software Tool.
... NIST/SEMATECH
www.itl.nist.gov/div898/handbook/
All Tables
Table 1: Values of the limiting angular diameter (in mas), under which the lack of precision in the calibrator size has no significant impact, at each observing wavelength.
Table 2: Broadband photometry of Gru and reddening.
Table 3: Effective wavelength and bandwidth (in square brackets) in each band (both in μm), for a K2 spectral type spectrum and a 4250 K blackbody spectrum.
Table 4: Photometric angular diameters (in mas) obtained with various synthetic spectra (with Teff = 4250 K) in the 2MASS near-infrared bands.
Table 5: Angular diameters obtained by fitting various models (with Teff = 4250 K) to visible and/or NIR photometric data.
Table 6: Spectral characteristics of the SWS AOT-1 bands and their photometric accuracy.
Table 7: Angular diameters obtained by fitting various models (with Teff = 4250 K) to the 2.38-27.5 μm SWS spectrum.
Table 8: Fundamental parameter estimates (with uncertainties) of Gru reevaluated from our study.
Table B.1: Total visual extinction of Gru as obtained from the relevant studies.
All Figures
Figure 1: Left panel: plot of the uniform-disk visibility function against the argument x. Right panel: plot of the ratio of the model visibility uncertainty to the angular diameter relative uncertainty against x, given by the first-order Taylor series expansion of the visibility.
Figure 2: Log-log plot of the angular diameter against the sky-projected baselength, for λ = 10 (upper line), 2.2, 1.25, and 0.7 μm (lower line). The limiting diameter is such that the lack of precision in the size of any uniform-disk calibrator smaller than it has no significant impact upon the final errors.
Figure 3: Log-log plot of the broadband absolute photometry of Gru, deduced from the JP11-UBVRI and the 2MASS-JHK magnitudes, and from the IRAS flux measurements at 12, 25, and 60 μm. The thin curve is the spectrum of a 4250-K blackbody radiator with an angular diameter of 2.7 mas, given for comparison. The lengths of the short vertical bars are the values of the actual flux errors.
Figure 4: Log-log plot of the high-resolution processed SWS01 spectrum of Gru from the NASA/IPAC Infrared Data Archive. The thick curve is the spectrum of a 4250-K blackbody radiator with an angular diameter of 2.7 mas, given for comparison.
Figure 5: High-resolution (R=20 000) MARCS synthetic spectral radiant exitance, in the K-band (2.0 to 2.35 μm), obtained with Teff = 4250 K, z = 0.0 dex, and the adopted surface gravity, mass, and microturbulence.
Figure 6: Median-resolution (R=1000) TURBOSPECTRUM synthetic radiance data, obtained with the same model parameters as for Fig. 5. Upper left panel: spectral distribution of the central radiance. Upper right panel: spectral distribution of the radiance normalized to the centre, for r=0.345 (upper curve), 0.515, 0.631, 0.720, 0.791, 0.848, 0.883, 0.922, 0.952, 0.974, 0.990, and 0.998 (lower curve). Lower left panel: normalized radiance profiles, for the wavelengths (upper curve), 2.364, 2.226, 2.088, 1.950, 1.812, 1.674, 1.536, and 1.4 μm (lower curve). Lower right panel: partial derivatives of the normalized radiance profiles with respect to r. The dashed vertical line gives the median value (see text for details).
Figure 7: Plot of χ² against the angular diameter (in mas) obtained by fitting the appropriate MARCS model on the SWS data (see Sect. 7.5), using the gradient-expansion algorithm. The vertical dotted line gives the position of the best-fit parameter.
Figure 8: Histogram of the angular diameters estimated by residual bootstrapping when fitting the MARCS model on the SWS spectrum.
Figure 9: Estimated sizes of Gru with each method. The horizontal dotted line is the B02 value (2.71 mas). Estimates #1 and 2 are from the K3III spectral type using 20 and 40 terms of the polynomial expansion of de Jager & Nieuwenhuijzen (1987), #3 from V-K with van Belle, #4 to 6 from B-V, V-R, and V-K with Bonneau, #7 to 9 from the J magnitude, #10 to 12 from H, and #13 to 15 from Ks, using the IRFM with the Planck, Engelke, or MARCS models, respectively. #16 and 17 are from the broadband photometry with the Planck or MARCS models, #18 to 20 from the NIR photometry with the Planck, Engelke, or MARCS models; similar for #21 to 23 but from the SWS spectrum.
Figure C.1: Left panel: plot of the spectral distribution of the original centred Pearson residuals, obtained by fitting the MARCS model on the SWS spectrum. The residual values located above and below the two horizontal dashed lines are identified as extreme outliers. Right panel: histogram of the Pearson residual distribution. The thin curve within the histogram is the normal probability density function shown for comparison. The two vertical dashed lines give the positions of the upper and lower outer fences identifying the extreme outliers.
Figure C.2: Left panel: cumulative histogram of the χ² values given by residual bootstrapping. The thin curve shows the χ² cumulative distribution function with 1 degree of freedom, for comparison. Central panel: corresponding theoretical QQ plot, where the cube roots of the ordered χ² values are plotted against the cube roots of the order statistics medians. The straight line is added for reference. Right panel: histogram of the residual-bootstrap χ² values, and the χ² probability density function (1 degree of freedom) for comparison (thin curve).
https://firmfunda.com/maths/mensuration-basics/basic-mensuration-measuring-basics/basicmensu-measuring-area
Introduction to Measuring Area
what you'll learn...
Overview
Area of a plane figure: The surface-span of a plane figure is the area of the surface. It is measured in square meter (or in one of other derived or similar forms). Area is specified as a number in reference to the surface-span of a square of $1$ meter side.
surface-span
The length of a rod is the distance-span measured in meter.
For a surface of some shape (like a paper, or a leaf), the measure of the surface-span is area. Area is the surface-span of an enclosed 2D-region.
To specify distance-span or length, a reference-prototype-standard (ie a metal rod of specific material at specific temperature) was defined. Any length measurement is specified in reference to the prototype-standard.
Similarly, to specify area, a reference-prototype-standard can be defined. But, it is simpler to define the measurement of area using the already defined measure of length.
To specify the surface-span or area of a surface, a square of side $1$ meter is taken as the reference. The area of that is one square meter or $1{m}^{2}$. In the figure this is shown in the top right corner.
Area of a surface is given in reference to the area of a square of side $1$ meter. In the figure, the area of the given rectangle is the number of $1$ square meter squares that fit in that rectangle. It is counted to $6$, so the area of the rectangle is $6{m}^{2}$.
The statement "area of the paper is $3$ square meter" specifies that "The surface-span of the paper equals $3$ surface-spans of a square of side $1$ meter".
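The counting procedure above, covering a rectangle with unit squares, can be sketched in code (a toy illustration of the idea, not part of the original lesson; the function name is mine):

```python
# Counting unit squares to measure area, as described above.
# A 3 m x 2 m rectangle is covered by a grid of 1 m x 1 m squares;
# the area in square meters is the number of unit squares that fit.

def area_by_unit_squares(width_m: int, height_m: int) -> int:
    """Count the 1 m x 1 m squares covering a width_m x height_m rectangle."""
    count = 0
    for _ in range(height_m):      # one row of unit squares per meter of height
        for _ in range(width_m):   # one unit square per meter of width
            count += 1
    return count

print(area_by_unit_squares(3, 2))  # 6 square meters, matching the figure
```

Counting one square at a time is, of course, equivalent to multiplying the side lengths; the loop just mirrors the counting argument in the text.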
summary
Area of a plane figure : The surface-span of a plane figure is the area of the surface. It is measured in square meters (or in one of the other derived or similar forms). Area is specified as a number in reference to the surface-span of a square of $1$ meter side.
Outline
The outline of material to learn "Mensuration basics : Length, Area, & Volume" is as follows.
• Measuring Basics
→ Introduction to Standards
→ Measuring Length
→ Accurate & Approximate Measures
→ Measuring Area
→ Measuring Volume
→ Conversion between Units of Measure
• 2D shapes
→ Perimeter of Polygons
→ Area of Square & rectangle
→ Area of Triangle
→ Area of Polygons
→ Perimeter and area of a Circle
→ Perimeter & Area of Quadrilaterals
• 3D shapes
→ Surface Area of Cube, Cuboid, Cylinder
→ Volume of Cube, Cuboid, Cylinder | 2021-12-03 22:21:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 11, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8069450855255127, "perplexity": 5705.348622521148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362919.65/warc/CC-MAIN-20211203212721-20211204002721-00545.warc.gz"} |
https://brilliant.org/problems/a-calculus-problem-by-md-zuhair/ | # #Limits 2
Calculus Level 3
$\large \lim_{x \to 1} \left(\frac{1+x}{2+x} \right)^{\frac{1-\sqrt x}{1-x}} = \sqrt[n]{\frac{a}{b}}$
The equation above holds true for positive integers $a$, $b$ and $n$, with $a$ and $b$ being primes. Find the value of $a+b+n$.
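One way to evaluate the limit, sketched here for reference (this derivation is not part of the original problem page): factor the denominator of the exponent via $1-x=(1-\sqrt{x})(1+\sqrt{x})$, so that

```latex
\frac{1-\sqrt{x}}{1-x}=\frac{1}{1+\sqrt{x}}\ \xrightarrow{\;x\to 1\;}\ \frac{1}{2},
\qquad
\lim_{x \to 1} \left(\frac{1+x}{2+x}\right)^{\frac{1-\sqrt x}{1-x}}
=\left(\frac{2}{3}\right)^{1/2}=\sqrt[2]{\frac{2}{3}},
```

giving $a=2$, $b=3$, $n=2$, hence $a+b+n=7$.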
× | 2019-10-20 10:13:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6005002856254578, "perplexity": 619.6282531301484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00310.warc.gz"} |
https://mathoverflow.net/questions/439105/is-the-number-of-breakthroughs-in-mathematics-decreasing-as-it-is-claimed-to | # Is the number of "breakthroughs" in mathematics decreasing, as it is claimed to be in other sciences?
Background for the question:
1. Park, M., Leahey, E. & Funk, R.J. Papers and patents are becoming less disruptive over time. Nature 613, 138–144 (2023). https://doi.org/10.1038/s41586-022-05543-x
2. What Happened to All of Science’s Big Breakthroughs?
A new study finds a steady drop since 1945 in disruptive feats as a share of the world’s booming enterprise in scientific and technological advancement.
Has a similar analysis been conducted for the field of mathematics (or, say, pure mathematics)? If yes, how do the findings compare?
• I wonder if the pressure to publish means that people now will publish things that in time past wouldn't have been considered worth publishing. I know one older mathematician (but not retirement age) who only wanted to work on big, meaningful projects, and not publish incremental work. He saw this as a massive cultural failure in mathematics. Sadly, it ended his career—he was more-or-less pushed to retire early—but he was principled to the end on this matter. Jan 22 at 10:40
• It is important to note that the decline in "disruptiveness" (their term for breakthrough-ness) in research found by Park et al refers to the average publication. When it comes to the absolute number of breakthrough papers they report consistency over time. So it isn't that there is less breakthrough research, but rather that there is more non-breakthrough research. Jan 22 at 11:37
• The number of breakthroughs is going up; it’s the ratio of breakthroughs to scientific work that’s going down. Will you edit the question to state the comparison case more accurately? This may require writing out the main question in the text of the post, and not just in the title. Jan 22 at 12:38
• One major issue to keep in mind is the phrasing of "the field of mathematics". A breakthrough in enumerative combinatorics, no matter how impactful, will likely not impact the research on anyone working on monoidal categories (say). This degree specialisation is more of a modern phenomenon. "Breakthroughs in mathematics", at least when counting their impact on the subject as a whole, is about as meaningful as "breakthroughs in studying living things". Jan 22 at 12:42
• Scott Alexander has pointed out that one needs to be careful not to draw incorrect conclusions from this type of data. In particular, depending on how you define "breakthrough," a decrease in the number of breakthroughs doesn't necessarily mean that "progress is slowing down" (again, depending on what you mean by that). Jan 22 at 13:16
Q: Is the number of "breakthroughs" in mathematics decreasing?
To get some quantitative feel for the question I considered the Timeline of mathematics on Wikipedia. Not all entries are "breakthroughs", but most could be considered as such. Here is a plot of the cumulative number of entries since 1900. I do notice a kink in the slope around 1965, so based on this evidence one might conclude that, indeed, the rate of discovery has decreased somewhat. Or perhaps the 1960's was just an unusually productive decade.
Relative to the total output the discovery rate is obviously much smaller now than in the past.
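The cumulative count used in this answer can be reproduced with a short script (a sketch using made-up years, not the actual Wikipedia timeline data):

```python
# Cumulative count of timeline entries per year (toy data, not the real
# Wikipedia "Timeline of mathematics" entries).
from collections import Counter

years = [1905, 1931, 1948, 1963, 1963, 1964, 1976, 1994, 2003]  # hypothetical

def cumulative_counts(years):
    """Return (year, cumulative number of entries up to that year) pairs."""
    per_year = Counter(years)
    total = 0
    out = []
    for y in sorted(per_year):
        total += per_year[y]
        out.append((y, total))
    return out

print(cumulative_counts(years)[-1])  # (2003, 9): all nine toy entries by 2003
```

Plotting such pairs year against cumulative total is exactly the kind of curve in which a "kink" (a change of slope) would show up.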
• +1 for an interesting choice of data. Additional observation: This is the view of the past achievements as of January 2023. Possibly, some 1965–2015 achievements that are currently not listed, might be viewed more favorably 20 or 50 years later. There could be a myopic bias about the recent achievements. (To be fair, that bias could be negative or positive.) Jan 22 at 13:30
• I’m not going to start saying names, but I think (following Carl-Fredrik Nyberg Brodda’s comment) that anyone looking at the list would find that it overlooks a significant number of breakthroughs from their field, which may undermine the conclusion about the rate slowing down. Jan 22 at 13:39
• this could be an invitation to add missing entries to the Wikipedia list... Jan 22 at 14:15
• The Wikipedia list is indeed a bit curious: computing digits of $\pi$ is mentioned several times but (for example) the discovery of the Jones polynomial is missing. I think the latter had far deeper implications than computations of digits of $\pi$. Jan 22 at 16:16
• The list is bizarre (not just by the inclusions, but also omissions) and reflects the level of mathematical sophistication (or lack thereof) of the people who edited the list rather than the actual development of mathematics. Jan 22 at 16:31
Quanta Magazine has an article on 2022's Biggest Breakthroughs in Math:
"In 2022, mathematicians solved a centuries-old geometry question, proved the best way to minimize the surface area of clusters of up to five bubbles and proved a sweeping statement about how structure emerges in random sets and graphs."
The full article describes an impressive year of results. This does not address this year in comparison to previous years' results, but it is difficult to read this review and feel that the year was in any way disappointing mathematically.
(Added). Just to give a sampling, here are excerpts from the Quanta article under "Geometry":
• Emanuel Milman and Joe Neenan found out the shape of clusters of bubbles that can most efficiently enclose three or four volumes — in any number of dimensions.
• Isabel Vogt and Eric Larson solved the interpolation problem: how many random points in high-dimensional space certain types of curves can pass through.
• Andras Máthé, Oleg Pikhurko and Jonathan Noel ... figured out how a circle can be cut up into visualizable pieces that can be rearranged into a square.
• Martin and Erik Demaine ... published a paper that shows how to take any polyhedron and fold it into a flat shape — as long as you allow infinitely many creases.
• Dusa McDuff and several collaborators found intricate fractal structures emerging when they tried to embed shapes called ellipsoids into something called Hirzebruch surfaces.
• Other mathematicians made progress toward proving the Kakeya conjecture...
Similar lists are presented under the headings: the Fields Medalists' research, Number Theory, Machine Learning, Topology, Random Structures. Among the latter is a short paper (a 6-page proof that pinpointed when structure emerges in random graphs), while one breakthrough resulted in a 912-page paper showing that slowly rotating black holes will keep on rotating until the end of time.
I think that the picture is complex, and I'm sure that the two (currently) top comments by David Roberts and Brendan McKay, although providing arguments in apparently opposite directions, are both spot on.
I believe there's also another phenomenon which is worth mentioning: we generally feel more comfortable in assigning the "breakthrough" mark to works that have aged well. Without the kind of pedigree that only historic evolution can give, it's more difficult to get consensus about how large the impact of a result will be. Of course there are exceptions, like if a celebrated conjecture is proven, but for evaluating the breakthrough character of new ideas, frameworks, connections...and so on, time is usually needed.
So my two cents: in the following centuries, a non-negligible set of results, which currently are somewhat lost in the clouds, will be considered breakthroughs, as has often happened in the history of math.
If anything, I would say that the number of breakthroughs is increasing. Here is more anecdotal evidence that supports the graph posted by Carlo Beenakker. On 23rd November 2022 Terence Tao posted on https://mathstodon.xyz/@tao/109390971278692349:
Maths at internet speed. On 16 Nov, Justin Gilmer https://arxiv.org/abs/2211.09055 makes a breakthrough on the #unionClosedSets conjecture, achieving a lower bound of 0.01 instead of the conjectured 0.5. The next day, Gil Kalai blogs about it at https://gilkalai.wordpress.com/2022/11/17/amazing-justin-gilmer-gave-a-constant-lower-bound-for-the-union-closed-sets-conjecture/ . Four days after that, three independent groups optimize the argument to $$\frac{3-\sqrt{5}}{2}=0.38$$: https://arxiv.org/abs/2211.11504 https://arxiv.org/abs/2211.11689 https://arxiv.org/abs/2211.11731 . (Via Rachel Greenfeld)
It is quite easy to see why innovation is slowing down in many academic fields, as compared, say, to the sixties. The sixties were a time when the population of students increased a lot in many countries, and so the number of teachers in universities increased accordingly. In higher education, teaching positions come with research duties, and so the number of people doing academic research increased pretty fast. China is a country where this phenomenon is taking place at the moment. For many other countries, however, this is over.
• It is a very bizarre way of describing reality. Research positions usually come with teaching duties. I highly doubt that teaching positions with research duties often lead to breakthroughs. Jan 22 at 14:37
• With the caveat that I'm skeptical that what is being measured is an accurate reflection of "breakthroughs", it seems perfectly plausible that having a greater number of academic positions would allow for more people to continue to contribute to research even if their "productivity" was not high or they were working in unfashionable areas. Yitang Zhang is a contemporary example. Jan 22 at 15:30
• I do not find this bizarre. But then I got a position at that time. With much more freedom than is customary nowadays. Yes, we also had to teach. Times do change. Jan 23 at 8:32
• This answer would explain why a field of research would slow down, but not whether maths is doing that right now...
– AnoE
Jan 23 at 8:51
• @VladimirDotsenko I think the point was that the new academics were only hired because there was a bump in student numbers, and hence more academics were needed. As a by-product, the number of researchers therefore increased, so the amount of research increases. This explains why the sudden acceleration in the 60s tapered off, but does not explain why (if it is the case as in the graph in another answer), the totla output regressed to the mean. Jan 24 at 11:41 | 2023-01-30 09:12:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5772656798362732, "perplexity": 1053.9536216255951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00080.warc.gz"} |
http://learning.maxtech4u.com/2018/07/page/2/ | What is Web access Management?
Technology & Science / July 18, 2018
What is Management Information System (MIS)?
Technology & Science / July 17, 2018
Information system managers typically move up to such a position from other areas of information technology. The Management Information System (MIS) is a concept of the last decade or two. It has been understood and described in a number of ways. It is also known as the Information System, the Information and Decision System, the Computer-Based Information System. MIS is the use of information technology, people, and business processes to record, store and process data to produce information that decision makers can use to make day-to-day decisions. Management Information Systems (MIS), also referred to as Information Management and Systems, is the discipline covering the application of people, technologies, and procedures collectively called information systems, to solving business problems. Basic Overview of MIS MIS is the acronym for Management Information Systems. In a nutshell, MIS is a collection of systems, hardware, procedures and people that all work together to process, store, and produce information that is useful to the organization. "'MIS' is a planned system of collecting, storing and disseminating data in the form of information needed to carry out the functions of management." Academically, the term is commonly used to refer to the group of information management methods tied to the…
What is Cache Memory in Computer Architecture?
Technology & Science / July 15, 2018
Processors are generally able to perform operations on operands faster than the access time of large capacity main memory. Though semiconductor memory which can operate at speeds comparable with the operation of the processor exists, it is not economical to provide all the main memory with very high speed semiconductor memory. The most frequently used data or instructions are kept in cache so that they can be accessed at a very fast rate, improving overall performance of the computer. There are multiple levels of cache, with the last level being the largest and slowest and the first level being the fastest and smallest. Basic Description about Cache Memory Cache memory is a small-sized type of volatile computer memory that provides high-speed data access to a processor and stores frequently used computer programs, applications and data. It is the fastest memory in a computer, and is typically integrated onto the motherboard and directly embedded in the processor or main random access memory (RAM). Cache memories are small fast memories used to temporarily hold the contents of portions of main memory that are (believed to be) likely to be used. The idea of cache memories is similar to virtual memory in that some active portion…
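The "keep recently used data in a small fast store" idea can be illustrated with a toy least-recently-used (LRU) cache (my sketch, not from the article; real hardware caches use sets and ways rather than a dictionary, but the eviction idea is the same):

```python
# Minimal LRU cache sketch: recently used keys stay in the small fast
# cache; the least recently used entry is evicted when capacity is hit.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key, backing):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)         # mark as most recently used
        else:
            self.misses += 1                    # slow "main memory" access
            self.store[key] = backing[key]
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used
        return self.store[key]

memory = {addr: addr * 2 for addr in range(8)}  # stand-in for main memory
cache = LRUCache(capacity=2)
for addr in [0, 1, 0, 0, 2, 1]:
    cache.get(addr, memory)
print(cache.hits, cache.misses)  # 2 hits (the repeated 0s), 4 misses
```

The repeated accesses to address 0 are served from the cache, which is exactly the temporal locality that makes caching pay off.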
What is General Packet Radio Service (GPRS)?
/ July 14, 2018
GSM was the most successful second-generation cellular technology, but the need for higher data rates spawned new developments to enable data to be transferred at much higher rates. The General Packet Radio Service (GPRS) is an enhancement to the existing GSM network infrastructure and provides a connectionless packet data service. The same cellular base-stations that support voice calls are used to support GPRS, and as a consequence GPRS can be used wherever it is possible to make a voice call. GPRS roaming agreements exist with a large number of countries, and this means users can use GPRS devices whilst abroad. GPRS is based on the Internet Protocol (IP) and enables users to utilize a wide range of applications – email and internet and/or intranet resources, for instance. With throughput rates of up to 40 Kbit/s, users have a similar access speed to a dial-up modem, but with the convenience of being able to connect from anywhere. Basic Overview of GPRS GPRS (General Packet Radio Service) is a service within the GSM network, just like the two most popular services, SMS and voice connections. GPRS is used for transmitting data in the GSM network in the form of packets. The connection to…
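The quoted 40 Kbit/s figure can be turned into a rough transfer-time estimate (a back-of-the-envelope sketch of my own, ignoring protocol overhead and radio conditions):

```python
# Back-of-the-envelope transfer time at a GPRS-class data rate.
def transfer_time_s(size_kilobytes, rate_kbit_per_s=40):
    """Seconds to move size_kilobytes at rate_kbit_per_s (ignores overhead)."""
    size_kilobits = size_kilobytes * 8   # 1 byte = 8 bits
    return size_kilobits / rate_kbit_per_s

print(transfer_time_s(100))  # 20.0 seconds for a 100 kB file at 40 kbit/s
```

This is indeed in the same ballpark as a dial-up modem, which matches the comparison made in the text.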
What is Packet Sniffing in Networking?
/ July 13, 2018
Packets in computer communications can be defined as a quantity of data of limited size. On the Internet all traffic travels in the form of packets: entire file downloads, Web page retrievals, email, all these Internet communications always occur in the form of packets. On the Internet, a packet is a formatted unit of data carried by a packet-switched computer network. Packets are the base of all data sent on the internet, yet they are often used insecurely. Tampering with live packets, and the process it takes to alter packets traveling along the network, is getting easier. Packet sniffing is commonly described as monitoring packets as they go across a network. Packet sniffers are typically software based, but can be hardware pieces installed directly along the network. Sniffers can go beyond network hosts that are seen in local area networks (LAN) that only handle data that is sent specifically to them. Basic Overview of Packet Sniffing Packet sniffing is the act of capturing packets of data flowing across a computer network. The software or device used to do this is called a packet sniffer. Packet sniffing is to computer networks what wiretapping is to a telephone network. …
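What a sniffer does with captured bytes can be sketched by decoding a hand-built IPv4 header (illustrative only; the addresses and field values below are made up, and real sniffers read such bytes off a network interface rather than constructing them):

```python
# Sketch: decoding a few fields of a (hand-built) 20-byte IPv4 header,
# the kind of parsing a packet sniffer performs on captured bytes.
import struct
import socket

# version/IHL, DSCP, total length, id, flags/frag, TTL, proto, checksum, src, dst
header = struct.pack("!BBHHHBBH4s4s",
                     (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"),
                     socket.inet_aton("10.0.0.2"))

fields = struct.unpack("!BBHHHBBH4s4s", header)
version = fields[0] >> 4
protocol = fields[6]                      # 6 = TCP
src = socket.inet_ntoa(fields[8])
dst = socket.inet_ntoa(fields[9])
print(version, protocol, src, dst)        # 4 6 10.0.0.1 10.0.0.2
```

Everything a sniffer reports, addresses, protocols, payload, comes from field-by-field decoding like this, which is why unencrypted traffic is so exposed.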
What is Information System?
Technology & Science / July 12, 2018
An information system is designed to collect, process, and store information. Information systems are upgraded continuously, and in this context Information Technology (IT) plays an important role. Due to technological innovations, today most information systems are based on modern IT technology. That enables efficient computing as well as effective management for all sizes of organizations. Information systems enable more diverse human behavior, and therefore also exert a profound influence over society. These systems accelerate the speed of daily jobs. The technology also enables us to develop and maintain new and more-rewarding relationships. Defining Information System An information system is an integrated and co-ordinated network whose components combine to convert data into information. An information system (IS) refers to a system of people, data records and activities that process information for organizations, and includes manual and automated processes. Therefore large amounts of data need to be processed. Basically, data is a value for given facts and is stored in a database. However, information actually consists of data organized in some information model that helps in various applications such as answering questions and solving problems. Such a system is defined as a software program that organizes and…
What is Statistical Learning?
/ July 10, 2018
Statistical learning (SL) is the third mainstream in machine learning research. The main goal of statistical learning theory is to provide a framework for studying the problem of inference, that is, of gaining knowledge, making predictions, making decisions or constructing models from a set of data. Statistical Learning provides an accessible overview of the field of statistical learning, an essential tool-set for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. Basic Overview of Statistical Learning Statistical learning refers to a set of tools for modeling and understanding complex datasets. It is a recently developed area in statistics and blends with parallel developments in computer science and, in particular, machine learning. The field encompasses many methods such as the lasso and sparse regression, classification and regression trees, and boosting and support vector machines. It refers to a vast set of tools for understanding data. These tools can be classified as supervised or unsupervised. Broadly speaking, supervised SL involves building a statistical model for predicting, or estimating, an output based on one or more inputs. Problems of this nature occur in fields as…
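The supervised-learning idea, building a model that predicts an output from inputs, can be shown in miniature with ordinary least squares on made-up data (my sketch, not taken from the article):

```python
# Supervised learning in miniature: fit y ≈ a + b*x by ordinary least
# squares (closed form), then predict on a new input. Data is made up.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]         # exactly y = 1 + 2x
a, b = fit_line(xs, ys)
print(a, b, a + b * 4.0)          # 1.0 2.0 9.0
```

Linear regression is the simplest instance of the supervised setting described above; the lasso, trees and support vector machines mentioned in the text are progressively richer model families for the same prediction task.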
What is Machine Learning & why it is important?
Technology & Science / July 7, 2018
Machine Learning (ML) is a new-age computing technology. It was born from pattern recognition and the theory of computation. It can learn without being programmed to perform specific tasks. Researchers interested in artificial intelligence wanted to see if computers could learn from data. The iterative aspect of machine learning is important because as models are exposed to new data, they are able to independently adapt. They learn from previous computations to produce reliable, repeatable decisions and results. It's a science that's not new – but one that has gained fresh momentum. Machine learning is a domain of engineering in which algorithms and computer-based programs learn from data and past experience and provide decisions and analysis based on that experience and learned knowledge. Machine Learning: Basic Description Machine learning is a set of tools that, broadly speaking, allow us to "teach" computers how to perform tasks by providing examples of how they should be done. For example, suppose we wish to write a program to distinguish between valid email messages and unwanted spam. Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. Machine learning is programming computers to optimize a performance criterion using example data or past experience. We need…
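The spam-filter example in the passage can be sketched as a tiny word-count classifier (a toy of my own, far simpler than real spam filters, but it shows "learning from examples" rather than hand-written rules):

```python
# Toy "learning from examples": count how often each word appears in spam
# vs. ham training messages, then score a new message. All data is made up.
from collections import Counter

spam_train = ["win money now", "cheap money offer"]
ham_train = ["meeting agenda attached", "lunch tomorrow"]

spam_words = Counter(w for m in spam_train for w in m.split())
ham_words = Counter(w for m in ham_train for w in m.split())

def is_spam(message):
    words = message.split()
    spam_score = sum(spam_words[w] for w in words)  # Counter gives 0 if absent
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

print(is_spam("free money"))         # True: "money" was seen in spam twice
print(is_spam("agenda for lunch"))   # False
```

Nothing about "money" being suspicious was programmed in; the classifier picked it up from the training examples, which is the point the text is making.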
What is Artificial Intelligence (AI)?
/ July 6, 2018
Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial. Recent trends in the arena of Machine Learning have paved the way for advancements in Artificial Intelligence, and with the advent of Context and Contextual Applications for various platforms, the field of Artificial Intelligence just keeps getting better and better. Definition of Artificial Intelligence Artificial Intelligence (AI) is fundamentally transforming the way businesses operate. It is a combination of cutting-edge technologies that enables machines to think, comprehend, and execute tasks hitherto carried out by humans. AI can be considered as an intelligent robot which possesses the cognitive characteristics of a human being. "Artificial intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior". Communication has been one of the important aspects of intelligent behavior, where vision and speech are all used effectively for communication purposes. Speech recognition, computer vision and language understanding have been some of the goals of artificial intelligence, because the possession of…
What is Denial of Service (DoS) Attack in Networking?
/ July 5, 2018
Network Security deals with all aspects related to the protection of sensitive information on the network. It covers various mechanisms to provide fundamental security for data in communication systems. Security in a computer network determines the ability of the administration to manage, protect and distribute sensitive information. Data security arose many years before the advent of wireless communication, due to mankind's need to send information without exposing its content to others. Every security system must provide a list of security functions that can assure the secrecy of the system. These functions are usually referred to as the goals of the security system. A number of different kinds of attacks exist in network security that impact system performance and security protocols. Among them, a serious kind of attack, namely the Denial of Service (DoS) attack, is described here. Basic Description about Denial of Service (DoS) Attacks Denial of Service attacks are undoubtedly a very serious problem in the Internet, whose impact has been well demonstrated in the computer network literature. The main aim of Denial of Service is the disruption of services by attempting to limit access to a machine or service instead of subverting the service itself. This kind of attack aims at…
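One common defensive measure against the request floods described above is rate limiting (my illustration; the article itself does not discuss mitigations). A classic scheme is the token bucket:

```python
# Token-bucket rate limiter sketch: a classic way to cap how many requests
# per second a client may issue, one common DoS mitigation. Numbers made up.
class TokenBucket:
    def __init__(self, rate_per_s, capacity):
        self.rate = rate_per_s        # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # refill according to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # spend one token per allowed request
            return True
        return False

bucket = TokenBucket(rate_per_s=1, capacity=2)
results = [bucket.allow(t) for t in [0.0, 0.1, 0.2, 0.3, 2.3]]
print(results)  # [True, True, False, False, True]
```

A burst of requests exhausts the bucket quickly, so a flooding client gets throttled while a well-behaved one, arriving slower than the refill rate, is unaffected.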
$${}$$ | 2018-08-18 14:35:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23876671493053436, "perplexity": 1270.048254477092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213689.68/warc/CC-MAIN-20180818134554-20180818154554-00495.warc.gz"} |
https://zbmath.org/?q=an:0995.58003 | # zbMATH — the first resource for mathematics
Implicit function theorem for systems of polynomial equations with vanishing Jacobian and its application to flexible polyhedra and frameworks. (English) Zbl 0995.58003
The classical implicit function theorem (and its generalizations and modifications) for a function $$f:\mathbb{R}^p \times\mathbb{R}^q \to \mathbb{R}^n$$ provides sufficient conditions under which $$f(x,\varphi(x)) \equiv 0$$ for some function $$\varphi:\mathbb{R}^p \to\mathbb{R}^q$$ in a neighborhood of a zero of $$f$$, the most important condition being the maximal rank of the partial derivative at this zero.
In the present paper, the author gives sufficient conditions, in case of a polynomial function $$f$$, which apply also if the partial derivative is singular. He also gives necessary conditions for the existence of $$\varphi$$, which therefore may serve as sufficient conditions for non-existence.
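For concreteness, a standard instance of the classical (nonsingular) case, added here as an illustration and not taken from the review:

```latex
f(x,y)=x^{2}+y^{2}-1,\qquad
\left.\frac{\partial f}{\partial y}\right|_{(0,1)}=2\neq 0
\;\Longrightarrow\;
\varphi(x)=\sqrt{1-x^{2}},\quad f\bigl(x,\varphi(x)\bigr)\equiv 0 \text{ near } x=0.
```

The paper's contribution concerns precisely the situation where such a rank condition fails.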
##### MSC:
58C15 Implicit function theorems; global Newton methods on manifolds
26B10 Implicit function theorems, Jacobians, transformations with several variables
52C25 Rigidity and flexibility of structures (aspects of discrete geometry)
26C10 Real polynomials: location of zeros
68T40 Artificial intelligence for robotics
70B15 Kinematics of mechanisms and robots
41A58 Series expansions (e.g., Taylor, Lidstone series, but not Fourier series)
Full Text: | 2021-05-11 13:53:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.518907368183136, "perplexity": 923.9289901473693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989614.9/warc/CC-MAIN-20210511122905-20210511152905-00194.warc.gz"} |
https://xeu.jerseyvera.us/high-yield-step-1-anki-deck.html | High yield step 1 anki deck
Ost_Took step 1 today…. Soooo i dont know whats going on, but i honestly went in thinking it wasnt going to be that hard….i was scoring 230’s on nbmes and UWSA. But this test made want to change careers. I probably knew about 10% of the questions therest i had no idea. What is Zanki Anki Deck. Likes: 643. Shares: 322.This is your cue to rate the card and move on. To add a timer, go to [Tools] → [Manage Note Types] → Select the note types to which you want to add a timer → [Cards] → Paste the code below to the front and back of each card (refer to image above). Make sure to paste it at end of of the existing code. If the card layout becomes distorted ...How I Made My Anki Decks For Step 1 (250+) Using large Step 1 Anki Decks How to use Anki filter decks during dedicated USMLE STEP 1 studying The AnKing Deck- How to Use it USMLE Step 1: 5 Ways to Score 240+ How to Study Step 2 CK can be a make or break test, especially if you were disappointed with Step 1 or are interested in a competitive ...Jul 19, 2022 · Now thoroughly updated and revised, this best-selling volume in the popular Step-Up series provides a high-yield review of medicine, ideal for preparing for clerkships or clinical rotations, shelf exams, and the USMLE Step 2 In Anki, the default behavior is massed practice of decks (you finish one deck then move on to another) but interleaved ... Definitely know the QBank questions and what is in First Aid. Biochemistry, Genetics, Cell Biology, Histology -QBanks and FA should be sufficient. Cell biology is more high yield than biochemistry. Know how each fits into different diseases. Microbiology and Immunology - Questions.I got 240+ on Step 1 and it was Physeo which helped me jump 15 to 20 points higher on the real exam, while my scores were stuck around 220 during the practice tests." "I was able to score above 250 on step 1 and above 750 on comlex level 1 and Physeo had a lot to do with my success.1 The Best Microbiology Anki Decks. 2 Zanki Microbiology. 
3. Lolnotacop's Deck (Sketchy Micro)
4. Pepper Deck
5. Additional Information on Microbiology Anki Decks
  5.1 Lightyear Microbiology
  5.2 Recommended Microbiology Flashcard Resource
  5.3 Recommended Microbiology Textbook Resource

And to provide the highest high-yield deck, we have the completely updated Rapid Review. Many medical students swear by Anki, and Anki's popularity has largely been attributed to the fact that it is free to use and offers many high-quality flashcard decks for USMLE Step 1 and Step 2. These flashcards were made with Anki, a flashcard application…
Tips for using Anki (alongside Neuroanatomy, BRS Physiology, First Aid 2016/2017, and SketchyPharm): watch Sketchy during your Micro block and re-watch it the summer before 2nd year (time permitting).

Jul 29, 2021 · Students have the flexibility to create their own cards and decks by importing pictures and charts as necessary. Anki is often used in tandem with First Aid and the UWorld question bank (QBank) to make flashcard sets based on the material being reviewed in each resource. However, UWorld, the QBank leader for USMLE Step 1 and Step 2 CK test…

Apr 07, 2020 · Anki decks used for Step 1: Zanki (Biochem, Immunology, Basic Pathology, Reproductive); Lolnotacop (follows Sketchy Micro and UWorld micro); Lightyear (everything else; follows Boards and Beyond and FA); Dorian's Anatomy Deck.

Find out how to weave Anki into your study routine, how to utilize the add-on to its full potential, and how it can be incorporated into a high-yield Step 1 strategy. At the end of the webinar, you'll also have an opportunity to live chat with AnKing, Glutanimate, and AMBOSS! AMBOSS x Anki: Excel on Step 1 has already aired, but you can still…
Make sure to paste it at end of of the existing code. If the card layout becomes distorted ...Zanki (Anking's updated version) - Step 1. This deck has quickly become one of the most popular among which to choose. It was created by a medical student with the reddit username, u/ZankiStep1, and later expanded upon by many more medical students. It toutes a card count of over 25,000 and attempts to comprehensively cover most facts found ...Posted by December 15, 2020 Leave a comment on high yield step 1 images anki December 15, 2020 Leave a comment on high yield step 1 images anki. This deck is a comprehensive deck for USMLE Step 1. With Stream Deck, maximize your production value and focus on what matters most: your audience. This will be an all-inconclusive ultimate.Oct 13, 2020 · Thus, it is worth mentioning my digital gurus here – Prerak Juthani and Med School Insiders. Without further ado, let me give you the manageable steps to start creating your own Anki flashcards. Step -1 = Download Anki. – The simple steps are available on their site. NOTE – Creating Anki flashcards is much simpler on laptop or PC. 121 views1 year ago Service status The great thing about Anki, for medical school, is that there are many high quality premade decks out there with all the information you will need for Step 1 (pretty much) The AnKing Step 1 Deck Just FYI, the audio is a little off after 10 minutes This video explains some basic tips for using the AnKing ...And to provide the highest high-yield deck, we have the completely updated Rapid Review Many medical students swear by Anki, and Anki's popularity has largely been attributed to the fact that it is free to use and offers many high-quality flashcard decks for USMLE Step 1 and Step 2 These flashcards were made with Anki, a flashcard application ...A large portion of the Step 1 deck is now tagged with High Yield tags (Separates cards into High , Medium and Low yield ). These are under ^Other::^HighYield. 
We hope to finish this by V12. The first 3 chapters are ultra high yield for Step 1! Cons. Can feel oversimplified at times. ~$100; Sketchy. Pros. Memory palace is so effective! Helps you retain information long-term! Fun to watch! ... If you are using Anki and can't decide which pre-made deck to use, just pick one and stick to it. Most decks, such as Lightyear and Anking ...There isn't an official definition for "High Yield.". However, people usually mean, "material likely to be tested," usually on the USMLEs. The implication is that high yield material is worth studying. This is to contrast with "low yield" material. "Low yield" material are usually things covered in classes that won't show up ...Feb 08, 2022 · Anki: Long gone are the days of spending all afternoon on Anki. Efficiency is key now. Pathoma: I wish I had stuck to some of the more simple, high yield details in Pathoma. UWorld/First Aid love to bog you down with lots of details. NBME, STEP 1, and Pathoma tend to focus more on high yield details and common diseases. Step 1 Anki Deck | AnKing Overhaul The AnKing Step 1 Deck This deck is a comprehensive deck for USMLE Step 1. It combines the best parts of Lolnotacop and Zanki with images from the Pepper micro/antimicrobial and UltraZanki decks. It is re-organized and probably the best organization currently available.Anatomy Shelf Notes 100 cases anatomy for usmle step 1 It is designed to ensure ... your Musculoskeletal videos are ready!Board and Beyond 17 6 30 Anki 16 11 27 USMLE/NBME Qbanks 19 13 22 Cram figther 30 6 17 Goljan lectures 35 4 14 Sketchy Medical 35 2 11 YouTube videos 35 9 9 USMLE-Rx 47 0 6 Osmosis 45 5 4 UNMC course material 37 15 2.Resource #1: Anki. A resource that’s technically free that I think is nice to have and use is Anki. It goes with the Brosencephalon and Zanki cards which are pre-made, but I like to use Aki for Step 1 to make my own flashcards. I know a lot of you just don’t care about flashcards. 
Maybe you think it takes too much time. monotub contamination Now you can access any high-yield USMLE-Rx tool in one great app!* Get on-the-go access to USMLE board-style questions, Rx Bricks, flash cards, and videos. It contains Zanki and some Pepper and tons more. Usmle Rx Anki Deck Download First Aid Express Step 1 2019 (USMLE-Rx) Videos + FA 2019 Book. Final Thoughts: Best Anki Decks For Pharmacology. ...2326 Option 1 or 866 Click the Change Deck button Additionally, while Anki is an excellent tool for memorization and recall, the MCAT exam requires more than just remembering facts and After each exam, I suspended the cards, so around 12 weeks before Step I unsuspended them and was doing ~1500 reviews a day, which slowly decreased to ~300 by ...AMBOSS High-Yield Step 1 Study Plan Lecture; ... AnkiStill OMM Deck Flashcards; Anki Zanki Step 2 Neurology Flashcards; UWise Anki Deck Flashcards; Anki. 100 Concepts ... med BULLETS Step 1. MB BULLETS Step 1 For 1st and 2nd Year Med Students. MB BULLETS Step 2 & 3 For 3rd and 4th Year Med Students. ORTHO BULLETS Orthopaedic Surgeons & Providers. JOIN NOW LOGIN. ... Biochemistry High-Yield Topics. Topics with the highest number of questions. # Topic Importance Scrore Questions; 1:Study for Step 1 using an Anki deck of your choice and consider our favorite USMLE Step 1 Prep Courses. There's also this helpful video that gives tips on how to study fewer than five hours per day, including lecture time. Anki Deck Face-Off Zanki vs Lightyear. So a 240 on Step 1 is the 66th percentile Step 2 anki deck dorian Step 2 anki deck dorian Like the Zanki Step 2 deck, this deck is ...Find out how to weave Anki into your study routine, how to utilize the add-on to its full potential, and how it can be incorporated into a high-yield Step 1 strategy. At the end of the webinar, you'll also have an opportunity to live chat with AnKing, Glutanimate, and AMBOSS! 
AMBOSS x Anki: Excel on Step 1 has already aired, but you can still ...email protected] [email protected] [email protected] samantha safie forum afdak aan huis dumnezeu vorbeste prin viseAce Med School and Pass Step 1. The best of all the big resources in one place for an affordable price. High-yield videos, practice questions, and more. Study Medicine in One Place. Not All Over the Place. Using the MCAT for Victory Anki deck is the best way to memorize everything on the MCAT. Check out examples of some cards representative of the deck. ... The MCAT for Victory textbooks will help you learn and remember all the high-yield content on the MCAT. The 10 textbooks are full of tables, illustrations, and step-by-step problem solving ...AMBOSS High-Yield Step 1 Study Plan Lecture; AMBOSS Neurology Clerkship Study Plan Lecture; ... AnkiStill OMM Deck Flashcards; Anki Zanki Step 2 Neurology Flashcards; UWise Anki Deck Flashcards; Anki. 100 Concepts Anatomy - Clark Deck Flashcards; Adytumdweller Deck 2.0 - Pixorize Biochemistry Flashcards;Anki is a tool I use daily to remember things better. Below are the things I have learned about typesetting math equations in Anki using both MathJax and raw LaTeX. Hopefully these notes can save you some time. Update [2020-04-17] Anki 2.1+ now has built-in support for MathJax. This is now the best approach to math typesetting, since it removes.Master cardiology from a Harvard-trained anesthesiologist who scored USMLE 270 with these 130+ high-yield flash cards. ... 1. Anki Flashcard Mastery > Step 1, 2CK Anki Cards 2. Personalized Study Plan > 1-on-1 Tutoring (Step 1, ... Yousmle Step 1 Deck Stand out to residencies by mastering the most important Step 1 concepts ...Using the MCAT for Victory Anki deck is the best way to memorize everything on the MCAT. Check out examples of some cards representative of the deck. ... The MCAT for Victory textbooks will help you learn and remember all the high-yield content on the MCAT. 
The 10 textbooks are full of tables, illustrations, and step-by-step problem solving ...Learn the material with Step 1 Simplified (SOS)/FA/Pathoma/school, apply the material with question banks (eg Amboss or Kaplan or Uworld). ... I recommend you use a premade deck (try Zanki, ... (2 hours at 1.4x) (During this time, also making Anki cards for things that are tough to remember or high yield). Review anki cards (2 hours). I split ... gen x trivia questions Second, and more importantly, you'll be in the habit of reviewing the entire deck, which is very good for your larger exams, whether that's the MCAT or USMLE Step 1 or COMLEX. With a fragmented deck, this just doesn't happen. Remember, for spaced repetition software like Anki to work properly, you must regularly review information.Jul 18, 2022 · This deck is a mostly comprehensive deck for USMLE Step 2 If you go to the Anki shared decks, you can click through to the most popular subjects 2mo · dadrenergic Subtlety Rogue Shadowlands Talents 5 to allow Anki to sync faster and handle the high flashcard volume without the risk of your settings/data being corrupted on every sync with Anki ... Sample Decks: Spanish, Esp Anki Word List - Words - 1 - 500 - Eng to Span, Esp Anki Word List - Words - 501 - 1000 - Eng to Span Show Class Spanish from IsabelStep-Up to USMLE Step 2 CK is your one-stop shop for high-yield, systems-based review NMS Review for USMLE Step 2 CK / Editors Kenneth Ibsen & Step 2 PreMade Anki Decks **if the anking deck is the only deck that you have, feel free to edit the "default" options group Step-Up to USMLE Step 2 CK is your one-stop shop for high-yield, systems-based ...Sketchy is an animated video resource that students use to memorize material like microbiology and pharmacology. It gives a memorization technique called "memory palaces". A memory palace is an imagined place, or "palace" with different objects that represent specific concepts. 
Sketchy creates these memory palaces using animated ...Now thoroughly updated and revised, this best-selling volume in the popular Step-Up series provides a high-yield review of medicine, ideal for preparing for clerkships or clinical rotations, shelf exams, and the USMLE Step 2 In Anki, the default behavior is massed practice of decks (you finish one deck then move on to another) but interleaved ...These Anki decks can help you learn french, memorize geography, understand anatomy, & more! There are a lot of details missing, to be sure, but these videos give an excellent overview of the body's anatomy Pixorize Anki Deck The Step 2 CK Deck contains 5720 cards covering all of the content in the Clerkship Decks relevant for USMLE Step 2 CK ...baddie roblox outfits codes. There are a lot of details missing, to be sure, but these videos give an excellent overview of the body's anatomy Pixorize Anki Deck The Step 2 CK Deck contains 5720 cards covering all of the content in the Clerkship Decks relevant for USMLE Step 2 CK, as well as the practice Step 2 NBME exams, and the Step 2 UWorld Self Assessments.Resource #1: Anki. A resource that’s technically free that I think is nice to have and use is Anki. It goes with the Brosencephalon and Zanki cards which are pre-made, but I like to use Aki for Step 1 to make my own flashcards. I know a lot of you just don’t care about flashcards. Maybe you think it takes too much time. Many medical students swear by Anki, and Anki’s popularity has largely been attributed to the fact that it is free to use and offers many high-quality flashcard decks for USMLE Step 1 and Step 2. Anki is often used in tandem with First Aid and the UWorld question bank (QBank) to make flashcard sets based on the material being reviewed in each ... Feb 08, 2022 · Anki: Long gone are the days of spending all afternoon on Anki. Efficiency is key now. Pathoma: I wish I had stuck to some of the more simple, high yield details in Pathoma. 
UWorld/First Aid love to bog you down with lots of details. NBME, STEP 1, and Pathoma tend to focus more on high yield details and common diseases. This is your cue to rate the card and move on. To add a timer, go to [Tools] → [Manage Note Types] → Select the note types to which you want to add a timer → [Cards] → Paste the code below to the front and back of each card (refer to image above). Make sure to paste it at end of of the existing code. If the card layout becomes distorted ...USMLE Step 1 High Yield Images Document. Hey all, I posted here about a week or so ago on collaborating on high-yield images for the exam. Well, after a few entries, we're up to 43 pages of all images. I am posting again to remind y'all, especially those with free time after finishing step to contribute even just 1-2 diseases for future test ... This is a high-yield guide on how I use Anki! These are the basics, so I've included links with more detail from the AnKing playlist below. Let me know if yo... First Aid outlines all of the high-yield material covered on Step 1, as. medicalschoolanki). ... Step 1 Deck Overview Updated yearly, the MedSchoolGurus Official Step 1 Anki Deck is designed to comprehensively cover the material in First Aid for the USMLE Step 1, Pathoma, UWorld Step 1, as well as the UWorld and NBME practice exams. ...Search: Step 1 Anatomy Deck. What is Step 1 Anatomy Deck. Likes: 362. Shares: 181.Anki isn't only for Japanese language learners Roll on Anki options groups Adjust the interval modifier in Anki's deck options until you're getting 80 to 90% of your reviews correct Looking at the Anki shared Spanish decks available, some of them have errors The standard way of working with Anki, is with a pretty awkward GUI The standard way of working with Anki, is with a pretty awkward GUI.Tag: pathoma anki deck.Pathology USMLE step 1. 
Download Pathoma 2021 Fundamentals of Pathology PDF For USMLE Step 1 .Pathoma is high yield, exceptionally suitable explained +35 hours videos, and 218 pages textbook to explain basics of pathology for 3rd-year students and board review prepared by Husain A. Sattar, MD, who was born in Chicago, Illinois, and received.Master the terms you're most likely to see on Step 1, Step 2, and the Shelf exams. ... with its short summaries of high yield terms within Anki. It's like Anki now has its own medical search engine built-in. Adil S., Class of 2022, Campbell University School of Medicine. ... • The add-on works with any Anki deck.A large portion of the Step 1 deck is now tagged with High Yield tags (Separates cards into High , Medium and Low yield ). These are under ^Other::^HighYield. We hope to finish this by V12. Apr 07, 2020 · Anki Decks used for Step 1: Zanki (Biochem, Immunology, Basic Pathology, Reproductive) Lolnotacop (Follows sketchy micro and uworld micro) Lightyear (everything else – follows Boards and Beyond and FA) Dorian’s Anatomy Deck. I am separating Anki altogether because if I had to separate one source that helped me get this score, it would be Anki. step 1 anatomy anki deck . 07 May other ways to say sorry professionally. step 1 anatomy anki deck . 7 May 2022; Posted by decorah football roster; town of tonawanda fence codes. step 1 anatomy anki deck . 07 May other ways to say sorry professionally. step 1 anatomy anki deck . 7 May 2022; Posted by decorah football roster; town of tonawanda ... The high-yield Anki Hello all, I'm post Step-2 and now wanna study for step-1 quickly. I did few NBMEs and passed safely but I wanna reciew Some updated high-yield ANKI deck that is not overwhelming (max of 2000 cards).Anki deck for USMLE Step 1 High Yield Images - link in comments below New Clinical Deck Posted by 2 years ago USMLE Step 1 High Yield Images Document Hey all, I posted here about a week or so ago on collaborating on high-yield images for the exam. 
Well, after a few entries, we're up to 43 pages of all images.The deck consists of all 120 keyboard shortcuts (240 cards due to the use of reverse guessing) 2%), Pepper Micro (63 It's because of this that students argue that the Bros deck is better than Medical School Zanki First aid for the usmle step 1 2020 pdf direct download anki …. It combines the best parts of Lolnotacop and Zanki with images from ...USMLE Step 2 CK High Yield Facts. 0.60MB. 0 audio & 7 images. Updated 2018-04-24. The author has shared 2 other item(s). Description. This deck is comprised by the high yield rapid review fact section of the First Aid for the USMLE Step 2 Clinical Knowledge (Ninth edition). ... When this deck is imported into the desktop program, cards will ...Using the MCAT for Victory Anki deck is the best way to memorize everything on the MCAT. Check out examples of some cards representative of the deck. ... The MCAT for Victory textbooks will help you learn and remember all the high-yield content on the MCAT. The 10 textbooks are full of tables, illustrations, and step-by-step problem solving ...Mar 27, 2018 · The deck is pretty different from Bros/Zanki at times, so before downloading I’d recommend reading my “guide” and explanation of the deck here: Anki: Soze’s Step 1 Master Deck. I find that I understand and retain topics better from quizzing myself with in-depth (harder) questions compared to more cards with one fact per card. Step 1 Preparation Setting up Anki 3 In 2014, Redditor u/brosencephalon shared a Step 1 Anki deck on r/medicalschool Review the deck 27 Şub 2013 27 Şub 2013. . Step 1: Outline use cases, constraints, and assumptions Fixed a bug where failing a card didn't reset its interval when per day scheduling was off Anki Decks used for Step 1: Zanki ...2022. 6. 26. 
· These scripts typically contain a disease's epidemiology, presentation, pathophysiology, diagnosis criteria, and treatment Preclinical/Step I Pepper Pharmacology Anki Deck Increase cyopalsmic calcium in cell injury Outcome of Cell Injury Recovery of Cells: In cases of reversible injuries, restoring full structure and functions may take place doxycycline hyclate.Divine Intervention Episode 399 - The 3 Confusing Poisonings (HY for Step 1-3) July 1, 2022 ~ divineinterventionpodcast ~ Leave a comment. In this HY podcast, I discuss salient means for differentiating poisonings from carbon monoxide, cyanide, and methemoglobin. I spend time discussing physiology that should help you navigate these questions ...There isn't an official definition for "High Yield.". However, people usually mean, "material likely to be tested," usually on the USMLEs. The implication is that high yield material is worth studying. This is to contrast with "low yield" material. "Low yield" material are usually things covered in classes that won't show up ...UWORLD is the resource among step 1 resources that all medical students swear by. 2400+ questions are available for you during your dedicated period. The questions cover both high-yield info as well as lower yield items which can make the difference between a 230 and a 250+ score. To avoid some of the stresses I encountered, don’t pay too ... Oct 22, 2020 · Zanki (Anking’s updated version) – Step 1. This deck has quickly become one of the most popular among which to choose. It was created by a medical student with the reddit username, u/ZankiStep1, and later expanded upon by many more medical students. It toutes a card count of over 25,000 and attempts to comprehensively cover most facts found ... step 1 anatomy anki deck . 07 May other ways to say sorry professionally. step 1 anatomy anki deck . 7 May 2022; Posted by decorah football roster; town of tonawanda fence codes. step 1 anatomy anki deck . 
07 May other ways to say sorry professionally. step 1 anatomy anki deck . 7 May 2022; Posted by decorah football roster; town of tonawanda ... Divine Intervention Episode 399 - The 3 Confusing Poisonings (HY for Step 1-3) July 1, 2022 ~ divineinterventionpodcast ~ Leave a comment. In this HY podcast, I discuss salient means for differentiating poisonings from carbon monoxide, cyanide, and methemoglobin. I spend time discussing physiology that should help you navigate these questions ...The AnKing Step 1 Deck You are required to send a bank draft or a Singapore bank cheque, payable to Step 2: About 2 to 3 months prior to commencement of the course, SMA will apply for Student 00:10 How updating This deck is a comprehensive deck for USMLE Step 1 anki decks step 1: 1 anki decks step 1: 1. .Search: Anking Step 2 Deck. See deck price, mana curve, type distribution, color distribution, mana sources, card probabilities, proxies TheMonsterOx created a new deck: Merfolk Mill For those with less-than-stellar Step 1 scores, Step 2 CK is even more critical, as it represents a student's only opportunity to compensate for Step 1 performance Step Two is a beautiful, modern two-step ...Browse through challenging USMLE Step 1-style questions that feature diagnostic materials like lab samples, images, and more. Plus, you can train under exam-day conditions by simulating the USMLE interface with our Exam Mode feature. The Basic Sciences in the Library. Get the full scope of Step 1 with the Basic Sciences area of the Library. vanilla gift card to paypal balance UWORLD is the resource among step 1 resources that all medical students swear by. 2400+ questions are available for you during your dedicated period. The questions cover both high-yield info as well as lower yield items which can make the difference between a 230 and a 250+ score. To avoid some of the stresses I encountered, don’t pay too ... During pre dedicated start off slow. 
Do 10, move to 20, then when it's closer to dedicated, try to do 40 every day. Do them non-timed (tutor mode). For your incorrects, I recommend make Anki cards for them. When you first start going through Uworld, it will take a LONG time. Don't worry about that.Master the terms you're most likely to see on Step 1, Step 2, and the Shelf exams. ... with its short summaries of high yield terms within Anki. It's like Anki now has its own medical search engine built-in. Adil S., Class of 2022, Campbell University School of Medicine. ... • The add-on works with any Anki deck.Step 2 anki deck Dec 03, 2021 · This MCAT Anki deck can help you in both low-yield and high-yield topics. But do not underestimate the time and energy investment needed. It’s one of the best MCAT Anki decks there is available. Ortho528 Anki MCAT Deck . The Ortho528 deck is one of the best Anki decks for MCAT available on student forums, containing 4351 cloze deletion cards. Absolutely! I have spent thousands upon thousands of hours honing my Anki card-making skills, to create the kinds of integrated and applied cards that helped me score 270 on Step 1. Add to your own Anki deck, or use these as the foundation for your own. Of course, you can make all of your own pathogenesis-to-presentation Anki cards, or you can ... These include Zanki, Brosencephalon, and a host of others Step-Up to USMLE Step 2 CK is your one-stop shop for high-yield, systems-based review NMS Review for USMLE Step 2 CK / Editors Kenneth Ibsen & The AnKing Step 1 Deck 3 minutes and 44 seconds Anking Reddit Anking Reddit Anking Reddit Anking Reddit.com USMLE STEP 1 Anki Deck 2017. The Anki deck comes pre-divided; Cons: There are just too many cards (20,000+) to get through in a single dedicated test prep time frame (. ... And to provide the highest high-yield deck, we have the completely updated Rapid Review. USMLE-Rx (by ScholarRx) Ready to own the USMLE and master medical school? 
Now ...A large portion of the Step 1 deck is now tagged with High Yield tags (Separates cards into High , Medium and Low yield ). These are under ^Other::^HighYield. We hope to finish this by V12. Master the terms you're most likely to see on Step 1, Step 2, and the Shelf exams. ... with its short summaries of high yield terms within Anki. It's like Anki now has its own medical search engine built-in. Adil S., Class of 2022, Campbell University School of Medicine. ... • The add-on works with any Anki deck.Many medical students swear by Anki, and Anki’s popularity has largely been attributed to the fact that it is free to use and offers many high-quality flashcard decks for USMLE Step 1 and Step 2. Anki is often used in tandem with First Aid and the UWorld question bank (QBank) to make flashcard sets based on the material being reviewed in each ... Dec 03, 2021 · This MCAT Anki deck can help you in both low-yield and high-yield topics. But do not underestimate the time and energy investment needed. It’s one of the best MCAT Anki decks there is available. Ortho528 Anki MCAT Deck . The Ortho528 deck is one of the best Anki decks for MCAT available on student forums, containing 4351 cloze deletion cards. Mar 27, 2018 · The deck is pretty different from Bros/Zanki at times, so before downloading I’d recommend reading my “guide” and explanation of the deck here: Anki: Soze’s Step 1 Master Deck. I find that I understand and retain topics better from quizzing myself with in-depth (harder) questions compared to more cards with one fact per card. There isn't an official definition for "High Yield.". However, people usually mean, "material likely to be tested," usually on the USMLEs. The implication is that high yield material is worth studying. This is to contrast with "low yield" material. 
"Low yield" material are usually things covered in classes that won't show up ...Read Online Yousmle Step 1 Anki Deck Yousmle Step 1 Anki Deck When somebody should go to the books stores, search instigation by shop, shelf by shelf, it is in fact problematic. ... Medschool Anki contains all the relevant information to High Yield Anki decks for the USMLE Step 1, 2, 3, Medical School, and Residency for Medical Students and ...Sample Decks: Spanish, Esp Anki Word List - Words - 1 - 500 - Eng to Span, Esp Anki Word List - Words - 501 - 1000 - Eng to Span Show Class Spanish from IsabelThe high-yield Anki Hello all, I'm post Step-2 and now wanna study for step-1 quickly. I did few NBMEs and passed safely but I wanna reciew Some updated high-yield ANKI deck that is not overwhelming (max of 2000 cards).Resource #1: Anki. A resource that’s technically free that I think is nice to have and use is Anki. It goes with the Brosencephalon and Zanki cards which are pre-made, but I like to use Aki for Step 1 to make my own flashcards. I know a lot of you just don’t care about flashcards. Maybe you think it takes too much time. This is a high-yield guide on how I use Anki! These are the basics, so I've included links with more detail from the AnKing playlist below. Let me know if yo... Step 1 Anki Deck | AnKing Overhaul The AnKing Step 1 Deck This deck is a comprehensive deck for USMLE Step 1. It combines the best parts of Lolnotacop and Zanki with images from the Pepper micro/antimicrobial and UltraZanki decks. It is re-organized and probably the best organization currently available.USMLE Step 1 High Yield Images Document. Hey all, I posted here about a week or so ago on collaborating on high-yield images for the exam. Well, after a few entries, we're up to 43 pages of all images. I am posting again to remind y'all, especially those with free time after finishing step to contribute even just 1-2 diseases for future test ... What is Zanki Anki Deck. Likes: 643. 
Shares: 322.Second, and more importantly, you'll be in the habit of reviewing the entire deck, which is very good for your larger exams, whether that's the MCAT or USMLE Step 1 or COMLEX. With a fragmented deck, this just doesn't happen. Remember, for spaced repetition software like Anki to work properly, you must regularly review information.Apr 15, 2019 · The deck includes cards which test extremely low-yield material. For example, many cards ask you to identify pathology on the basis of imaging findings alone. This will almost never be the case on USMLE Step 1, as test makers often provide other details to assist with the interpretation of images. Some students may therefore feel that Zanki’s ... It is also forever updatable so we as a medical school community can continually update it for new content anki addons anking This deck is a comprehensive deck for USMLE Step 1 Study USMLE Step 2 using smart web & mobile flashcards created by top students, teachers Sample Decks: First Aid for the USMLE Step 2 CK: Cardiovascular Sample Decks ...The AnKing deck is by far the best medical school Anki deck in the world. The number of hours that have gone into making this deck makes it something that nobody can replicate. Currently, the latest version is Step 1 Version 9 and Step 2 Version 4. Scroll to the bottom of the Reddit post to see whether there are any updates. 2019. 12. 5.This is a high-yield guide on how I use Anki! These are the basics, so I've included links with more detail from the AnKing playlist below. Let me know if yo... Nearly 200 new cards based on high-yield Step 1 concepts; A completely new media set with sourced images (500+ images) Entirely new organization with consistent tags. Perfect for use alongside your courses or any popular Pathology resource; Organized by system (cardiovascular, pulmonary, etc.) Clearer formatting of cards; Many 'Low-yield' cards ... 
A large portion of the Step 1 deck is now tagged with High Yield tags (separating cards into High, Medium, and Low yield). These are under ^Other::^HighYield. We hope to finish this by V12. Sketchy Biochem tags/images were updated, and the Sketchy Micro and Pharm tags were reorganized to match the current organization on the website. I disagree! Suspend the entire deck, then search by tag and unsuspend the content after you cover it. You should be able to do 8-10 cards/min with a controller. The AnKing deck is clutch; make sure you use the plugins. You should be able to cover about 250 cards in 30 min. Load balancer looks at your future review days and places new reviews on days with the least amount of load in a given interval. This way you won't have drastic swings in review numbers from day to day, so as to smooth the peaks and troughs. 5. Frozen Fields (Anki 2.0 code: 516643804 | Anki 2.1 code: 516643804). Master the terms you're most likely to see on Step 1, Step 2, and the Shelf exams ... with its short summaries of high-yield terms within Anki. "It's like Anki now has its own medical search engine built-in." (Adil S., Class of 2022, Campbell University School of Medicine.) The add-on works with any Anki deck. You'll also learn when the best time is to make (and review) your high-yield pharm flashcards, and much more. Note: this deals explicitly with learning the drugs for Step 1, Step 2, and Step 3. For pharmacology principles, you can use the basic principles of flashcard making here. This is important, as otherwise reviewing cards in the filtered deck will not count as reviewing that card in your main deck. Click Build. By default, this will create a new deck called "Filtered Deck 1". You can rename this on the home screen by clicking the gear button to the right of "Filtered Deck 1" and clicking "Rename".
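The load-balancing idea described above can be sketched in a few lines: instead of always scheduling a review on the exact computed day, choose the day within a small window that currently has the fewest reviews due. This is illustrative only; the actual Load Balanced Scheduler add-on hooks into Anki's scheduler internals, and the function and data here are made up for the example.

```python
from collections import Counter

# Hypothetical sketch of review-day load balancing: pick the day in a
# small window around the ideal due day with the fewest reviews queued.
def balanced_due_day(ideal_day: int, window: int, due_counts: Counter) -> int:
    candidates = range(max(1, ideal_day - window), ideal_day + window + 1)
    # ties break toward the day closest to the ideal interval
    return min(candidates, key=lambda d: (due_counts[d], abs(d - ideal_day)))

due = Counter({9: 120, 10: 200, 11: 40, 12: 90})
print(balanced_due_day(10, 1, due))  # -> 11, the least-loaded day among 9-11
```

Spreading reviews this way trades a tiny amount of scheduling precision for a much smoother daily workload.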
Tags are a very important part of using Anki properly. After opening the file, you will be presented with a dialog which allows you to customize how your data is imported. Early in your clinical rotation, you can go over 100-200 cards within 2 weeks. The best neurology Anki decks: Zanki Step 2 Neurology; Hoop & Ruck's Step 3 Deck: Neurology; Netter's Neuroscience Flashcards Deck; Cheesy Lightyear: B&B Anki Deck for Neurology; AVSM Clinical; plus other recommended neurology flashcards and textbooks. Anki does come with some stats that you can look at, but the Review Heatmap add-on gives you advanced tracking functionality. It is definitely a labor-intensive process. 2) Download the Premed95 P/S Anki deck from r/MCAT and use it as a ... "I got 240+ on Step 1, and it was Physeo which helped me jump 15 to 20 points higher on the real exam, while my scores were stuck around 220 during the practice tests." "I was able to score above 250 on Step 1 and above 750 on COMLEX Level 1, and Physeo had a lot to do with my success." Many medical students swear by Anki, and Anki's popularity has largely been attributed to the fact that it is free to use and offers many high-quality flashcard decks for USMLE Step 1 and Step 2. Anki is often used in tandem with First Aid and the UWorld question bank (QBank) to make flashcard sets based on the material being reviewed in each resource.
These scripts typically contain a disease's epidemiology, presentation, pathophysiology, diagnosis criteria, and treatment. Divine Intervention Episode 399 - The 3 Confusing Poisonings (HY for Step 1-3), July 1, 2022: "In this HY podcast, I discuss salient means for differentiating poisonings from carbon monoxide, cyanide, and methemoglobin. I spend time discussing physiology that should help you navigate these questions." Kaplan Medical's USMLE Step 1 Lecture Notes 2021: 7-Book Set offers in-depth review with a focus on high-yield topics in every discipline, a comprehensive approach. The Picmonic Anki Add-On works with your existing Anki decks to integrate your resources and make it even easier to access high-yield Picmonic content from within your Anki decks. By installing the Picmonic Add-On for Anki, you'll see a pop-up containing explanations, a high-yield fact list, and a Picmonic thumbnail image. There are a lot of details missing, to be sure, but these videos give an excellent overview of the body's anatomy. The Step 2 CK Deck contains 5,720 cards covering all of the content in the Clerkship Decks relevant for USMLE Step 2 CK, as well as the practice Step 2 NBME exams and the Step 2 UWorld Self-Assessments.
For those with less-than-stellar Step 1 scores, Step 2 CK is even more critical, as it represents a student's only opportunity to compensate for Step 1 performance. Mar 27, 2018: So, due to popular demand, I will share the link to my Step 1 Master Deck in this post for anyone who would like to use it to study. But first, I'd like to use this post to explain the deck and my method, because I feel that it goes against conventional Anki wisdom. It differs pretty heavily from the popular Bros and Zanki decks. 100 Concepts Anatomy deck, cloze style (469 cards): I left things out if I had already seen them in Zanki for neuro, so other systems might be redundant. I haven't cross-checked with Dorian to see if that deck has info not in here, but I was pretty comprehensive with the PDF. These include Zanki, Brosencephalon, and a host of others. Step-Up to USMLE Step 2 CK is your one-stop shop for high-yield, systems-based review.
Pathoma (Fundamentals of Pathology) is high-yield: more than 35 hours of well-explained videos plus a 218-page textbook covering the basics of pathology for third-year students and board review, prepared by Husain A. Sattar, MD. Months before the Step 1 test, you can already start studying the Brosencephalon Anki deck, which has 16,000 flashcards from First Aid and Pathoma. Apr 07, 2020: Anki decks used for Step 1: Zanki (biochem, immunology, basic pathology, reproductive); Lolnotacop (follows Sketchy Micro and UWorld micro); Lightyear (everything else; follows Boards and Beyond and FA); Dorian's anatomy deck. I am singling out Anki altogether because if I had to pick the one source that helped me get this score, it would be Anki. Dec 03, 2021: This MCAT Anki deck can help you in both low-yield and high-yield topics, but do not underestimate the time and energy investment needed. It's one of the best MCAT Anki decks available. The Ortho528 deck is one of the best Anki decks for the MCAT available on student forums, containing 4,351 cloze-deletion cards. Useful add-ons: High Yield Tags; Improved Quizlet to Anki 2.1 Importer; Learning step and review interval retention; Load Balanced Scheduler; Mini Format Pack; Pokemanki (for both Anki 2.0 and 2.1); Progress Bar; Puppy Reinforcement; Quick Colour Changing; Re-Color; Reset Ease; Speed Focus Mode; Syllabus; Put All Due Learning Cards First. 1. Make pathogenesis presentation cards to improve retention and reduce the number of cards you make. You've probably noticed that the better you know something, the fewer words it takes to describe it. The same applies to making Anki cards: the more connections you can make, the fewer cards you end up making.
Anatomy Shelf Notes: 100 cases of anatomy for USMLE Step 1. It is designed to ensure ... your musculoskeletal videos are ready! [Flattened survey table of resource usage, columns unrecoverable: Boards and Beyond, Anki, USMLE/NBME Qbanks, Cram Fighter, Goljan lectures, Sketchy Medical, YouTube videos, USMLE-Rx, Osmosis, UNMC course material.] Anki: Soze's Step 1 Master Deck. It is February of my second year in medical school. Just over four months remain between me and the USMLE Step 1 exam. As such, my Step 1 preparation is well under way. Like many of you, figuring out how and what to study was half the battle. My tentative plan is in place. Step 1: Identify what content you will be learning in lectures. Anki is open-source and optimized for speed; it will handle reviewing decks of 100,000+ cards with no problems. Anki isn't only for Japanese language learners. Adjust the interval modifier in Anki's deck options until you're getting 80 to 90% of your reviews correct. Looking at the shared Anki Spanish decks available, some of them have errors. The standard way of working with Anki is with a pretty awkward GUI.
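The 80-90% retention tip above pairs with the rule of thumb the Anki manual gives for retuning the deck-options "interval modifier" toward a target retention rate. A small sketch (the percentages used are example numbers):

```python
import math

# Anki manual's rule of thumb: new interval modifier =
# log(desired retention) / log(current retention).
def interval_modifier(desired_retention: float, current_retention: float) -> float:
    return math.log(desired_retention) / math.log(current_retention)

# e.g. currently answering 85% of reviews correctly, aiming for 90%:
print(round(interval_modifier(0.90, 0.85), 2))  # -> 0.65 (shorten intervals)
```

A modifier below 1.0 shortens intervals (more work, higher retention); above 1.0 lengthens them.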
Using Anki to study for the USMLE is an amazing way of actually studying. A deck to learn pathology, primarily based off of Pathoma, with images from several different resources. Jul 18, 2022: This deck is a mostly comprehensive deck for USMLE Step 2. If you go to the Anki shared decks, you can click through to the most popular subjects. This allows Anki to sync faster and handle the high flashcard volume without the risk of your settings/data being corrupted on every sync. Finally, I've compiled some of the most high-yield Anki cards I've made, and added new explanations to concepts I wish I'd known before I'd begun my Step 1 preparations. Learn the material with Step 1 Simplified (SOS)/FA/Pathoma/school; apply the material with question banks (e.g., Amboss, Kaplan, or UWorld). I recommend you use a premade deck (try Zanki, ...) (2 hours at 1.4x; during this time, also make Anki cards for things that are tough to remember or high-yield). Review Anki cards (2 hours).
I split ... Anki is a tool I use daily to remember things better. Below are the things I have learned about typesetting math equations in Anki using both MathJax and raw LaTeX. Hopefully these notes can save you some time. Update [2020-04-17]: Anki 2.1+ now has built-in support for MathJax. This is now the best approach to math typesetting. Jul 29, 2021: Students have the flexibility to create their own cards and decks by importing pictures and charts as necessary. Now you can access any high-yield USMLE-Rx tool in one great app: get on-the-go access to USMLE board-style questions, Rx Bricks, flashcards, and videos. It contains Zanki and some Pepper and tons more. In 2014, Redditor u/brosencephalon shared a Step 1 Anki deck on r/medicalschool.
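As a concrete illustration of the MathJax notes above, here is roughly how a card's fields might be typed into Anki 2.1, which renders `\(...\)` as inline math and `\[...\]` as display math (the card content itself is made up for the example):

```latex
% Example Anki 2.1 card fields using the built-in MathJax delimiters:
% \( ... \) for inline math, \[ ... \] for display math.
Front: State the quadratic formula for \(ax^2 + bx + c = 0\).
Back:  \[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
```

Unlike the older raw-LaTeX path, MathJax renders in the app itself, so no local LaTeX toolchain is involved.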
You're listening to Step 1 Success Stories by Physeo: the playbook of those who dominated the USMLE (posted January 11, 2021). Let's give it a try: "A 4-year-old African-American male presents at a free clinic with pale conjunctiva, breathlessness, and ..." Study for Step 1 using an Anki deck of your choice, and consider our favorite USMLE Step 1 prep courses. There's also this helpful video that gives tips on how to study fewer than five hours per day, including lecture time. Anki Deck Face-Off: Zanki vs. Lightyear. For context, a 240 on Step 1 is the 66th percentile. Like the Zanki Step 2 deck, this deck is based on UWorld Step 2 CK questions, but it is made more in the style of Brosencephalon's Step 1 deck, with short questions and limited context; the cards are each brief yet high-yield. The Lightyear biochemistry deck is similar to Zanki in that it makes up a small part of a much larger deck. There's also a lot of overlap, and some of the cards here have found their way into AnKing's latest overhaul. For users of Boards and Beyond (the video series all Lightyear decks are based on), this deck is a really great choice. Anki: long gone are the days of spending all afternoon on Anki; efficiency is key now. Pathoma: I wish I had stuck to some of the simpler, high-yield details in Pathoma. UWorld/First Aid love to bog you down with lots of details; the NBMEs, Step 1, and Pathoma tend to focus more on high-yield details and common diseases. Avtandil Kochiashvili — My Step 1 Experience. Score: 273. Target: 255+.
First of all, we have to talk about foundation in the basic subjects of medical school. I did basic sciences for the first 2 ... The book itself is designed as a review book specifically for Step 1, running through all the high-yield principles of a typical med school curriculum. The deck itself was originally built and shared in 2017 by the Reddit user ZankiStep1. It makes up the foundation of AnKing's deck, considered the gold standard of USMLE Step 1 study. First Aid outlines all of the high-yield material covered on Step 1. Step 1 Deck Overview: Updated yearly, the MedSchoolGurus Official Step 1 Anki Deck is designed to comprehensively cover the material in First Aid for the USMLE Step 1, Pathoma, and UWorld Step 1, as well as the UWorld and NBME practice exams. Apr 20, 2018: This is the third update of my Step 1 Anki deck.
If this is your first time reading or hearing about my deck, I suggest checking out my original post in which I explain the deck, Anki: Soze's Step 1 Master Deck (it's different from the Zanki/Bros style). Also, I feel that it's important to point out that this deck should be used as a ... I made this into an Anki deck; I'll cram them after I read. High-Yield Step 1 Fact Wiki: I've decided to compile a wiki of high-yield facts that I pick up from my studies. My intention is to build a review sheet for me to use right before Step, and also for other people to peruse when they want to learn some useful knowledge. The Brosencephalon deck is most effective for reinforcing previously learned facts and relationships. However, we do recognize and respect that all students are different, and that some students strongly prefer to use Anki cards and the Bros deck as a primary resource. We certainly believe that this is compatible with strong USMLE performance, if tailored appropriately to the individual student. Sep 19, 2021: This deck has a lot of notes, but, let's face it, Step 1 is an eight-hour exam! You're going to have to cover a lot of material, and there are multiple Reddit threads, YouTube videos, blogs, and the like, including AnKing's own website AnkiHub, that will allow for collaboration in editing and updating their newest version. The .anki file extension has only one distinct file type (Anki Deck Data format) and is mostly associated with a single piece of software from Damien Elmes (Anki).
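On the file-format point: Anki's current exchange format, the `.apkg` export, is a zip archive wrapping an SQLite collection database plus a JSON media manifest. A minimal sketch of the container's shape (the archive built here is a fabricated stand-in, not a valid importable deck):

```python
import io
import zipfile

# A .apkg export is a zip archive containing an SQLite collection
# database and a media manifest. Build a tiny in-memory stand-in just
# to show the container shape (NOT a valid deck -- the real
# collection.anki2 is a populated SQLite file).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apkg:
    apkg.writestr("collection.anki2", b"")  # SQLite database in a real export
    apkg.writestr("media", "{}")            # maps numbered media files to names

with zipfile.ZipFile(buf) as apkg:
    print(apkg.namelist())  # -> ['collection.anki2', 'media']
```

Knowing this is occasionally handy, e.g. for bulk-inspecting media in a shared deck without importing it.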
The first 3 chapters are ultra-high-yield for Step 1! Cons: can feel oversimplified at times; ~$100. Sketchy. Pros: the memory-palace method is effective, helps you retain information long-term, and is fun to watch. If you are using Anki and can't decide which pre-made deck to use, just pick one and stick to it. USMLE Step 2 CK High Yield Facts (0.60 MB, 0 audio & 7 images, updated 2018-04-24; the author has shared 2 other items). Description: this deck comprises the high-yield rapid-review fact section of First Aid for the USMLE Step 2 Clinical Knowledge (ninth edition). AnKing's deck is the most comprehensive Step 1 deck out there; this is also its one major criticism from some users: the fact that it's too comprehensive! Due to its size, it will take a lot of time to get through. Most users recommend setting aside months to go through it (not just your dedicated exam period). The time will also vary depending on what first-pass materials you use and how long it takes you to complete them.
Lightyear, a Boards and Beyond–based Step 1 Anki deck (22.5k cards, shared on r/medicalschoolanki), is a single deck organized via hierarchical tags into 4 major sections. Sketchy is an animated video resource that students use to memorize material like microbiology and pharmacology. It teaches a memorization technique called "memory palaces". A memory palace is an imagined place, or "palace", with different objects that represent specific concepts; Sketchy creates these memory palaces using animation.
(This deck does not add any new material; it only reorganizes the Pepper, Zanki pharm, and Lolnotacop decks. You will need paid subscriptions to Pathoma, Boards and Beyond, and Sketchy to legally use these decks.) Jun 22, 2020: 1. Used the AnKing deck and KEPT UP WITH REVIEWS. For those of you unfamiliar with it, Anki is a free electronic flashcard program that uses a spaced-repetition algorithm, available on Android, Windows, and Mac (the Apple mobile app is $25, smh). You can make your own cards (I do not recommend this) or use pre-made decks (see below). Pathoma is a tremendous resource for studying for USMLE Step 1 and preparing for third-year clerkships. Dr. Sattar's lectures cover all of the high-yield pathology points. He explains everything from a basic mechanistic approach, which is critical because that is how the questions are tested, but more importantly it develops true understanding. Free Step 1 Diagnostic Exams:
Kaplan's free USMLE Step 1 diagnostic test is 3 hours in length and provides detailed feedback showing how you did overall and on individual disciplines. USMLE Advising Sessions: schedule a free 20-minute session with one of our advisors; they know every exam and every part of the medical ... Anki decks can also help you learn French, memorize geography, understand anatomy, and more.
These are complete SketchyMicro and SketchyPharm Anki decks that I made while I was studying for USMLE Step 1 and COMLEX Level 1! I found that Anki was the best way for me to memorize all of the SketchyMedical sketches. If you have downloaded the deck, you will see that the cards are separated by subject. Tips for using Anki: watch Sketchy during your micro block and re-watch it the summer before 2nd year (time permitting).
The keyboard-shortcuts deck consists of all 120 shortcuts (240 cards due to the use of reversed cards). It's because of this that students argue that the Bros deck is better than Zanki. 1-2 days before an exam, read First Aid and BRS Physiology as a high-yield review. At the end of the block, move all your cards from the "Current" deck to the "Combined Review" deck. Repeat! Keep doing cards from the Combined Review deck along with your current class deck. Oct 13, 2020: Thus, it is worth mentioning my digital gurus here: Prerak Juthani and Med School Insiders. Without further ado, let me give you the manageable steps to start creating your own Anki flashcards. Step 1: download Anki; the simple steps are available on their site. Note: creating Anki flashcards is much simpler on a laptop or PC.
For users of Board and Beyond though (the video series all Lightyear decks are based on) this deck is a really great choice.The book itself is designed as a review book specifically for Step 1, running through all the high yield principles of a typical med school curriculum. The deck itself was originally built and shared in 2017 by the reddit user ZankiStep1. It makes up the foundation of Anking's deck, considered the gold standard of USMLE Step 1 study.2022. 6. 26. · These scripts typically contain a disease's epidemiology, presentation, pathophysiology, diagnosis criteria, and treatment Preclinical/Step I Pepper Pharmacology Anki Deck Increase cyopalsmic calcium in cell injury Outcome of Cell Injury Recovery of Cells: In cases of reversible injuries, restoring full structure and functions may take place doxycycline hyclate.MEHLMANMEDICAL. Premium 1-on-1 Tutoring for USMLE, CBSE/COMP, Shelf Exams, Clinical Rotations, Medical CourseworkThis deck is a mostly comprehensive deck for USMLE Step 2 If you go to the Anki shared decks, you can click through to the most popular subjects 2mo · dadrenergic Subtlety Rogue Shadowlands Talents 5 to allow Anki to sync faster and handle the high flashcard volume without the risk of your settings/data being corrupted on every sync with Anki ...Access by logging in to the "MCAT Official Prep Hub Read Free Yousmle Step 1 Anki Deck How I Made My Anki Decks For Step 1 (250+) How I Made My Anki Decks For Step 1 (250+) by Muggle Doctor, Almost 9 months ago 5 minutes, 14 seconds 1,827 views In this video, I talk about how I made my, Anki decks , from FirstAid as well as test question Read ...Step 2 anki deck Posted by December 15, 2020 Leave a comment on high yield step 1 images anki December 15, 2020 Leave a comment on high yield step 1 images anki. ... Step 1 Anki Deck; Step 2 Anki Deck; Pharm Deck; Sample Cards; Search; FREE Consult: Master More - Faster - for Impressive Boards Scores. 
Boards and Beyond: Boards and Beyond is a video series where Dr.Expensive test were unfamiliar with, to Make things easier for you to identify Pathology on USMLE! Neuroanatomy, BRS Physiology, First Aid 2016/2017 and SketchyPharm Tips for Using Anki, high yield step 1 images anki! During your Micro block and re-watch the summer before 2 nd year ( time permitting ) Step...These are complete SketchyMicro and SketchyPharm Anki decks that I made while I was studying for USMLE Step 1 and COMLEX Level 1! I found that Anki was the best way for me to memorize all of the SketchyMedical sketches. ... high yield information. ... If you have downloaded the deck, you will see that the cards are separated by subject. The ...MEHLMANMEDICAL. Premium 1-on-1 Tutoring for USMLE, CBSE/COMP, Shelf Exams, Clinical Rotations, Medical CourseworkThese include Zanki, Brosencephalon, and a host of others Step-Up to USMLE Step 2 CK is your one-stop shop for high-yield, systems-based review NMS Review for USMLE Step 2 CK / Editors Kenneth Ibsen & The AnKing Step 1 Deck 3 minutes and 44 seconds Anking Reddit Anking Reddit Anking Reddit Anking Reddit.Then, stick to it. 1. Make a schedule - Set reasonable goals and stick to the schedule. Most people study for approximately 4-6 weeks. In general, 4 weeks is sufficient to prepare for Step I, but if you choose to study for five or six weeks, just increase the study time listed below for each subject proportionally.Join the Officer Board to receive a 25% discount on MCAT courses Anki French Vocabulary Flashcards: This is a set of 300 Anki flashcards pre-made for French learners Thus, I recommend making a separate deck for pharmacology High Quality Japanese Anki Decks 'days' : 'day' }} In fact, over a quarter of medical school applicants have taken the ...Resource #1: Anki. A resource that’s technically free that I think is nice to have and use is Anki. 
It goes with the Brosencephalon and Zanki cards which are pre-made, but I like to use Aki for Step 1 to make my own flashcards. I know a lot of you just don’t care about flashcards. Maybe you think it takes too much time. Anki: Long gone are the days of spending all afternoon on Anki. Efficiency is key now. Pathoma: I wish I had stuck to some of the more simple, high yield details in Pathoma. UWorld/First Aid love to bog you down with lots of details. NBME, STEP 1, and Pathoma tend to focus more on high yield details and common diseases.Most of your anatomy grade will be determined by how well you memorize the structures and their relationships. Anki is the perfect tool to memorize large amounts of information. Even better, past medical students have already spent the time making high-quality, meticulously tagged Anki decks. For a one sentence TL;DR of the decks I would use if ...The Practice exams also include 230 authentic questions and replicate the MCAT experience. 3 level 1 TheKingofKeto · 3y No, it is not comprehensive enough. 0:27 - Making Your Own Anki Cards vs Premade3:05 - AAMC CARS vs JW CARS5:09 - KDramas#mcat #study #premed #kpop #twice #gidle #yuqi #kdrama #anki Apr 15, 2020 Best CFA Level 1 Books 2021.Nearly 200 new cards based on high-yield Step 1 concepts; A completely new media set with sourced images (500+ images) Entirely new organization with consistent tags. Perfect for use alongside your courses or any popular Pathology resource; Organized by system (cardiovascular, pulmonary, etc.) Clearer formatting of cards; Many 'Low-yield' cards ... High Yield Tags. Improved Quizlet to Anki 2.1 Importer. Learning step and review interval retention. Load Balanced Scheduler. Mini Format Pack. Pokemanki for both Anki 20 and 21. Progress Bar. Puppy Reinforcement. Quick Colour Changing. Re-Color. Reset Ease. Speed Focus Mode. Syllabus. Put All due learning cards first The cards are each brief yet high-yield. 
Step 2 anki deck dorian Step 2 anki deck dorian Like the Zanki Step 2 deck, this deck is based on UWorld Step 2 CK questions, but this deck is made more in the style of Brosencephalon's Step 1 deck, with short questions and limited context. Physeo - Anatomy.Jun 22, 2020 · 1. Used Anking Deck and KEPT UP WITH REVIEWS. Anki. For those of you unfamiliar with it, Anki is a FREE electronic flashcard program that uses a spaced-repetition algorithm available Android, Windows and Mac. Apple app is$25, smh. You can make your own cards (I do not recommend this) or use pre-made decks (see below). When the best time to make (and review) your high-yield pharm flashcards; Much more; Note: this deals explicitly with learning the drugs for Step 1, Step 2, and Step 3. For pharmacology principles, you can use the basic principles of flashcard making here. USMLE pharmacology principles would involve things like competitive inhibitors, kinetics ...Second, and more importantly, you'll be in the habit of reviewing the entire deck, which is very good for your larger exams, whether that's the MCAT or USMLE Step 1 or COMLEX. With a fragmented deck, this just doesn't happen. Remember, for spaced repetition software like Anki to work properly, you must regularly review information.Search: Best Anki Decks. Tags are a very important part of using Anki properly After opening the file, you will be presented with a dialog which allows you to customize how your data is imported Early in your clinical rotation , you can go over 100-200 cards within 2 weeks Craft a Personalized Anki Deck Learn the fundamentals using Anki flashcards & Anki decks today!Apr 15, 2019 · The deck includes cards which test extremely low-yield material. For example, many cards ask you to identify pathology on the basis of imaging findings alone. This will almost never be the case on USMLE Step 1, as test makers often provide other details to assist with the interpretation of images. 
Some students may therefore feel that Zanki’s ... baddie roblox outfits codes. There are a lot of details missing, to be sure, but these videos give an excellent overview of the body's anatomy Pixorize Anki Deck The Step 2 CK Deck contains 5720 cards covering all of the content in the Clerkship Decks relevant for USMLE Step 2 CK, as well as the practice Step 2 NBME exams, and the Step 2 UWorld Self Assessments.Best Anki Deck For Step 1 2022 [ANKI USMLE GUIDE] Sep 19, 2021 · Best for general prep for USMLE Step 1. ... Unify your resources and connect to high-yield info instantly. With the AMBOSS add-on for Anki, your flashcards get an upgrade with pop-up explanations and links to articles from the AMBOSS Library. To download the add-on,.Step 1 Pack USMLE Mastery Pack $189$378 3 Decks at a Discount Over 1800 Step 1 Cards Step 1, 2 and Pharm Decks Indexed by all major organ groups Covers all topics of USMLE Use anywhere with Anki Best for dedicated students BUY NOW Save \$189 FAQ Yousmlers Stand Out to Top Residencies "I searched online looking for advice when I came across Yousmle.Main Deck Step 2 Content Review Flashcard Maker: Darshan Vora. 7,624 Cards - 24 Decks - 31 Learners Sample Decks: Cardiology, Pulmonology, Neurology ... Sample Decks: Step 2 High Yield CV, Step 2 OB/GYN, Step 2 Peds Show Class Usmle 1 Microbiology. Usmle 1 Microbiology Flashcard Maker: Andrew Kuhle. 436 Cards - 15 Decks -Read Online Yousmle Step 1 Anki Deck Yousmle Step 1 Anki Deck When somebody should go to the books stores, search instigation by shop, shelf by shelf, it is in fact problematic. ... Medschool Anki contains all the relevant information to High Yield Anki decks for the USMLE Step 1, 2, 3, Medical School, and Residency for Medical Students and ...Yousmle Anki Deck (got through about 1000 of 3000 cards) My own Anki deck I made as I went along. 
Sketchy Medical is a picture mnemonic visual resource that includes 530+ videos for Micro, Path and Pharm, Biochem, Internal Med, Pediatrics, Obstetrics, Surgery and is used as a study resource for Step 1 and Step 2. Report Save.Anki: Soze's Step 1 Master Deck It is February of my second year in medical school. Just over four months remain between me and the USMLE Step 1 exam. As such, my Step 1 preparation is well under way. Like many of you, figuring out how and what to study was half the battle. My tentative plan is in place.Divine Intervention Episode 399 - The 3 Confusing Poisonings (HY for Step 1-3) July 1, 2022 ~ divineinterventionpodcast ~ Leave a comment. In this HY podcast, I discuss salient means for differentiating poisonings from carbon monoxide, cyanide, and methemoglobin. I spend time discussing physiology that should help you navigate these questions ...Anki: Long gone are the days of spending all afternoon on Anki. Efficiency is key now. Pathoma: I wish I had stuck to some of the more simple, high yield details in Pathoma. UWorld/First Aid love to bog you down with lots of details. NBME, STEP 1, and Pathoma tend to focus more on high yield details and common diseases. 
kmspico reviewno background check apartments in las vegasdoll 10 reviewswho makes rugged radios | 2022-08-14 16:33:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22349515557289124, "perplexity": 4508.399850518348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00583.warc.gz"} |
http://davidwees.com/category/topic/mathematics | Thoughts from a reflective educator.
## Two views of mathematics
(Image credit: DanCentury)
## Math in the real world: Train tracks
This is another in a series of posts about how one could find mathematics in the world around us.
## Automaticity in programming and math
I've been learning how to program for a long time, a task that has much in common with mathematics. Both programming and mathematics involve being able to solve problems. Some of the problems in programming and mathematics have well established solutions and other problems do not. On a micro-level, programming involves manipulating code, a task much like the symbolic manipulation often used in mathematics. On a macro-level, programmers and mathematicians both need to be able to trouble-shoot, organize, and communicate their solutions.
Sample code:
## Presentation: Programming in Mathematics Class
This presentation is based in part on the TED talk Conrad Wolfram gave a couple of years ago, and on some insights gained at the Computer Based Math summit I attended in November. The below presentation is slightly abreviated to make it easier to share on the web.
## Exploring algebraic complexity
Here is an idea I am exploring.
I'd like some feedback on this idea. If anyone can point me at research already done in this area, that would be appreciated. My objective is to use this to justify the use of technology in mathematics as a way of reducing algorithmic complexity so that deeper concepts can be more readily understood.
## What should be on a high school exit exam in mathematics?
Personally, I think an exit exam for school (an exam a student needs to graduate from secondary school) is not necessarily the best way to determine if a student has been prepared by their school. That aside, some of sort of assessment of what a student has learned from their school, whatever form that would take, should satisfy an important criterion; that the student is somewhat prepared for the challenges that life will throw at them.
## Mumbo Jumbo
Algebra is just mumbo jumbo to most people. Seriously.
If you asked 100 high school graduates to explain how algebra works, and why it works, I'd guess that 99% of them couldn't, not in sufficient detail to show that they really deeply understand it. Remember that I am talking about high school graduates, so these people have almost certainly had many years of algebra and algebraic concepts taught to them. Most of these people will only be able to give you some of the rules of algebra at best, and some of them don't even remember that much.
## You need to give them the tools
Every elementary school classroom should have about $20 in change. Not fake money printed on a piece of paper, but real money. Yes, some of it will go missing over time, and you might need to lock it up depending on your community, but honestly it's worth the risk. It's only $20.
## Computers should transform mathematics education
Stephen Shankland posted an interesting article on CNET today. Here is an exerpt from his article, which you should read in full. He says:
## Mathematical problem solving
Today I decided to record the process of solving a mathematical puzzle I found at the Project Euler website, in an effort to try and begin to analyze the problem solving techniques I use. My interest here is mostly in how the process unfolds, and the skills I use to solve these problems, rather than the actual problems themselves, although those are interesting. Below is the video I recorded when solving this problem. | 2013-06-19 07:22:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5253066420555115, "perplexity": 699.1474904273098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708143620/warc/CC-MAIN-20130516124223-00082-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://physionet.fri.uni-lj.si/challenge/2018/ | # You Snooze, You Win: the PhysioNet/Computing in Cardiology Challenge 2018
20 June 2018:
• CinC abstract acceptances/rejections have now been sent out.
• If your abstract was rejected, do not be despondent - please see Rules and Deadlines below for a second chance to have your abstract accepted and to stay in the official running for the Challenge. This chance is open to everyone, not just those with rejected abstracts.
• If your abstract was accepted, please log in to the conference site and agree that you will attend. Then, you must submit a full article describing your results and mark it as a preprint (for others to read) by September 15th. (Don't forget that the competition deadline is noon GMT on the 1st September - this deadline will not be extended)
• If you will need a visa to attend CinC, please do this immediately, and follow these instructions.
7 May 2018:
19 April 2018:
• A Python implementation of the scoring function (gross area under precision-recall curve) is available here.
10 April 2018:
• A second sample submission, implemented using Matlab, has been posted.
• Be sure to submit your abstract for Computing in Cardiology before April 15! See the CinC site for more information. Please submit an abstract with your training results, even if you have not yet been able to obtain a score on the test set.
7 April 2018:
• A sample submission and scoring function are now online.
• The entry submission system is now open; you can access it here.
• The deadline for the Unofficial Phase has been extended. The new deadline is noon GMT, April 13.
21 February 2018:
If you have any questions or comments regarding this challenge, please post it directly in our Community Discussion Forum. This will increase transparency (benefiting all the competitors) and ensure that all the challenge organizers see your question.
## Introduction:
At the end of last year, American scientists Jeffrey Hall, Michael Rosbash and Michael Young received the Nobel Prize in Physiology or Medicine “for their discoveries of molecular mechanisms controlling the circadian rhythm”, the mechanism that regulates sleep (Osborn, 2017). The precise reasons why humans sleep (and even how much sleep we need) remains a topic of scientific inquiry. Contemporary theorists indicate that sleep may be responsible for learning and/or the clearing of neural waste products (Ogilvie and Patel, 2017).
While the precise reasons why we sleep are not perfectly understood, there is consensus on the importance of sleep for our overall health, and well-being. Inadequate sleep is associated with a wide range of negative outcomes including: impaired memory and learning, obesity, irritability, cardiovascular dysfunction, hypotension, diminished immune function (Harvard Medical School, 2006), depression (Nutt et al, 2008), and quality of life (Lee, 2009). Further studies even suggest causal links between quality of sleep, and important outcomes including mental health.
It follows that improving the quality of sleep could be used to improve a range of societal health outcomes, more generally. Of course, the treatment of sleep disorders is necessarily preceded by the diagnosis of sleep disorders. Traditionally, such diagnoses are developed in sleep laboratory settings, where polysomnography, audio, and videography of sleeping subject may be carefully inspected by sleep experts to identify potential sleep disorders.
One of the more well-studied sleep disorders is Obstructive Sleep Apnea Hypopnea Syndrome (or simply, apnea). Apneas are characterized by a complete collapse of the airway, leading to awakening, and consequent disturbances of sleep. While apneas are arguably the best understood of sleep disturbances, they are not the only cause of disturbance. Sleep arousals can also be spontaneous, result from teeth grinding, partial airway obstructions, or even snoring. In this year's PhysioNet Challenge we will use a variety of physiological signals, collected during polysomnographic sleep studies, to detect these other sources of arousal (non-apnea) during sleep.
## Challenge Data
Data for this challenge were contributed by the Massachusetts General Hospital’s (MGH) Computational Clinical Neurophysiology Laboratory (CCNL), and the Clinical Data Animation Laboratory (CDAC). The dataset includes 1,985 subjects which were monitored at an MGH sleep laboratory for the diagnosis of sleep disorders. The data were partitioned into balanced training (n = 994), and test sets (n = 989).
The sleep stages of the subjects were annotated by clinical staff at the MGH according to the American Academy of Sleep Medicine (AASM) manual for the scoring of sleep. More specifically, the following six sleep stages were annotated in 30 second contiguous intervals: wakefulness, stage 1, stage 2, stage 3, rapid eye movement (REM), and undefined.
Certified sleep technologists at the MGH also annotated waveforms for the presence of arousals that interrupted the sleep of the subjects. The annotated arousals were classified as either: spontaneous arousals, respiratory effort related arousals (RERA), bruxisms, hypoventilations, hypopneas, apneas (central, obstructive and mixed), vocalizations, snores, periodic leg movements, Cheyne-Stokes breathing or partial airway obstructions.
The subjects had a variety of physiological signals recorded as they slept through the night including: electroencephalography (EEG), electrooculography (EOG), electromyography (EMG), electrocardiology (EKG), and oxygen saturation (SaO2). Excluding SaO2, all signals were sampled to 200 Hz and were measured in microvolts. For analytic convenience, SaO2 was resampled to 200 Hz, and is measured as a percentage.
## Objective of the Challenge
The goal of the challenge is to use information from the available signals to correctly classify target arousal regions. For the purpose of the Challenge, target arousals are defined as regions where either of the following conditions is met:
• From 2 seconds before a RERA arousal begins, up to 10 seconds after it ends or,
• From 2 seconds before a non-RERA, non-apnea arousal begins, up to 2 seconds after it ends.
Please note that regions falling within 10 seconds before or after a subject wakes up, has an apnea arousal, or a hypopnea arousal will not be scored for the Challenge.
We have pre-computed the target arousals for you. They are contained in a sample-wise vector (described below in “Accessing the Data”), marked by “1”. Regions that will not be scored are marked by a “-1”, and regions that will be penalized if marked by your algorithm are marked by “0”. You do not need to recompute these scores.
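The two interval rules above are easy to illustrate. The sketch below is illustrative only (the Challenge provides these vectors pre-computed): the helper name, the half-open sample ranges, and treating annotations as sample indices at the 200 Hz rate stated under "Challenge Data" are assumptions of this sketch, not part of the official code.

```python
FS = 200  # sampling rate in Hz, per the Challenge data description

def target_region(onset, end, n_samples, is_rera, fs=FS):
    """Expand one arousal annotation (sample indices) into its scored
    target region as a half-open range [lo, hi): 2 s of padding before
    the onset, plus 10 s after the end for RERA arousals or 2 s after
    the end for other non-apnea, non-apnea-related arousals."""
    pad_after = 10 * fs if is_rera else 2 * fs
    lo = max(0, onset - 2 * fs)      # clip at the start of the record
    hi = min(n_samples, end + pad_after)  # clip at the end of the record
    return lo, hi
```

For example, a RERA annotated from sample 1000 to 2000 would be scored over samples 600 through 3999 under these assumptions.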
## Accessing the Data
If you don't have a BitTorrent client, we recommend Transmission.
The Challenge data repository contains two directories (training and test) which are each approximately 135 GB in size. Each directory contains one subdirectory per subject (e.g. training/tr03-0005). Each subdirectory contains signal, header, and arousal files; for example:
1. tr03-0005.mat: a Matlab V4 file containing the signal data.
2. tr03-0005.hea: record header file - a text file which describes the format of the signal data.
3. tr03-0005.arousal: arousal and sleep stage annotations, in WFDB annotation format.
4. tr03-0005-arousal.mat: a Matlab V7 structure containing a sample-wise vector with three distinct values (+1, 0, -1) where:
• +1: Designates arousal regions
• 0: Designates non-arousal regions
• -1: Designates regions that will not be scored
Table 1 lists functions that can be used to import the data into Python, Matlab, and C programs.
Table 1: Functions that can be used to import Challenge data.
| File type | Python | Matlab | C / C++ |
| --- | --- | --- | --- |
| Signal (.mat) and header (.hea) files | wfdb.rdrecord | rdmat | isigopen |
| Arousal annotation files (.arousal) | wfdb.rdann | rdann | annopen |
Participants should use the provided signal and arousal data to develop a model that classifies test-set subjects. More specifically, for each subject in /test, participants must generate a .vec text file that describes the probability of arousal at each sample, such as:
0.001
0.000
0.024
0.051
The names of the generated annotation files should match the name of the test subject. For instance, test/te09-0094.mat should have a corresponding file named annotations/te09-0094.vec.
Entries must be submitted as a zip file containing:
• All of the code and data files needed to train and run your algorithm
• An AUTHORS.txt file containing the list of authors
• A LICENSE.txt file containing the license for your code
• The .vec files described above
To upload your entry, create a PhysioNetWorks account (if you don't have one), and go to challenge.physionet.org. Entries must be uploaded prior to the deadline in order to be eligible.
### Scoring
Your final algorithm will only be graded for its binary classification performance on target arousal and non-arousal regions (designated by +1 and 0 in teNN-NNNN-arousals.mat), measured by the area under the precision-recall curve. The area is defined as follows:
$R_j = \dfrac{\text{number of arousal samples with predicted probability } (j/1000) \text{ or greater}}{\text{total number of arousal samples}}$

$P_j = \dfrac{\text{number of arousal samples with predicted probability } (j/1000) \text{ or greater}}{\text{total number of samples with predicted probability } (j/1000) \text{ or greater}}$

$\mathrm{AUPRC} = \sum_j P_j \left( R_j - R_{j+1} \right)$
Note that this is the gross AUPRC (i.e., for each possible value of $j$, the precision and recall are calculated for the entire test database), which is not the same as averaging the AUPRC for each record.
A Python implementation of the scoring algorithm is available here, and a Matlab/Octave implementation is here.
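For a cross-check of the definition, here is a compact pure-Python sketch. It is not the official script: it rescans the samples at each of the 1001 thresholds for clarity rather than speed, skips samples labelled -1, assumes at least one arousal sample, and takes R at j = 1001 as 0 to close the curve.

```python
def gross_auprc(probs, labels):
    """Gross AUPRC with thresholds j/1000 for j = 0..1000.
    labels: +1 arousal, 0 non-arousal, -1 unscored (ignored)."""
    pairs = [(p, y) for p, y in zip(probs, labels) if y != -1]
    n_arousal = sum(y for _, y in pairs)  # labels are 0/1 after filtering
    recalls, precisions = [], []
    for j in range(1001):
        above = [y for p, y in pairs if p >= j / 1000.0]
        tp = sum(above)
        recalls.append(tp / n_arousal)
        # When no sample clears the threshold the precision is moot:
        # the recall difference at that j is zero anyway.
        precisions.append(tp / len(above) if above else 1.0)
    recalls.append(0.0)  # R_{1001}
    return sum(precisions[j] * (recalls[j] - recalls[j + 1])
               for j in range(1001))
```

On a toy record with probabilities [0.9, 0.8, 0.3, 0.1] and labels [1, 0, 1, -1], recall drops from 1 to 1/2 just above threshold 0.3 (precision 2/3 there) and from 1/2 to 0 just above 0.9 (precision 1 there), so the gross AUPRC is 2/3 · 1/2 + 1 · 1/2 = 5/6.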
## Sample Submission
Two simple example algorithms are provided and may be used as a template for your own submission:
Entrants may have an overall total of up to three submitted entries over both the unofficial and official phases of the competition (see Table 2). Following submission, entrants will receive an email confirming their submission and reporting how well their arousal annotations match those of the held-out test set.
All deadlines occur at noon GMT (UTC) on the dates mentioned below. If you do not know the difference between GMT and your local time, find out what it is before the deadline!
Table 2: Challenge phases and entry limits.

| Phase | Start at noon GMT on | Entry limit | End at noon GMT on |
| --- | --- | --- | --- |
| Unofficial Phase | 15 February | 1 | 13 April |
| [Hiatus] | 13 April | 0 | 22 April |
| Official Phase | 23 April | 2 | 1 September |

* Wildcard submissions due 15 July
All official entries must be received no later than noon GMT on Saturday, 1 September 2018. In the interest of fairness to all participants, late entries will not be accepted or scored. Entries that cannot be scored (because of missing components, improper formatting, or excessive run time) are not counted against the entry limits.
To be eligible for the open-source award, you must do all of the following:
1. Submit at least one open-source entry that can be scored before the Phase I deadline (noon GMT on Monday, 9 April 2018).
2. Submit at least one entry during the second phase (between noon GMT on Monday, 16 April 2018 and noon GMT on Saturday, 1 September 2018). Only your final entry will count for ranking.
3. Entering an Abstract to CinC: Submit an acceptable abstract (about 299 words) on your work on the Challenge to Computing in Cardiology no later than 15 April 2018. Include the overall score for your Phase I entry in your abstract. Please select “PhysioNet/CinC Challenge” as the topic of your abstract, so it can be identified easily by the abstract review committee. You will be notified if your abstract has been accepted by email from CinC during the first week in June.
4. Wildcard submissions: For teams who did not submit an abstract in time, or whose abstracts were not accepted, the team who submits the highest-scoring entry before 15 July 2018 will have another chance to compete, if they submit a high-quality abstract and present their work at the CinC conference. We will contact the winners in July with more information.
5. Submit a full (4-page) paper on your work on the Challenge to CinC no later than the deadline of conference paper submission.
6. Attend CinC 2018 (23-26 September 2018) in Maastricht and present your work there.
Please do not submit analysis of this year’s Challenge data to other Conferences or Journals until after CinC 2018 has taken place, so the competitors are able to discuss the results in a single forum. We expect a special issue from the journal Physiological Measurement to follow the conference and encourage all entrants (and those who missed the opportunity to compete or attend CinC 2018) to submit extended analysis and articles to that issue, taking into account the publications and discussions at CinC 2018.
## Attending the Conference
If your abstract is accepted, you must log in to the conference site and agree that you will attend. Then, you must submit a full article describing your results and mark it as a preprint (for others to read) by September 15th. (Don't forget that the competition deadline is noon GMT on the 1st September - this deadline will *not* be extended.)
After agreeing to attend, you must register for the conference, pay the conference fee (prices go up after July ends), and secure a visa if you need one. See the Computing in Cardiology site for more information.
If your abstract is rejected, then you have one more chance! This year we are introducing a 'wildcard' submission. On July the 15th, the top-scoring entry that has not so far been accepted to CinC will be offered the opportunity to submit another (or a new) abstract to the conference system (containing full results). If the team can submit a quality abstract (with performance results) and register for the conference, then its members will be eligible for a prize (assuming they also attend the conference and present a poster). Don't forget, your abstract was probably rejected because it didn't contain any useful results (even on training data) and/or did not describe your methods well. So please pay attention to the abstract when submitting - it won't be automatic. We strongly believe that if you are unable to explain what you did and why, then the code is of very limited value.
We hope this is a suitable encouragement for teams that are either late to the Challenge or failed to secure a place at the conference to continue with their efforts in the competition. It would be a shame not to see potentially great works at the conference.
Look out for future announcements via the community discussion forum.
## After the Challenge
As is customary, we hope to run a special issue in Physiological Measurement with a closing date of 31 January 2019. We will therefore encourage competitors (and non-competitors) to submit updates and further reworks based on the Challenge after the award ceremony at the Computing in Cardiology Conference in Maastricht in September. | 2018-07-18 15:55:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2041187584400177, "perplexity": 3970.2594312686942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590295.61/warc/CC-MAIN-20180718154631-20180718174631-00621.warc.gz"} |
http://202.194.119.110/problem.php?id=1020 | Problem 1020 --I think it
## 1020: I think it
Time Limit: 1 Sec Memory Limit: 32 MB
Submit: 903 Solved: 130
## Description
Xiao Ming is only seven years old. Now I give him some numbers and ask him: what is the second largest sum he can obtain by choosing a part of them? For example, if I give him 1, 2, 3, then he should tell me 5, as 6 is the largest sum and 5 is the second. I think it is too hard for him, isn't it?
## Input
Standard input will contain multiple test cases. The first line of the input is a single integer T (1 <= T <=10) which is the number of test cases. And it will be followed by T consecutive test cases.
Each test case starts with a line containing an integer N (1 < N < 10), the count of numbers given to Xiao Ming. The second line contains N integer numbers ai (-10 < ai < 10).
## Output
For each test case, output the answer.
## Sample Input
2
3
1 2 3
4
0 1 2 3
## Sample Output
5
5
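Given the tiny constraints (N < 10, so at most 2^9 - 1 nonempty subsets), a brute-force sketch suffices; note that the samples imply equal sums count once (hence 5, not 6, for the second case). This is my own illustrative solution, not part of the original problem page:

```python
from itertools import combinations

def second_largest_sum(nums):
    # Collect the distinct sums over all nonempty subsets (at most 511 of them)
    sums = {sum(c) for r in range(1, len(nums) + 1)
                   for c in combinations(nums, r)}
    return sorted(sums)[-2]

print(second_largest_sum([1, 2, 3]))     # 5
print(second_largest_sum([0, 1, 2, 3]))  # 5
```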
http://www.maths.ox.ac.uk/node/9271 | # Multilevel dual approach for pricing American style derivatives
2 December 2011
14:15
John Schoenmakers
Abstract
In this article we propose a novel approach to reduce the computational complexity of the dual method for pricing American options. We consider a sequence of martingales that converges to a given target martingale and decompose the original dual representation into a sum of representations that correspond to different levels of approximation to the target martingale. By next replacing in each representation true conditional expectations with their Monte Carlo estimates, we arrive at what one may call a multilevel dual Monte Carlo algorithm. The analysis of this algorithm reveals that the computational complexity of getting the corresponding target upper bound, due to the target martingale, can be significantly reduced. In particular, it turns out that using our new approach, we may construct a multilevel version of the well-known nested Monte Carlo algorithm of Andersen and Broadie (2004) that is, regarding complexity, virtually equivalent to a non-nested algorithm. The performance of this multilevel algorithm is illustrated by a numerical example. (joint work with Denis Belomestny)
• Nomura Seminar
http://indiatry.com/interview.php?id=29&hm=2 | Nonverbal Reasoning
• ( 1 ) There is a family of six persons A, B, C, D, E and F. They are Lawyer, Doctor, Teacher, Salesman, Engineer and Accountant There are two married couples in the family. D, the Salesman is married to the Lady Teacher. The Doctor is married to the Lawyer. F, the Accountant is the son of B and brother of E. C, the Lawyer is the daughter-in-law of A. E is the unmarried Engineer. A is the grandmother of F. What is the profession of A?
• 1) Lawyer
• 2) Teacher
• 3) Doctor
• 4) None of these
• Discussion in forum
Solution : C is the daughter-in-law of A, who is the grandmother of F; this means C is the mother of F. But F is the son of B.
So B is C's husband. But C, the Lawyer, is married to the Doctor, so B is the Doctor. F, the Accountant, will be the son of B and C. E is the unmarried Engineer.
So the other married couple must be that of the grandmother of F, i.e., A and D. But D, the Salesman, is married to the Lady Teacher.
So D, the Salesman, is the grandfather of F, the father of B and the husband of A, the Lady Teacher. Hence, A's profession is Teacher.
• ( 2 ) Suresh said, "This girl is the wife of the grandson of my mother". Who is Suresh to the girl?
• 1) Father
• 2) Grandfather
• 3) Father-in-law
• 4) Husband
• Discussion in forum
Solution : Mother's grandson - Son;
Son's wife - Daughter-in-law. So the girl is the daughter-in-law of Suresh, which makes Suresh her father-in-law.
• ( 3 ) A family consists of six members P, Q, R, X, Y and Z. Q is the son of R but R is not mother of Q. P and R are a married couple. Y is the brother of R. X is the daughter of P. Z is the brother of P. How many children does P have ?
• 1) 1
• 2) 2
• 3) 3
• 4) 4
• Discussion in forum
Solution : Q is the son of R but R is not the mother, so R is the father of Q. P is married to R,
so P is the wife of R and the mother of Q. X is the daughter of P and hence of R, so she is the sister of Q. Y is the brother of R and Z is the brother of P.
Clearly, Q is the son of P and X is the daughter of P. So, P has two children.
• ( 4 ) Rajan is the brother of Sachin and Manick is the father of Rajan. Jagat is the brother of Priya and Priya is the daughter of Sachin. Who is the uncle of Jagat ?
• 1) Rajan
• 2) Sachin
• 3) Manick
• 4) None of these
• Discussion in forum
Solution : Jagat is the brother of Priya and Priya is the daughter of Sachin. So, Jagat is the son of Sachin. Now, Rajan is the brother of Sachin.
Thus, Rajan is the uncle of Jagat
• ( 5 ) If A $ B means A is the brother of B; A @ B means A is the wife of B; A # B means A is the daughter of B and A * B means A is the father of B, which of the following indicates that U is the father-in-law of P?
• 1) P @ Q $ T # U * W
• 2) P @ W $ Q * T # U
• 3) P @ Q $ W * T # U
• 4) P @ Q $ T # W * U
• Discussion in forum
Answer : 1) P @ Q $ T # U * W
Solution : P @ Q → P is the wife of Q ----------(1)
Q $ T → Q is the brother of T ---------(2)
T # U → T is the daughter of U. Hence, Q is the son of U ----------(3)
U * W → U is the father of W.
From (1) and (3), U is the father-in-law of P.
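Puzzles like questions 5 and 8 can be mechanized by expanding the coded chain into pairwise statements; a small illustrative sketch (the function and its output wording are my own, using the symbol meanings from question 5):

```python
# Symbol meanings as defined in question 5 (assumed fixed for the chain)
MEANINGS = {"$": "is the brother of", "@": "is the wife of",
            "#": "is the daughter of", "*": "is the father of"}

def decode(chain):
    """Expand a coded chain like 'P @ Q $ T # U * W' into its
    pairwise relation statements."""
    tokens = chain.split()
    return [f"{a} {MEANINGS[op]} {b}"
            for a, op, b in zip(tokens[::2], tokens[1::2], tokens[2::2])]

for line in decode("P @ Q $ T # U * W"):
    print(line)  # e.g. "P is the wife of Q", then "Q is the brother of T", ...
```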
• ( 6 ) Pointing to a photograph, Bajpai said, "He is the son of the only daughter of the father of my brother." How is Bajpai related to the man in the photograph?
• 1) Nephew
• 2) Brother
• 3) Father
• 4) Maternal Uncle
• Discussion in forum
Solution : The only daughter of the father of my brother is Bajpai's sister, so the man in the photo is the son of Bajpai's sister. Hence, Bajpai is the maternal uncle of the man in the photograph.
• ( 7 ) A family consists of six members P, Q, R, S, T and U. There are two married couples. Q is a doctor and the father of T. U is grandfather of R and is a contractor. S is grandmother of T and is a housewife. There is one doctor, one contractor, one nurse, one housewife and two students in the family. Who is the sister of T ?
• 1) R
• 2) Uncle
• 3) T
• 4) None of these
• Discussion in forum
Solution : Q, the Doctor, is the father of T. S, the Housewife, is the grandmother of T and hence the mother of Q. Since there are only two married couples, one being that of Q, the grandfather of R, i.e., U, must be married to S. Thus, R and T are both children of Q and must be the two students. So P, who remains, is the wife of Q, and she alone can be the Nurse. Thus, U must be the Contractor. Clearly, R and T are children of the same parents, so R is the sister of T.
• ( 8 ) If A $ B means A is the brother of B; B * C means B is the son of C; C @ D means C is the wife of D and A # D means A is the son of D, how is C related to A?
• 1) Maternal grandmother
• 2) Maternal aunt
• 3) Aunt
• 4) Mother
• Discussion in forum
Answer : 4) Mother
Solution : A $ B → A is the brother of B
B * C → B is the son of C
Hence, A is the son of C
C @ D → C is the wife of D
Hence, C is the mother of A.
• ( 9 ) A family consists of six members P, Q, R, S, T and U. There are two married couples. Q is a doctor and the father of T. U is grandfather of R and is a contractor. S is grandmother of T and is a housewife. There is one doctor, one contractor, one nurse, one housewife and two students in the family. What is the profession of P ?
• 1) Doctor
• 2) Nurse
• 3) Housewife
• 4) None of these
• Discussion in forum
Solution : Q, the Doctor, is the father of T. S, the Housewife, is the grandmother of T and hence the mother of Q. Since there are only two married couples, one being that of Q, the grandfather of R, i.e., U, must be married to S. Thus, R and T are both children of Q and must be the two students. So P, who remains, is the wife of Q, and she alone can be the Nurse. Thus, U must be the Contractor.
• ( 10 ) Pointing to an old man, Kailash said, "His son is my son's uncle". How is the old man related to Kailash?
• 1) Brother
• 2) Uncle
• 3) Father
• 4) Grandfather
• Discussion in forum
Solution : Kailash's son's uncle -- Kailash's brother.
So, the old man's son is Kailash's brother, i.e., the old man is Kailash's father.
https://proofwiki.org/wiki/Compact_Subspace_of_Hausdorff_Space_is_Closed | # Compact Subspace of Hausdorff Space is Closed
## Theorem
Let $H = \struct {A, \tau}$ be a Hausdorff space.
Let $C$ be a compact subspace of $H$.
Then $C$ is closed in $H$.
## Proof 1
From Subspace of Hausdorff Space is Hausdorff, a subspace of a Hausdorff space is itself Hausdorff.
Let $a \in A \setminus C$.
We are going to prove that there exists an open set $U_a$ such that $a \in U_a \subseteq A \setminus C$.
For any single point $x \in C$, the Hausdorff condition ensures the existence of disjoint open sets $\map U x$ and $\map V x$ containing $a$ and $x$ respectively.
Suppose there were only a finite number of points $x_1, x_2, \ldots, x_r$ in $C$.
Then we could take $\displaystyle U_a = \bigcap_{i \mathop = 1}^r \map U {x_i}$ and get $a \in U_a \subseteq A \setminus C$.
Now suppose $C$ is not finite.
The set $\set {\map V x: x \in C}$ is an open cover for $C$.
As $C$ is compact, it has a finite subcover, say $\set {\map V {x_1}, \map V {x_2}, \dotsc, \map V {x_r} }$.
Let $\displaystyle U_a = \bigcap_{i \mathop = 1}^r \map U {x_i}$.
Then $U_a$ is open because it is a finite intersection of open sets.
Also, $a \in U_a$ because $a \in \map U {x_i}$ for each $i = 1, 2, \ldots, r$.
Finally, if $b \in U_a$ then for each $i = 1, 2, \ldots, r$ we have $b \in \map U {x_i}$, and since $\map U {x_i}$ and $\map V {x_i}$ are disjoint, $b \notin \map V {x_i}$.
Because $\displaystyle C \subseteq \bigcup_{i \mathop = 1}^r \map V {x_i}$:
$b \notin C$
Thus:
$U_a \subseteq A \setminus C$.
Then:
$\displaystyle A \setminus C = \bigcup_{a \mathop \in A \mathop \setminus C} U_a$
So $A \setminus C$ is open.
It follows that $C$ is closed.
$\blacksquare$
## Proof 2
For a subset $S \subseteq A$, let $S^{\complement}$ denote the relative complement of $S$ in $A$.
Consider an arbitrary point $x \in C^{\complement}$.
Define the set:
$\displaystyle \mathcal O = \left\{{V \in \tau: \exists U \in \tau: x \in U \subseteq V^{\complement}}\right\}$
By Empty Intersection iff Subset of Complement, we have that:
$U \subseteq V^{\complement} \iff U \cap V = \varnothing$
Hence, by the definition of a Hausdorff space, it follows that $\mathcal O$ is an open cover for $C$.
By the definition of a compact subspace, there exists a finite subcover $\mathcal F$ of $\mathcal O$ for $C$.
By the Principle of Finite Choice, there exists an $\mathcal F$-indexed family $\left\langle{U_V}\right\rangle_{V \in \mathcal F}$ of elements of $\tau$ such that:
$\forall V \in \mathcal F: x \in U_V \subseteq V^{\complement}$
Define:
$\displaystyle U = \bigcap_{V \mathop \in \mathcal F} U_V$
By General Intersection Property of Topological Space, it follows that $U \in \tau$.
Clearly, $x \in U$.
We have that:
$\displaystyle U \subseteq \bigcap_{V \mathop \in \mathcal F} V^{\complement}$ by Set Intersection Preserves Subsets
$\displaystyle \bigcap_{V \mathop \in \mathcal F} V^{\complement} = \left({\bigcup \mathcal F}\right)^{\complement}$ by De Morgan's Laws
$\displaystyle \left({\bigcup \mathcal F}\right)^{\complement} \subseteq C^{\complement}$ by Definition of Cover of Set and Relative Complement inverts Subsets
From Subset Relation is Transitive, we have that $U \subseteq C^{\complement}$.
Hence $C^{\complement}$ is a neighborhood of $x$.
From Set is Open iff Neighborhood of all its Points, we have that $C^{\complement} \in \tau$.
That is, $C$ is closed in $H$.
$\blacksquare$
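For readers who like machine-checked statements, this theorem corresponds to a one-line Mathlib lemma; a sketch in Lean 4 (the lemma name `IsCompact.isClosed` and the blanket `import Mathlib` are my assumptions about the current Mathlib API):

```lean
import Mathlib

-- Every compact subset of a Hausdorff (T2) space is closed.
example {X : Type*} [TopologicalSpace X] [T2Space X]
    {C : Set X} (hC : IsCompact C) : IsClosed C :=
  hC.isClosed
```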
http://ivory.idyll.org/blog/cephseq-cephalopod-genomics.html | # CephSeq - a Cephalopod Genomics Consortium
I just returned from a NESCent Catalysis meeting on Cephalopod Genomics. I was invited as a bioinformatics and genomics guy, and so I spent four days in North Carolina talking about the opportunities and challenges of sequencing cephalopods.
Cephalopods are a class of the molluscs, and include squid and octopus, as well as Nautilus. As part of the molluscs, they're within the broader Lophotrochozoa, a superphylum about which we know rather little, genomically speaking.
The critters are pretty strange and interesting. They're suspected to be quite intelligent, and are studied for their behavior. Octopus is well known for camouflage and mimicry (see Roger Hanlon's video). Architeuthis, the Giant Squid, is the very definition of charismatic MEGAfauna. Loligo squid is widely fished, and has a gigantic axon that is one of the classic models for neurophysiology. My friend Erich Schwarz, who was also at the meeting, said "this is as close as we're going to get to sequencing intelligent aliens!"
The goal of the meeting was to chart a course forward to sequencing cephalopod genomes. It was organized by Clifton Ragsdale, Laure Bonnaud, and Leonid Moroz, and attended by about 30 people. 20 or more of the attendees were members of the cephalopod community, while the remaining people came from sequencing centers or bioinformatics and genomics labs.
I had virtually no prior knowledge of cephs, and the first day -- on the known biology, phylogeny, and genomes -- was an eye opener. The smallest known ceph genome, that of the pygmy squid Idiosepius, is around 2.1 Gb, and many of the genomes are 4-5 Gb, quite a bit larger than the human genome (at ~3 Gb). Phylogenetically, the cephs are a deeply diverged class and have about 700 species, separated by up to 300 million years or more -- as divergent as human and fish, roughly. Unlike vertebrates, they may evolve rather quickly at a sequence level, and to really use homology to connect sequences we may need to sequence a bunch of cephs at various strategically-chosen distances from each other.
The meeting concluded with a white paper writing session, and that will presumably be made available soon. In the meantime, here are some of the things that I found most interesting:
We ended up planning a sort of post-genomic consortium, as Eric Edsinger-Gonzales pointed out to me. So many people are already generating sequence (both genomic DNA and RNA) for cephs that the real question is how to organize, collaborate, and advance as a field, rather than how to start generating sequence.
The bioinformatics challenges for genomics on these critters are really big. Large genomes, presumably full of repeats; unculturable animals (meaning we can't inbreed quickly, and so are stuck with whatever polymorphism rate we get for a critter); divergent genomes and transcriptomes; and a really broad community scattered across 6 continents and studying between 4-10 species. I suspect that assembly and annotation of these genomes is going to be really challenging.
We (the bioinformaticians, collectively) vehemently argued for a liberal and explicit data-sharing policy. After a long morning discussion about pre-publication data release, we reached a few conclusions. First, so many people already have some sequence, and sequencing capacity is distributed so widely, that things like the Ft. Lauderdale agreement provide little leverage in pressuring people to release their data. Second, there has to be some positive incentive for people to release their data -- it's not enough to simply put in place protections for misuse of the data. Third, many biologists do not yet subscribe to the idea that data generation is relatively uninteresting compared to the analysis, and (given the way pubs and grants reward people for being "first") it's hard to blame them. Fourth, many biologists just don't see the point of making the data available outside the community. In response to all of these concerns, we put together a draft data sharing & repository proposal (here); if the community actually adopts it in the white paper, then it will be the most explicitly liberal small-community data sharing policy I've seen in biology. I have hopes.
Overall, I think it is increasingly hard to organize centralized or community funding for genome projects and databases. In cases where there isn't much funding or centralization, existing genomic resources supported by big sites such as Ensembl and UCSC can serve as good reference repositories. But I'm not sure what small communities such as the cephalopod community are going to do for the next few years. It looks like I'll be involved, so I'll let you know when I find out...
--titus
p.s. While not directly part of the cephalopod meeting, I had an interesting tweetstorm with Ewan Birney and Cameron Neylon this morning, where we discussed the draft community data sharing proposal that I'd posted. It ended with me typing up a section of the white paper in Google Docs while they kibitzed on various aspects of the writeup, live -- rather a unique experience for me :). Also of note, Casey Bergman posted links to Michael Eisen's data sharing license, the Batavia Open Genomic Data License, as well as his own musings. Cameron also shared this page on principles of data sharing and the responsibilities of data providers and data users.
I came away from the discussion with Ewan and Cameron thinking that the cultural gulf between organismal and molecular biologists on the one side, and genomicists and bioinformaticians on the other side, was still really wide and hence hard to bridge. At least some significant part of this is driven by the culture of publication driven by the federo-Elseverien funding/citation/impact factor complex prevalent in biology.
https://reckoning.dev/blog/tree-based-models/ | # A Practical Guide to Tree Based Learning Algorithms
Tree based learning algorithms are quite common in data science competitions. These algorithms empower predictive models with high accuracy, stability and ease of interpretation. Unlike linear models, they map non-linear relationships quite well. Common examples of tree based models are: decision trees, random forest, and boosted trees.
In this post, we will look at the mathematical details (along with various python examples) of decision trees, its advantages and drawbacks. We will find that they are simple and very useful for interpretation. However, they typically are not competitive with the best supervised learning approaches. In order to overcome various drawbacks of decision trees, we will look at various concepts (along with real-world examples in Python) like Bootstrap Aggregating or Bagging, and Random Forests. Another very widely used topic - Boosting will be discussed separately in a future post. Each of these approaches involves producing multiple trees that are combined to yield a single consensus prediction and often resulting in dramatic improvements in prediction accuracy.
## Decision Trees
Decision tree is a supervised learning algorithm. It works for both categorical and continuous input (features) and output (predicted) variables. Tree-based methods partition the feature space into a set of rectangles, and then fit a simple model (like a constant) in each one. They are conceptually simple yet powerful.
Let us first understand decision trees by an example. We will then analyze the process of building decision trees in a formal way. Consider a simple dataset of a loan lending company's customers. We are given Checking Account Balance, Credit History, Length of Employment and Status of Previous Loan for all customers. The task is to predict the risk level of customers - creditable or not creditable. One sample solution for this problem can be depicted using the following decision tree:
Classification and Regression Trees or CART for short is a term introduced by Leo Breiman to refer to Decision Tree algorithms that can used for classification or regression predictive modeling problems. CART is one of the most common algorithms used for generating decision trees. It is used in the scikit-learn implementation of decision trees - sklearn.tree.DecisionTreeClassifier and sklearn.tree.DecisionTreeRegressor for classification and regression, respectively.
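A minimal usage sketch of the scikit-learn estimators just mentioned, on synthetic data (the dataset and hyper-parameters here are arbitrary illustrations):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem stands in for real data
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# CART-style tree, depth-limited to keep it interpretable
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```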
### CART Model
CART model involves selecting input variables and split points on those variables until a suitable tree is constructed. The selection of which input variable to use and the specific split or cut-point is chosen using a greedy algorithm to minimize a cost function. Tree construction ends using a predefined stopping criterion, such as a minimum number of training instances assigned to each leaf node of the tree.
### Regression Trees
Let us look at the CART algorithm for regression trees in more detail. Briefly, building a decision tree involves two steps:
• Divide the predictor space - that is, the set of possible values for $X_1, X_2, \ldots, X_p$ - into $J$ distinct and non-overlapping regions, $R_1, R_2, \ldots , R_J$ .
• For every observation that falls into the region $R_j$, make the same prediction, which is simply the mean of the response values for the training observations in $R_j$
In order to construct $J$ regions, $R_1, R_2, \ldots , R_J$, the predictor space is divided into high-dimensional rectangles or boxes. The goal is to find boxes $R_1, R_2, \ldots , R_J$ that minimize the RSS, given by
$\sum_{j=1}^{J} \sum_{i \in R_j} \big(y_i - \hat{y}_{R_j}\big)^2$
where, $\hat{y}_{R_j}$ is the mean response for the training observations within the $j^{th}$ box.
Since considering every possible such partition of space is computationally infeasible, a greedy approach is used to divide the space, called recursive binary splitting. It is greedy because at each step of the tree building process, the best split is made at that particular step, rather than looking ahead and picking a split that will lead to a better tree in some future step. Note that all divided regions $R_j \forall j \in [1, J]$ would be rectangular.
In order to perform recursive binary splitting, first select the predictor $X_j$ and the cut point $s$ such that splitting the predictor space into the regions (half planes) $R_1(j,s)=\big\{ X|X_j < s \big\}$ and $R_2(j,s)=\big\{ X|X_j \ge s \big\}$ leads to the greatest possible reduction in RSS. Mathematically, we seek $j$ and $s$ that minimizes,
$\sum_{i: x_i \in R_1(j,s)} \big(y_i-\hat{y}_{R_1}\big)^2 + \sum_{i: x_i \in R_2(j,s)} \big(y_i-\hat{y}_{R_2}\big)^2$
where $\hat{y}_{R_1}$ is the mean response for the training observations in $R_1(j,s)$, and $\hat{y}_{R_2}$ is the mean response for the training observations in $R_2(j,s)$. This process is repeated, looking for the best predictor and best cut point in order to split the data further so as to minimize the RSS within each of the resulting regions. However, this time, instead of splitting the entire predictor space, only one of the two previously identified regions is split. The process continues until a stopping criterion is reached; for instance, we may continue until no region contains more than $m$ observations. Once the regions $R_1, R_2, \ldots , R_J$ have been created, the response for a given test observation is predicted using the mean of the training observations in the region to which that test observation belongs.
### Classification Trees
A classification tree is very similar to a regression tree, except that it is used to predict a qualitative response rather than a quantitative one. Recall that for a regression tree, the predicted response for an observation is given by the mean response of the training observations that belong to the same terminal node. In contrast, for a classification tree, we predict that each observation belongs to the most commonly occurring class of training observations in the region to which it belongs (i.e. the mode response of the training observations). For classification, one is often interested not only in the predicted class, but also in the probability of belonging to each class.
The task of growing a classification tree is quite similar to the task of growing a regression tree. Just as in the regression setting, recursive binary splitting is used to grow a classification tree. However, in the classification setting, RSS cannot be used as a criterion for making the binary splits. We can replace RSS by a generic definition of node impurity measure $Q_m$, a measure of the homogeneity of the target variable within the subset regions $R_1, R_2, \ldots , R_J$. In a node $m$, representing a region $R_m$ with $N_m$ observations, the proportion of training observations in the $m^{th}$ region that are from the $k^{th}$ class can be given by,
$\hat{p}_{mk} = \frac{1}{N_m} \sum_{x_i \in R_m} I\big( y_i = k \big)$
where, $I\big(y_i = k\big)$ is the indicator function that is 1 if $y_i = k$, and 0 otherwise.
A natural definition of the impurity measure $Q_m$ is the classification error rate. The classification error rate is the fraction of the training observations in that region that do not belong to the most common class:
$E = 1 - \max_{k}\hat{p}_{mk}$
However, the classification error rate is not differentiable, and hence less amenable to numerical optimization. Furthermore, it is quite insensitive to changes in the node probabilities, which makes it rather ineffective for growing trees. Two alternative node impurity measures that are more commonly used are the Gini index and cross entropy.
Gini index is a measure of total variance across the $K$ classes, defined as,
$G = \sum_{k=1}^{K} \hat{p}_{mk} \big(1-\hat{p}_{mk}\big)$
A small value of $G$ indicates that a node contains predominantly observations from a single class.
In information theory, cross entropy is a measure of the degree of disorganization in a system. For a binary system, it is 0 if the system contains members of only one class, and 1 (using a base-2 logarithm) if it contains equal numbers from the two classes. Hence, similar to the Gini index, cross entropy can also be used as a measure of node impurity, given by,
$S = -\sum_{k=1}^{K} \hat{p}_{mk} \log\big(\hat{p}_{mk}\big)$
Similar to $G$, a small value of $S$ indicates that a node contains predominantly observations from a single class.
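Both impurity measures are straightforward to compute from the class proportions $\hat{p}_{mk}$; in this sketch I use a base-2 logarithm so that a balanced binary node has entropy exactly 1:

```python
import numpy as np

def gini(p):
    """Gini index: sum_k p_k * (1 - p_k)."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * (1.0 - p)))

def cross_entropy(p):
    """Entropy: -sum_k p_k * log2(p_k), with 0 * log 0 taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # drop empty classes to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

print(gini([0.5, 0.5]))           # 0.5 (maximally impure binary node)
print(cross_entropy([0.5, 0.5]))  # 1.0
# A pure node, e.g. p = [1.0, 0.0], scores 0 under both measures.
```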
## Common Parameters/Concepts
Now, that we understand decision tree mathematically, let us summarize some of the most common terms used in decision trees and tree-based learning algorithms. Understanding these terms should also be helpful in tuning models based on these methods.
• Root Node: Represents the entire population and further gets divided into two or more sets.
• Splitting: The process of dividing a node into two or more sub-nodes.
• Decision Node: When a sub-node splits into further sub-nodes, it is called a decision node.
• Leaf/Terminal Node: Nodes that do not get split.
• Branch/Sub-Tree: A subsection of a tree.
• Parent and Child Node: A node which is divided into sub-nodes is called the parent node of those sub-nodes, whereas the sub-nodes are the children of the parent node.
• Minimum samples for a node split: The minimum number of samples (or observations) required in a node for it to be considered for splitting. It is used to control over-fitting; higher values prevent a model from learning relations which might be highly specific to the particular sample. It should be tuned using cross-validation.
• Minimum samples for a terminal node (leaf): The minimum number of samples (or observations) required in a terminal node or leaf. Like the minimum samples for a node split, this is also used to control over-fitting. For imbalanced class problems, a lower value should be used, since regions dominated by samples belonging to the minority class will be much smaller.
• Maximum depth of tree (vertical depth): The maximum depth of the trees. It is used to control over-fitting; lower values prevent a model from learning relations which might be highly specific to the particular sample. It should be tuned using cross-validation.
• Maximum number of terminal nodes: Also referred to as the number of leaves. Can be defined in place of max_depth. Since binary trees are created, a depth of $n$ would produce a maximum of $2^n$ leaves.
• Maximum features to consider for split: The number of features to consider (selected randomly) while searching for the best split. A typical value is the square root of the total number of available features. A higher value typically leads to over-fitting, but this is problem-dependent as well.
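These knobs map directly onto the arguments of sklearn's DecisionTreeClassifier. A minimal sketch on synthetic data (the specific values are illustrative only, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)

clf = DecisionTreeClassifier(
    min_samples_split=20,   # minimum samples for a node split
    min_samples_leaf=10,    # minimum samples for a terminal node (leaf)
    max_depth=4,            # maximum depth of tree
    max_leaf_nodes=16,      # maximum number of terminal nodes
    max_features="sqrt",    # features considered per split
    random_state=0,
)
clf.fit(X, y)
print(clf.get_depth(), clf.get_n_leaves())  # both bounded by the settings above
```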
## Example of Classification Tree
For demonstrating different tree based models, I will be using the US Income dataset available at Kaggle. You should be able to download the data from Kaggle.com. Let us first look at all the different features available in this data set.
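(The code block for this step did not survive the page extraction; below is a plausible reconstruction. The inline demo CSV, and the fnlgwt column name, follow the surrounding text — adjust the file paths to match the files you actually download from Kaggle.)

```python
import io
import pandas as pd

def load_income_data(train_path, test_path, drop_cols=("fnlgwt",)):
    """Load the train/test CSVs and drop columns we will not model on.

    train_path/test_path may be file paths or file-like objects."""
    train = pd.read_csv(train_path)
    test = pd.read_csv(test_path)
    return (train.drop(columns=list(drop_cols), errors="ignore"),
            test.drop(columns=list(drop_cols), errors="ignore"))

# Demo on an inline CSV (a stand-in for the real Kaggle files):
demo = "Age,fnlgwt,Income\n25,1000,<=50K\n38,2000,>50K\n"
train_df, test_df = load_income_data(io.StringIO(demo), io.StringIO(demo))

# With the downloaded files, you would call e.g.:
# train_df, test_df = load_income_data("train.csv", "test.csv")
```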
In the above code, we imported all needed modules, loaded both test and training data as data-frames. We also got rid of the fnlgwt column that is of no importance in our modeling exercise.
Let us look at the first 5 rows of the training data:
We also need to do some data cleanup. First, I will remove any special characters from all columns. Furthermore, any space or "." characters will be removed from all string data.
As you can see, there are two columns that describe education of individuals - Education and EdNum. I would assume both of these to be highly correlated and hence remove the Education column. The Country column too should not play a role in prediction of Income and hence we would remove that as well.
Although the Age and EdNum columns are numeric, they can be easily binned and be more effective. We will bin age in bins of 10 and no. of years of education into bins of 5.
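A sketch of the binning using pandas (the column names Age and EdNum follow the text; the bin edges are illustrative assumptions):

```python
import pandas as pd

def bin_columns(df):
    out = df.copy()
    # Age in bins of 10 years, years of education in bins of 5
    out["AgeBin"] = pd.cut(out["Age"], bins=range(0, 101, 10))
    out["EdNumBin"] = pd.cut(out["EdNum"], bins=range(0, 21, 5))
    return out

df = pd.DataFrame({"Age": [23, 47], "EdNum": [9, 16]})
print(bin_columns(df))
```

`pd.cut` produces right-closed intervals, so an age of 23 lands in the (20, 30] bin.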
Now that we have cleaned the data, let us look how balanced out data set is:
Output:
Similarly frequency counts for the test set are:
Output:
In both the training and the test data sets, we find the <=50K class to be about 3 times larger than the >50K class. This suggests treating the problem as one of quite imbalanced data. However, for simplicity we will treat this exercise as a regular classification problem.
### EDA
Now, let us look at distribution and inter-dependence of different features in the training data graphically.
Let us first see how Relationships and MaritalStatus features are interrelated.
Let us look at effect of Education (measured in terms of bins of no. of years of education) on Income for different Age groups.
Recently, there has been a lot of talk about effect of gender based bias/gap in the income. We can look at the effect of Education and Race for males and females separately.
Until now, we have only looked at the inter-dependence of non-numeric features. Let us now look at the effect of CapitalGain and CapitalLoss on income.
#### Tree Classifier
Now that we understand some relationship in our data, let us build a simple tree classifier model using sklearn.tree.DecisionTreeClassifier. However, in order to use this module, we need to convert all of our non-numeric data to numeric ones. This can be quite easily achieved using the sklearn.preprocessing.LabelEncoder module along with the sklearn_pandas module to apply this on pandas data-frames directly.
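A sketch of the encoding step. (sklearn_pandas, mentioned above, is a third-party package; the plain-pandas version below achieves the same result with only sklearn.)

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

def encode_non_numeric(df):
    """Label-encode every non-numeric column into integer codes."""
    out = df.copy()
    for col in out.columns:
        if out[col].dtype == object:
            out[col] = LabelEncoder().fit_transform(out[col].astype(str))
    return out

df = pd.DataFrame({"Sex": ["Male", "Female", "Male"], "Age": [25, 30, 40]})
print(encode_non_numeric(df))
```

In practice the encoder should be fit on the union of training and test values, so that the integer codes agree across both data sets.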
Now we have both training and testing data in the correct format to build our first model!
The simplest possible tree classifier model with no optimization gave us an accuracy of 83.5%. In the case of classification problems, confusion matrix is a good way to judge the accuracy of models. Using the following code we can plot the confusion matrix for any of the tree-based models.
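(The plotting code on the original page was lost in extraction; the sketch below computes a row-normalized confusion matrix with sklearn and optionally renders it with matplotlib.)

```python
from sklearn.metrics import confusion_matrix

def normalized_confusion(y_true, y_pred):
    """Confusion matrix with each row normalized to per-class accuracy."""
    cm = confusion_matrix(y_true, y_pred).astype(float)
    return cm / cm.sum(axis=1, keepdims=True)

def plot_confusion(y_true, y_pred, labels):
    import matplotlib.pyplot as plt  # lazy import: plotting is optional
    cm = normalized_confusion(y_true, y_pred)
    fig, ax = plt.subplots()
    im = ax.imshow(cm, cmap="Blues", vmin=0, vmax=1)
    ax.set_xticks(range(len(labels)))
    ax.set_xticklabels(labels)
    ax.set_yticks(range(len(labels)))
    ax.set_yticklabels(labels)
    ax.set_xlabel("Predicted")
    ax.set_ylabel("True")
    fig.colorbar(im)
    return fig
```

The diagonal of the normalized matrix gives exactly the per-class accuracies quoted below.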
Now, we can take a look at the confusion matrix of our first model:
We find that the majority class (<=50K Income) has an accuracy of 90.5%, while the minority class (>50K Income) has an accuracy of only 60.8%.
Let us look at ways of tuning this simple classifier. We can use GridSearchCV() with 5-fold cross-validation to tune various important parameters of tree classifiers.
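A sketch of the tuning step, shown here on synthetic data (the grid values are illustrative, not the grid that produced the 85.9% reported below):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

param_grid = {
    "max_depth": [3, 5, 8],
    "min_samples_split": [2, 20],
    "min_samples_leaf": [1, 10],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

`search.best_estimator_` then holds the refit model with the best parameter combination.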
With the optimization, we find the accuracy to increase to 85.9%. In the above, we can also look at the parameters of the best model. Now, let us have a look at the confusion matrix of the optimized model.
With optimization, we find an increase in the prediction accuracy of both classes.
## Limitations of Decision Trees
Even though decision tree models have numerous advantages,
• Very simple to understand and easy to interpret
• Can be visualized
• Requires little data preparation. Note however that the sklearn.tree module does not support missing values.
• The cost of using the tree (i.e., predicting data) is logarithmic in the number of data points used to train the tree.
these models are NOT commonly used directly in practice. Some common drawbacks of decision trees are:
• Can create over-complex trees that do not generalize the data well.
• Can be unstable because small variations in the data might result in a completely different tree being generated.
• Practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree.
• Decision tree learners create biased trees if some classes dominate. It is therefore recommended to balance the dataset prior to fitting with the decision tree.
• Certain classes of functions are difficult to model using tree models, such as XOR, parity, or multiplexer.
Most of these limitations can be easily overcome by using several improvements over decision trees. In the following sections, we will be looking at some of these concepts, mainly bagging and random forests.
## Bootstrap Aggregating (Bagging)
In statistics, bootstrapping is any test or metric that relies on random sampling with replacement. We saw above that decision trees suffer from high variance. This means that if we split the training data into two parts at random, and fit a decision tree to both halves, the results that we get could be quite different. Bootstrap aggregation, or bagging, is a general-purpose procedure for reducing the variance of a statistical learning method.
Given a set of $n$ independent observations $Z_1, Z_2, \ldots, Z_n$, each with variance $\sigma^2$, the variance of the mean $\bar{Z}$ of the observations is given by $\sigma^2/n$. In other words, averaging a set of observations reduces variance. Hence a natural way to reduce the variance, and hence increase the prediction accuracy, of a statistical learning method is to take many training sets from the population, build a separate prediction model using each training set, and average the resulting predictions. Of course, there is only one problem here - we do not have access to multiple training data sets. Instead, we can bootstrap, by taking repeated samples from the (single) training data set. In this approach we generate $B$ different bootstrapped training data sets. We then train our method on the $b^{th}$ bootstrapped training set to get a prediction $\hat{f}^{*b}(x)$, and finally aggregate all of them to obtain one aggregate prediction,
$\hat{f}_{bag}(x) = \begin{cases} \frac{1}{B}\sum_{b=1}^{B} \hat{f}^{*b}(x) & \text{(regression: average)} \\ \mathop{\arg\max}\limits_{k} \sum_{b=1}^{B} I\big(\hat{f}^{*b}(x) = k\big) & \text{(classification: majority vote)} \end{cases}$
This is called bagging. Note that aggregating has different meanings in regression and classification problems. While the mean prediction works well for regression problems, classification problems need a majority vote: the overall prediction is the most commonly occurring class among the $B$ predictions.
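The two aggregation rules amount to just a few lines (a pure-Python sketch):

```python
from collections import Counter

def aggregate_regression(predictions):
    """Bagged regression: average the B individual predictions."""
    return sum(predictions) / len(predictions)

def aggregate_classification(predictions):
    """Bagged classification: majority vote among the B predictions."""
    return Counter(predictions).most_common(1)[0][0]
```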
### Out-of-Bag (OOB) Error
One big advantage of bagging is that we can get a testing error estimate without any cross-validation! Recall that the key to bagging is that trees are repeatedly fit to bootstrapped subsets of the observations. One can show that on average, each bagged tree makes use of around two-thirds of the observations. The remaining one-third of the observations not used to fit a given bagged tree are referred to as the out-of-bag (OOB) observations. We can predict the response for the $i^{th}$ observation using each of the trees in which that observation was OOB. This will yield around $B/3$ predictions for the $i^{th}$ observation. Now, using the same aggregating techniques as bagging (average for regression and majority vote for classification), we can obtain a single prediction for the $i^{th}$ observation. An OOB prediction can be obtained in this way for each of the $n$ observations, from which the overall OOB MSE (for a regression problem) or classification error (for a classification problem) can be computed. The resulting OOB error is a valid estimate of the test error for the bagged model, since the response for each observation is predicted using only the trees that were not fit using that observation.
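The "around one-third" figure follows from bootstrap sampling itself: an observation is missed by all $n$ draws with probability $(1-1/n)^n \to e^{-1} \approx 0.368$. A quick simulation confirms it:

```python
import random

random.seed(0)
n = 10_000
# One bootstrap sample: n draws with replacement from {0, ..., n-1}
sample = [random.randrange(n) for _ in range(n)]
oob = set(range(n)) - set(sample)   # observations never drawn
print(len(oob) / n)                 # close to exp(-1), about 0.368
```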
### Feature Importance Measures
Bagging typically results in improved accuracy over prediction using a single tree. However, it can be difficult to interpret the resulting model. When we bag a large number of trees, it is no longer possible to represent the resulting statistical learning procedure using a single tree, and it is no longer clear which variables are most important to the procedure. Thus, bagging improves prediction accuracy at the expense of interpretability.
Interestingly, one can obtain an overall summary of the importance of each predictor using the RSS (for bagging regression trees) or the Gini index (for bagging classification trees). In the case of bagging regression trees, we can record the total amount that the RSS is decreased due to splits over a given predictor, averaged over all $B$ trees. A large value indicates an important predictor. Similarly, in the context of bagging classification trees, we can add up the total amount that the Gini index is decreased by splits over a given predictor, averaged over all $B$ trees.
sklearn module's different bagged tree-based learning methods provide direct access to feature importance data as properties once the training has finished.
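For example, sklearn's ensemble models expose this through the feature_importances_ property (shown here on synthetic data; the values sum to 1):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=6,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Rank features by importance, highest first
ranked = sorted(enumerate(model.feature_importances_),
                key=lambda t: t[1], reverse=True)
for idx, imp in ranked:
    print(f"feature {idx}: {imp:.3f}")
```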
## Random Forest Models
Even though bagging provides improvement over regular decision trees in terms of reduction in variance and hence improved prediction, it suffers from subtle drawbacks: bagging requires us to grow fully-grown trees on bootstrapped samples, increasing the computational complexity by a factor of $B$. Furthermore, since the trees underlying bagging are correlated, the prediction accuracy will saturate as a function of $B$.
Random forests provide an improvement over bagged trees by way of a small random tweak that decorrelates the trees. Unlike bagging, in the case of random forests, as each tree is constructed, only a random sample of predictors is considered before each node is split. Since at their core random forests too are bagged trees, they lead to a reduction in variance. Additionally, random forests also lead to bias reduction, since a very large number of predictors can be considered, and local feature predictors can play a role in the tree construction.
Random forests are able to work with a very large number of predictors, even more predictors than there are observations. An obvious gain with random forests is that more information may be brought to reduce bias of fitted values and estimated splits.
There are often a few predictors that dominate the decision tree fitting process because on the average they consistently perform just a bit better than their competitors. Consequently, many other predictors, which could be useful for very local features of the data, are rarely selected as splitting variables. With random forests computed for a large enough number of trees, each predictor will have at least several opportunities to be the predictor defining a split. In those opportunities, it will have very few competitors. Much of the time a dominant predictor will not be included. Therefore, local feature predictors will have the opportunity to define a split.
There are three main tuning parameters of random forests:
• Node Size: Unlike in decision trees, the number of observations in the terminal nodes of each tree of the forest can be very small. The goal is to grow trees with as little bias as possible.
• Number of Trees: In practice, a few hundred trees is often a good choice.
• Number of Predictors Sampled: Typically, if there are a total of $D$ predictors, $D/3$ predictors in the case of regression and $\sqrt{D}$ predictors in the case of classification make a good choice.
### Example of Random Forest Model
Using the same income data as above, let us make a simple RandomForest classifier model with 500 trees.
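(The original code block did not survive on this page; the sketch below uses synthetic data for self-containedness — the scores quoted in the text come from the income data, not from this snippet. n_estimators=500 matches the 500 trees mentioned above, and oob_score=True also demonstrates the OOB error estimate from the previous section.)

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                            random_state=0)
rf.fit(X_tr, y_tr)
print("OOB score:", rf.oob_score_)
print("Test score:", rf.score(X_te, y_te))
```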
Even without any optimization, we find the model to be quite close to the optimized tree classifier with a test score of 85.1%. In terms of the confusion matrix, we again find this to be quite comparable to the optimized tree classifier with a prediction accuracy of 92.1% for the majority class (<=50K Income) and a prediction accuracy of 62.6% for the minority class (>50K Income).
As discussed before, random forest models also provide us with a metric of feature importance. We can see importance of different features in our current model as below:
Now, let us try to optimize our random forest model. Again, this can be done using the GridSearchCV() API with 5-fold cross-validation as below:
We can see this model to be significantly better than all our previous models, with a prediction rate of 86.6%. In terms of the confusion matrix though, we see a significant increase in the prediction accuracy of the majority class (<=50K Income) with a slight decrease in the accuracy for the minority class (>50K Income). This is a common problem with classification on imbalanced data.
Finally, let us also look at the feature importance from the best model.
We can see the answer to be significantly different from the previous random forest model. This is a common issue with this class of models! In the next post, I will be talking about boosted trees, which provide a significant improvement in terms of model consistency.
### Limitations of Random Forests
Apart from generic limitations of bagged trees, some of limitations of random forests are:
• Random forests don't do well at all when you require extrapolation outside the range of the dependent (or independent) variables - it is better to use other algorithms such as MARS
• They are quite slow at both training and prediction.
• They don’t deal well with a large number of categories in categorical variables.
Overall, Random Forest is usually less accurate than boosting on a wide range of tasks, and usually slower at runtime. In the next post, we will look at the details of boosting. I hope this post has helped you understand tree-based methods in more detail. Please let me know what topics I missed or should have been clearer about. You can also let me know in the comments below if there is any particular algorithm/topic that you want me to write about!
http://mathhelpforum.com/differential-geometry/78796-showing-two-infinite-sums-equal-print.html | # Showing two infinite sums are equal?
• March 15th 2009, 07:00 AM
KZA459
Showing two infinite sums are equal?
I have this problem requiring me to show two infinite sums are equal, but I can't seem to figure out how to do this. No matter what I try, I get no results.
Hence my question is: what approach should I use for this kind of question?
Here is the question, in case it helps:
Show
$\sum_{p=0}^{\infty}\frac{x^{2p}}{1+x^{4p+2}}=\sum_{q=0}^{\infty} (-1)^{q}\frac{x^{2q}}{1-x^{4q+2}}$
if |x|<1
• March 15th 2009, 11:16 AM
Opalg
Quote:
Originally Posted by KZA459
I have this problem requiring me to show two infinite sums are equal but I can't seem to figure out how to do this. No matter what I try never give any results.
Hence my question is what approach should I use toward one of these question?
This is the question in question if it can help
Show
$\sum_{p=0}^{\infty}\frac{x^{2p}}{1+x^{4p+2}}=\sum_{q=0}^{\infty} (-1)^{q}\frac{x^{2q}}{1-x^{4q+2}}$
if |x|<1
Use the binomial series for $(1+x^{4p+2})^{-1}$ to see that $\sum_{p=0}^{\infty}\frac{x^{2p}}{1+x^{4p+2}} = \sum_{p=0}^{\infty}x^{2p}\sum_{q=0}^{\infty}(-1)^qx^{(4p+2)q} = \sum_{p=0}^{\infty}\sum_{q=0}^{\infty}(-1)^qx^{4pq+2p+2q}$. Reverse the order of summation (why is this justified?) and then unwind the double sum to get $\sum_{q=0}^{\infty} (-1)^{q}\frac{x^{2q}}{1-x^{4q+2}}$.
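For the record, the "unwinding" step is just a geometric series in $p$, summable term-by-term since $|x^{4q+2}| < 1$ when $|x| < 1$:

```latex
\sum_{q=0}^{\infty}(-1)^q x^{2q}\sum_{p=0}^{\infty}x^{(4q+2)p}
  = \sum_{q=0}^{\infty}(-1)^q x^{2q}\,\frac{1}{1-x^{4q+2}}
  = \sum_{q=0}^{\infty}(-1)^{q}\frac{x^{2q}}{1-x^{4q+2}}
```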
• March 15th 2009, 12:42 PM
KZA459
Darn, thanks a lot Opalg.
• March 15th 2009, 05:05 PM
KZA459
Sorry, me again.
I also have to show a similar result for |x|>1.
However, using the power series of $\frac{1}{1+x^{4p+2}}$
does not work in this case, since it has radius of convergence 1.
I must show if |x|>1 then
$
\sum_{p=0}^{\infty}\frac{x^{2p}}{1+x^{4p+2}}= -\sum_{q=0}^{\infty} (-1)^{q}\frac{x^{2q}}{1-x^{4q+2}}
$
Unfortunately I am still just as stuck and can't find a way to figure that one out, even with the insight you gave me, Opalg.
Anyone has an idea?
• March 16th 2009, 03:40 AM
Opalg
Quote:
Originally Posted by KZA459
I also have the show a similar result for |x|>1
However using the power serie of $\frac{1}{1+x^{4p+2}}$
does not work in this case since it has as radius of convergence 1.
I must show if |x|>1 then
$
\sum_{p=0}^{\infty}\frac{x^{2p}}{1+x^{4p+2}}= -\sum_{q=0}^{\infty} (-1)^{q}\frac{x^{2q}}{1-x^{4q+2}}
$
Unfortunately I am still as stuck and can't find a way to figure that one out
If you want a series that converges when |x| > 1 then you need it to use negative powers of x. So write $\frac{x^{2p}}{1+x^{4p+2}}= x^{-2}\frac{(x^{-1})^{2p}}{1+(x^{-1})^{4p+2}}$ (dividing top and bottom by $x^{4p+2}$). You can then apply the previous result with $x^{-1}$ in place of x, noticing that an extra minus sign will come in when you transform it back into positive powers of x.
• March 16th 2009, 11:13 AM
KZA459
Thanks :)
https://www.shaalaa.com/question-bank-solutions/in-given-figure-square-dart-board-shown-length-side-larger-square-15-times-length-side-smaller-square-if-dart-thrown-lands-larger-square-what-probability-that-it-will-land-interior-smaller-square-basic-ideas-of-probability_22875 | # In the Given Figure, a Square Dart Board is Shown. the Length of a Side of the Larger Square is 1.5 Times the Length of a Side of the Smaller Square. If a Dart is Thrown and Lands on the L - Mathematics
Sum
In the given figure, a square dart board is shown. The length of a side of the larger square is 1.5 times the length of a side of the smaller square. If a dart is thrown and lands on the larger square. What is the probability that it will land in the interior of the smaller square?
#### Solution
Given: A square dart board is shown. The length of a side of the larger square is 1.5 times the length of a side of the smaller square. A dart is thrown and lands on the larger square.
To find: The probability that it will land in the interior of the smaller square. Let the length of a side of the smaller square be x cm. Therefore the length of a side of the bigger square will be 1.5x cm.
Area of bigger square = (1.5x)^2 = 2.25x^2 cm^2
Area of smaller square = x^2 cm^2
We know that Probability = "Number of favourable outcomes"/"Total number of outcomes"
Hence the probability that the dart will land in the interior of the smaller square is x^2/(2.25x^2) = 4/9
Concept: Basic Ideas of Probability
https://eprint.iacr.org/2019/1124 | ### Evolving Ramp Secret Sharing with a Small Gap
Amos Beimel and Hussien Othman
##### Abstract
Evolving secret-sharing schemes, introduced by Komargodski, Naor, and Yogev (TCC 2016b), are secret-sharing schemes in which there is no a priori upper bound on the number of parties that will participate. The parties arrive one by one and when a party arrives the dealer gives it a share; the dealer cannot update this share when other parties arrive. Motivated by the fact that when the number of parties is known, ramp secret-sharing schemes are more efficient than threshold secret-sharing schemes, we study evolving ramp secret-sharing schemes. Specifically, we study evolving $(b(j),g(j))$-ramp secret-sharing schemes, where $g,b: N \to N$ are non-decreasing functions. In such schemes, any set of parties that for some $j$ contains $g(j)$ parties from the first $j$ parties that arrive can reconstruct the secret, and any set such that for every $j$ contains less than $b(j)$ parties from the first $j$ parties that arrive cannot learn any information about the secret. We focus on the case that the gap is small, namely $g(j)-b(j)=j^{\beta}$ for $0<\beta<1$. We show that there is an evolving ramp secret-sharing scheme with gap $t^{\beta}$, in which the share size of the $j$-th party is $\tilde{O}(j^{4-\frac{1}{\log^2 {1/\beta}}})$. Furthermore, we show that our construction results in much better share size for fixed values of $\beta$, i.e., there is an evolving ramp secret-sharing scheme with gap $\sqrt{t}$, in which the share size of the $j$-th party is $\tilde{O}(j)$. Our construction should be compared to the best known evolving $g(j)$-threshold secret-sharing schemes (i.e., when $b(j)=g(j)-1$) in which the share size of the $j$-th party is $\tilde{O}(j^4)$. Thus, our construction offers a significant improvement for every constant $\beta$, showing that allowing a gap between the sizes of the authorized and unauthorized sets can reduce the share size.
In addition, we present an evolving $(k/2,k)$-ramp secret-sharing scheme for a constant $k$ (which can be very big), where any set of parties of size at least $k$ can reconstruct the secret and any set of parties of size at most $k/2$ cannot learn any information about the secret. The share size of the $j$-th party in our construction is $O(\log k\log j)$. This is an improvement over the best known evolving $k$-threshold secret-sharing schemes in which the share size of the $j$-th party is $O(k\log j)$.
Category
Cryptographic protocols
Publication info
Keywords
secret sharing
Contact author(s)
amos beimel @ gmail com
hussien othman @ gmail com
History
2020-02-21: revised
Short URL
https://ia.cr/2019/1124
CC BY
BibTeX
@misc{cryptoeprint:2019/1124,
author = {Amos Beimel and Hussien Othman},
title = {Evolving Ramp Secret Sharing with a Small Gap},
howpublished = {Cryptology ePrint Archive, Paper 2019/1124},
year = {2019},
note = {\url{https://eprint.iacr.org/2019/1124}},
url = {https://eprint.iacr.org/2019/1124}
}
https://documen.tv/question/a-local-bowling-alley-charges-3-per-game-plus-5-to-rent-a-pair-of-shoes-write-an-algebraic-epres-17669046-39/ | ## A local bowling alley charges $3 per game, plus$5 to rent a pair of shoes. Write an algebraic expression that models the cost of renting on
Question
A local bowling alley charges $3 per game, plus $5 to rent a pair of shoes. Write an algebraic expression that models the cost of renting one pair of shoes and bowling g games.
The expression is 3g + 5. This is because you pay $5 for shoes and $3 per game. Since the number of games is unknown, you put g.
https://hal-centralesupelec.archives-ouvertes.fr/hal-01572144 | # Large deviation analysis of the CPD detection problem based on random tensor theory
Abstract : The performance in terms of minimal Bayes' error probability for detection of a random tensor is a fundamental, under-studied, difficult problem. In this work, we assume that we observe under the alternative hypothesis a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size N_q × R, i.e., for 1 ≤ q ≤ Q, R, N_q → ∞ with R^{1/q}/N_q converging to a finite constant. The detection of the random entries of the core tensor is hard to study since an analytic expression of the error probability is not easily tractable. To mitigate this technical difficulty, the Chernoff Upper Bound (CUB) and the error exponent on the error probability are derived and studied for the considered tensor-based detection problem. These two quantities are related to a key quantity for the considered detection problem due to its strong link with the moment generating function of the log-likelihood test. However, the tightest CUB is reached for the value, denoted by s, which minimizes the error exponent. To solve this step, two methodologies are standard in the literature. The first one is based on the use of a costly numerical optimization algorithm. An alternative strategy is to consider the Bhattacharyya Upper Bound (BUB) for s = 1/2. In this last scenario, the costly numerical optimization step is avoided, but no guarantee exists on the optimality of the BUB. Based on powerful random matrix theory tools, a simple analytical expression of s is provided with respect to the Signal to Noise Ratio (SNR) and for low-rank CPD. Associated with a compact expression of the CUB, an easily tractable expression of the tightest CUB and the error exponent are provided and analyzed. A main conclusion of this work is that the BUB is the tightest bound at low SNRs. On the contrary, this property is no longer true for higher SNRs.
Document type :
Conference papers
Domain :
Cited literature [17 references]
https://hal-centralesupelec.archives-ouvertes.fr/hal-01572144
Contributor : Remy Boyer
Submitted on : Friday, August 4, 2017 - 5:47:20 PM
Last modification on : Saturday, June 25, 2022 - 10:25:51 PM
### File
Chernoff_tensor.pdf
Files produced by the author(s)
### Citation
Remy Boyer, Philippe Loubaton. Large deviation analysis of the CPD detection problem based on random tensor theory. 25th European Signal Processing Conference (EUSIPCO 2017), Aug 2017, Kos Island, Greece. ⟨10.23919/eusipco.2017.8081289⟩. ⟨hal-01572144⟩
http://math.stackexchange.com/questions/279362/show-that-the-solution-to-tn-tn-1-n-is-on2

# Show that the solution to $T(n) = T(n - 1) + n$ is $O(n^2)$
Hello and thanks for taking the time to answer my question.
The question is really the title itself. We're studying how to solve recurrences using the method of substitution and induction. How can I prove that this is correct?
I would really appreciate your reasoning behind the concepts as opposed to just churning out a solution.
Thank you very much!
Are you seeking to merely prove the claim, or are you asking how you would discover it if it weren't given to you? What happens, incidentally, when you try the method of substitution or induction? – Hurkyl Jan 15 '13 at 16:35
The most basic method is to expand the formula. It rarely works, but it is simple enough that it is worth checking before proceeding to more complicated stuff.
\begin{align} T(n) &= T(n-1) + n \\ &= T(n-2) + (n-1) + n \\ &= T(n-3) + (n-2) + (n-1) + n \\ &\vdots \\ &= T(0) + 1 + 2 + \ldots + (n-2) + (n-1) + n \\ &= T(0) + \frac{n(n+1)}{2} = O(n^2) \end{align}
Cheers!
EDIT: Added missing $T(0) = O(1)$ term (thanks to Hurkyl).
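The expansion above can also be checked numerically; a quick sketch (not part of the original answer), taking $T(0)=0$:

```python
def T(n):
    # Direct evaluation of the recurrence T(n) = T(n-1) + n, taking T(0) = 0.
    return 0 if n == 0 else T(n - 1) + n

# Compare with the closed form T(0) + n(n+1)/2 obtained by expansion:
for n in range(100):
    assert T(n) == n * (n + 1) // 2
print(T(100))  # 5050
```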
You mean $T(0) + n(n+1)/2$ – Hurkyl Jan 15 '13 at 16:36
This is great thanks dtldarek! I will accept your answer as correct as soon as the minimum required time elapses! How would you determine the lower bound for the same equation? – Peter Jan 15 '13 at 16:40
@Peter This is actually set of equalities, so those work both ways. In fact $T(0) + \frac{n(n+1)}{2} = \Theta(n^2)$. – dtldarek Jan 15 '13 at 16:42
oh okay.. I asked the second question in the comment because as far as I understand it O denotes the upper bound which means that T(n) <= O(n^2) and for Omega it is the opposite. Does this mean that for T(n) = T(n-1) + n Big O and Omega are n^2 as well? – Peter Jan 15 '13 at 16:46
@evinda That depends on the starting conditions, which in the original post were not given. On the other hand, it doesn't matter, as both $T(0)$ and $T(1)$ are constant. In other words, pick the one that is more convenient for you. – dtldarek Mar 7 '15 at 23:03
https://cstheory.stackexchange.com/questions/41949/are-all-turing-machines-paths-predictable/41961#41961

# Are all Turing machines' paths predictable?
I was recently studying partial solutions to the halting problem and came across the problem which I discuss below. In particular, I was studying when it is computable to tell whether a Turing machine has a certain path in terms of its movement on the tape. A positive answer to the question below would give a complete characterization of the paths for which the decision problem of whether a given Turing machine has said path is computable.

Define the path of a Turing machine to be the sequence of left and right movements it makes on an empty input. For example, this Turing machine has a path which looks like LRRRLLRRRLLLRR...
Define a turing machine $$M$$ to be predictable if there exists another turing machine $$N$$ (not necessarily computable from $$M$$) with the following properties:
1. $$N$$ has the same path as $$M$$. That is, when $$M$$ turns left, so does $$N$$. When $$M$$ turns right, so does $$N$$. Although this seems like a strong restriction at first, $$N$$ is not required to have the same states or even the same tape alphabet as $$M$$ so using both of these it may still be able to do some pretty complicated stuff relative to what $$M$$ does.
2. There is a subset of $$N$$'s states, $$Q'\subset Q$$ such that whenever $$N$$'s head reads from a square for the last time, it must also be in a state in $$Q'$$. Also, we require $$N$$ to only enter a state from $$Q'$$ if it is reading from a square for the last time. Thus, in a way, $$N$$ is able to predict the path $$M$$ and $$N$$ are simultaneously taking.
My question is: Are all Turing machines predictable?
Any help or pointers to reference materials would be appreciated, especially since the terminology here is my own and I don't know what to search to get information on this subject.
• Can you make this more formal, perhaps in the language of configurations? I don’t fully understand the question. Nov 28 '18 at 6:42
• If you really want $Q' \subset Q$ and applying your definition formally, I do not think all machines are predictable. Consider a machine with states Left, Right on alphabets 0,1,2. It starts on state Right, on a tape full of 0. The transitions are : (Right, i): write i+1, move right, go to state Left. (Left, i): write i+1, move left, go to state Right. And (Right, 2) stops. I do not think this machine is predictable, though I may be mistaken. If it is, please explain me how $N$ works so it will give an example of what you are trying to prove.
– holf
Nov 28 '18 at 8:01
• @holf Any machine that halts is predictable. Simply add a state for each step taken in the computation and put the state in $Q'$ iff at the corresponding step we've entered a square for the last time. Nov 28 '18 at 19:10
• ok so $Q'$ is a subset of the states of $N$ and not the states of $M$. That is really not clear from your definition.
– holf
Nov 29 '18 at 5:21
• You really need to be careful with the quantifiers -- the question is still ill-posed, and I am going to recommend closing it in this form. Must the machine $N$ be able to predict $M$'s last-visit property on every square? A fixed, given square? Chosen by whom and at which point? Nov 29 '18 at 8:25
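For what it's worth, holf's example machine from the comments can be simulated directly; a sketch (not part of the original thread), with the transitions implemented exactly as stated:

```python
from collections import defaultdict

# Simulating the machine proposed by holf in the comments: states Right/Left
# over alphabet {0, 1, 2}, started in state Right on an all-zero tape;
# (Right, i): write i+1, move right, go to Left; (Left, i): write i+1,
# move left, go to Right; and (Right, 2) stops.
def run(max_steps=100):
    tape = defaultdict(int)            # blank symbol is 0
    pos, state, path = 0, "Right", []
    for _ in range(max_steps):
        sym = tape[pos]
        if state == "Right":
            if sym == 2:
                return path, True      # halted
            tape[pos], pos, state = sym + 1, pos + 1, "Left"
            path.append("R")
        else:                          # state Left
            tape[pos], pos, state = sym + 1, pos - 1, "Right"
            path.append("L")
    return path, False

path, halted = run()
print("".join(path), halted)  # RLRL True
```

Taken literally, this machine halts after four moves with path RLRL, consistent with the remark above that halting machines are predictable.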
This is another way to prove that not all Turing machines are predictable.
First it's easy to note that:
• all halting machines are predictable;
• all machines that loop forever on a finite portion of the tape are predictable;
• all machines that expand towards both sides of the tape are predictable ($$N=M, Q' = \emptyset$$).
The interesting case is when a machine runs forever and visits an infinite number of cells expanding only in one direction.
The following $$M_{u}$$ that expands rightwards is unpredictable.
1. at the beginning it writes $$\# \langle M_0 \rangle$$ on the tape;
2. if the rightmost part of the tape contains $$...\#\langle M_i \rangle$$ then it shifts it to the right, adding an $$H:$$ before it:
$$...H:\#\langle M_i \rangle$$
($$...$$ is the old untouched content of the tape)
3. then it simulates $$M_i$$ on the empty tape (using the rightmost part of the tape) and checks whether it halts in $$2^{|M_i|^i}$$ space (number of cells);
4. if it halts in $$2^{|M_i|^i}$$ space it returns to $$H$$, otherwise it never visits $$H$$ again;
5. at the end it clears the right part of the tape and leaves
$$...H:\# \langle M_{i+1} \rangle$$
and jumps back to step 2.
Suppose that $$M_u$$ (possibly padded with some dummy states to increase its size) is predictable by $$N_u$$.
There exists $$M_k$$ that simulates the whole computation of $$M_u$$ up to $$...\# \langle M_k \rangle$$ (recursion theorem) and in parallel simulates $$N_u$$ using no more than $$2^{2|M_{k-1}|^{k-1}}$$ space. Indeed $$N_u$$ has the same path as $$M_u$$ by hypothesis, so after processing every $$M_u(\#\langle M_i \rangle)$$, $$M_k$$ can shift the whole tape to the leftmost cell and continue with $$M_u(\#\langle M_{i+1} \rangle)$$ (both $$M_u$$ and $$N_u$$ will never use space on the left of $$\#$$ again). Finally it can discover whether $$M_k$$ (itself) halts in space $$2^{|M_k|^k}$$ immediately after step 2: it's enough to examine whether $$N_u$$ is in a $$Q'$$ state when the simulated $$M_u$$ writes the $$H$$ or not.
If it uses less than $$2^{|M_k|^k}$$ space then $$M_k$$ can loop right and never halt, otherwise it halts; this leads to a contradiction ($$M_k$$ would be able to diagonalize itself).
• Pardon me if I'm being dense, but could you be a little more explicit on how to construct $B'$? Nov 29 '18 at 23:53
• @exfret: the problem is more tricky than I thought :-), I tried another approach. Nov 30 '18 at 15:08
If I understood your question correctly, the answer is NO. Let $$M$$ be any TM and $$w$$ any input string, and define the TM $$M'$$ as follows: it reserves the leftmost square of the tape as "special" (e.g., by first moving all of its input 1 space over to the right) and then it interprets its input as an encoding of $$\langle M,w \rangle$$ and simulates $$M$$ on $$w$$. If $$M$$ accepts $$w$$, then $$M'$$ returns to that "special" leftmost square; otherwise it never returns. Since the general decision problem is reducible to the problem of knowing whether the machine is visiting a square for the last time, the latter is undecidable.
• Edited my question to clarify that I was only looking at empty input but this does not answer my question regardless since my question isn’t to detect if a Turing machine has a certain path but whether another Turing machine with certain properties exists. Nov 29 '18 at 4:41
https://clojure.atlassian.net/browse/CLJS-674

# Relative paths are incorrect when source map isn't in same directory as project.clj
## Description
For example:
Chrome looks for dist/dist/out/foo.cljs instead of dist/out/foo.cljs
You can manually fix it by opening dist/main.map.js and changing all the dist/out to out.
I think the general solution is that all the paths must be relative to the parent of :source-map
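The manual workaround described above (rewriting the dist/out prefixes inside the source map) can be scripted; a sketch, with the file name and prefix taken from this report:

```python
def fix_sources(sources, prefix="dist/"):
    """Strip the build-dir prefix so paths are relative to the map's own directory."""
    return [s[len(prefix):] if s.startswith(prefix) else s for s in sources]

# Applying it to the file named in the report (dist/main.map.js):
#
#   import json
#   with open("dist/main.map.js") as fp:
#       smap = json.load(fp)
#   smap["sources"] = fix_sources(smap["sources"])
#   with open("dist/main.map.js", "w") as fp:
#       json.dump(smap, fp)

print(fix_sources(["dist/out/foo.cljs"]))  # ['out/foo.cljs']
```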
Potentially related to http://dev.clojure.org/jira/browse/CLJS-591
CLJS 2030
Unassigned
George Fraser
None
Major
https://brilliant.org/problems/convex-mirror/

# Convex mirror
As shown above, when an object with a height of $$3\text{ cm}$$ is placed on the optical axis of a convex mirror at a distance of $$12\text{ cm}$$ from the mirror, the height of its image is $$1\text{ cm}.$$ Then what is the focal length of the convex mirror?
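One possible solution sketch (not part of the original page), using the mirror equation with the sign convention that distances of real objects and images are positive:

```latex
m=\frac{h'}{h}=\frac{1}{3}=-\frac{d_i}{d_o}
\;\Longrightarrow\; d_i=-\frac{d_o}{3}=-4\text{ cm},
\qquad
\frac{1}{f}=\frac{1}{d_o}+\frac{1}{d_i}=\frac{1}{12}-\frac{1}{4}=-\frac{1}{6}
\;\Longrightarrow\; f=-6\text{ cm}.
```

So the focal length has magnitude 6 cm; the negative sign is just this convention's marker for a convex (diverging) mirror.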
http://mathhelpforum.com/differential-geometry/101349-limit-function-sine.html

# Math Help - Limit of a function with sine
1. ## Limit of a function with sine
Hello,
I am trying to calculate the following limit:
$\lim\limits_{x\rightarrow+\infty}{\frac{1}{x-\sqrt{x^2+1}}\sin{\frac{1}{x}}}$
So far I've multiplied by $\frac{x+\sqrt{x^2+1}}{x+\sqrt{x^2+1}}$, which yields the following:
$\lim\limits_{x\rightarrow+\infty}{\frac{x+\sqrt{x^2+1}}{-1}\sin{\frac{1}{x}}} = \lim\limits_{x\rightarrow+\infty}{-(x+\sqrt{x^2+1})\sin{\frac{1}{x}}}$
I don't really know how to continue from here. I'd appreciate any help.
3. Originally Posted by thomasdotnet
I am trying to calculate the following limit:
$\lim\limits_{x\rightarrow+\infty}{\frac{1}{x-\sqrt{x^2+1}}\sin{\frac{1}{x}}}$
So far I've multiplied by $\frac{x+\sqrt{x^2+1}}{x+\sqrt{x^2+1}}$, which yields the following:
$\lim\limits_{x\rightarrow+\infty}{\frac{x+\sqrt{x^2+1}}{-1}\sin{\frac{1}{x}}} = \lim\limits_{x\rightarrow+\infty}{-(x+\sqrt{x^2+1})\sin{\frac{1}{x}}}$
First, $\lim_{x\to\infty}x\sin\tfrac1x = \lim_{x\to\infty}\frac{\sin\frac1x}{\frac1x} = \lim_{y\to0}\frac{\sin y}y = 1$ (where y=1/x). Then
$\lim_{x\to+\infty}-(x+\sqrt{x^2+1})\sin\tfrac1x = \lim_{x\to+\infty}-(1+\sqrt{1+x^{-2}})x\sin\tfrac1x = -2.$
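A quick numerical check of this limit (not part of the original thread):

```python
import math

def f(x):
    # f(x) = sin(1/x) / (x - sqrt(x^2 + 1)), the expression from post #1.
    return math.sin(1 / x) / (x - math.sqrt(x * x + 1))

for x in (10.0, 100.0, 1000.0):
    print(x, f(x))   # tends to -2 as x grows
```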
4. That solves my problem. Thank you.
http://www.mathblogging.org/posts
# Posts
### December 21, 2014
We bought more Zometool pieces and have been playing around quite a bit with them lately. It is such a fun tool to help learn about 3D geometry. For example: Most of the stuff we’ve been building lately has used the blue struts, so we tried a special red strut challenge today for fun. I…
Japan has been well-known for its high achievement in Mathematics, particularly in the Trends in International Mathematics and Science Study (TIMSS). I had been in Japan for the past one year and had been studying their curriculum, observing classes, as well as reading elementary school textbooks (yes, I can read and understand Japanese a bit.) Since some of […] Math and Multimedia - School math, multimedia, and technology tutorials.
Peter Swan writes: The problem you allude to in the above reference and in your other papers on ethics is a broad and serious one. I and my students have attempted to replicate a number of top articles in the major finance journals. Either they cannot be replicated due to missing data or what might […] The post It’s Too Hard to Publish Criticisms and Obtain Data for Replication appeared first on Statistical Modeling, Causal Inference, and Social Science.
MIT’s magazine Technology Review has a neat section in the back called “Puzzle Corner.” The section always has clever problems, though I’ve never thought to share one with the kids until seeing the most recent issue. It was problem N/D #3 that caught my attention this week: Link to the November / December Puzzle Corner…
Apparently next january, on wednesday 21st, will be created a joint CNRS-IHÉS Laboratoire Alexander Grothendieck. That was fast! Several talks are being lined up, including one by Illusie titled “Remembering the SGA’S“. Registration is open until january 18th. Probably (and hopefully) all talks will then appear on the institute’s fine youtube channel.
I have a small script on my website that repeatedly shows 3 'randomly' chosen pictures. Each time, the script picks 3 pictures from a collection of 24. Sometimes the same picture appears more than once in the triple. How large must the collection be if you want the probability of three different pictures to be greater than or equal to 0.95? Solution week 20
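The probability question above (three independent picks from a collection of n pictures, all different with probability at least 0.95) can be answered with a short search; a sketch:

```python
from fractions import Fraction

def p_distinct(n):
    # P(three independent uniform picks from n pictures are all different)
    return Fraction((n - 1) * (n - 2), n * n)

# Smallest collection size with P(all different) >= 0.95:
n = 3
while p_distinct(n) < Fraction(95, 100):
    n += 1
print(n)  # 60
```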
Here are the most popular posts from MrHonner.com for 2014: When Desmos Fails; Calculus Gave Me a Speeding Ticket; The Last Digit of Your Age; Is Mathematics Unnatural?; Decomposing Functions Into Even and Odd Parts; Fun With Self-Referential Tests
12 / 2 = 1 + 1 + 4 Also: 12 * 2 * 1 * 1 = 4! Also: 12 / (2 + 1) = 1 * 4 Also: 1 + 2 + 2 = (1 * 1) + 4 Also: (1 * 2) + (2 * 1) = 1 * 4
Solution of exercise 13 on the study of functions and their graphical representation: Below is the solution of exercise 13 on the lesson "Study of numerical functions and their graphical representation" for the second year of the baccalaureate (physical sciences and mathematics). The exercise covers limits, derivatives, infinite branches, and the construction of the curve. Questions of exercise 13 on the study of […]
An Inequality Involving Fermat Point
Inequalities in Triangle
Bounds for the Sum of Distances to the Vertices
The unexplained mystery..."I, for one, find Gödel's incompleteness theorems rather comforting. It means that mathematicians will never be complete. There will always be something else which is undecidable with the current axioms. Should the human species survive another few million years and continue churning out mathematics at the rate we've done for the past few thousand years, we still won't have considered it all. There will always be work for all of the future mathematicians. As always, […]
I have prepared a PDF document with a solution key to selected problems from unit 5, which can be downloaded from this link.
Rabbit for the holidays? There is a 55% chance that a Dutch rabbit comes from these 10 municipalities. How many rabbits are there, actually, in the Netherlands?
Originally posted on Tuition Database Singapore: Hi Readers, Are you looking for a printable 2015 Calendar, specially tailored for Singapore? Check out this PDF printable 2015 calendar: Calendar (Generated by http://www.calendarlabs.com/pdf-calendar.php) Happy new year! Featured book: Chicken Soup for the Soul: Think…
Source: Sent to me by Pritish Kamath (http://www.mit.edu/~pritish/) Problem: Have you ever played "SET"? You have to play it. http://www.setgame.com/learn_play http://www.setgame.com/sites/default/files/Tutorials/tutorial/SetTutorial.swf Even if you have not played the game, the game can be stated in a more abstract way as follows: There are 12 points presented in $\mathbb{F}_3^4$ and the first person to observe a "line" amongst the 12 given points gets a score. Then the 3 points forming […]
$$p \mid ( p-1 ) ! + \left(\frac{a}{p}\right) a^{(p-1)/2}$$ Proof. Consider the equation $Ax \equiv a \bmod p$ with $A, x \in \{ 1,2, \cdots p-1\}$. Case $\left(\frac{a}{p}\right)=-1$: In this case $x$ and $A$ are different members of the set $\{ 1,2, \cdots p-1\}$, there are $(p-1)/2$ distinct pairs $(A, x)$ and pairwise multiplication gives the following congruence: $( p-1 ) ! \equiv a^{(p-1)/2} \bmod p$. Case $\left(\frac{a}{p}\right)=1$: In this case $a$ is a quadratic residue of $p$ so there are two pairs […]
Hello everyone! Here is my contribution for this edition of the Carnival: http://elmundoderafalillo.blogspot.com.es/2014/12/arcos-de-malaga-rebajado.html I hope you like it ;)
Hello and welcome to my 19th gems post. This is where I share some of the best teaching ideas I've seen on Twitter each week. Most of my gems have come from across the Atlantic today, inspired by our mathematical friends in America. 1. Polygraph There's been a lot of buzz about Desmos' new Polygraph activities. They're mathematical versions of Guess Who, designed to 'foster the pleasure and
So says the what’s new page. How close have they come to yymm.9999 to warrant this? Well, in september, the post-holidays peak, it got to 1409.8676 (found this by dichotomy, maybe there’s a more clever way). Actually, in october it went even higher to 1410.8871, while in november it receded a bit to 1411.8006. All […]
The theory of solving equations of fifth and sixth degree didn’t end when Abel proved the impossibility of solving the general algebraic equation of fifth degree or higher. Here is Cole (for whom the Cole Prize is named) recounting the … Continue reading →
We are pleased to upload the Mathematical Sciences question papers for SET 2014, conducted by SLET Commission, Assam. Paper I Paper II Paper III [The member state of the Commission are Assam, Arunachal Pradesh, Mizoram, Meghalaya, Manipur, Sikkim and Tripura.]
Last week’s post on “Coding Christmas” has been very popular; my personal favourite use of Scratch is to demonstrate relationships between angles and polygons which I have written on before. Investigating more projects on Scratch I found What’s My Number? which could make … Continue reading →
Diederik Stapel’s book, “Ontsporing” has been translated into English, with some modifications. From what I’ve read, it’s interesting in a bizarre, fraudster-porn sort of way. Faking Science: A true story of academic fraud Diederik Stapel Translated by Nicholas J.L. Brown Nicholas J. L. Brown (nick.brown@free.fr) Strasbourg, France December 14, 2014 Foreword to the Dutch edition […]
When you can't remember what comes after one, three is very far away. Fred (d'Alembert) Well, it seems you have made it through multiplication and division without excessive trauma. One more little step, come on: paper & pencil. Compute the square root of two hundred and sixty-four. To the second decimal place. Don't tell me you don't remember... Incredible! Right, even at a later age one runs up against [...]
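The pencil-and-paper square root mentioned above can be reproduced with the classic digit-by-digit algorithm; a sketch (not from the original post):

```python
def sqrt_digits(n, decimals=2):
    # Classic pencil-and-paper (digit-by-digit) square root of an integer n,
    # truncated to the requested number of decimals, returned as a string.
    num = n * 10 ** (2 * decimals)      # append `decimals` extra digit pairs
    pairs = []
    while num:
        pairs.append(num % 100)
        num //= 100
    root = remainder = 0
    for pair in reversed(pairs):        # consume digit pairs left to right
        remainder = remainder * 100 + pair
        digit = 9
        while (20 * root + digit) * digit > remainder:
            digit -= 1
        remainder -= (20 * root + digit) * digit
        root = root * 10 + digit
    s = str(root)
    return s[:-decimals] + "." + s[-decimals:]

print(sqrt_digits(264))  # 16.24
```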
A look back at balanced Steinhaus triangles and the Molluzzo problem … - The conjectures of the trimester / Piste rouge, featured
Long time ago, I talked about conformal changes for various geometric quantities on a given Riemannian manifold of dimension , see this post. Frequently used in conformal geometry in general, or when solving the prescribed scalar curvature equation in particular, is the conformal Laplacian, defined as follows where is the scalar curvature of the metric […]
Today's insights go in different directions. The first one came in large part from a remark by John Platt on Technet's Machine Learning blog who puts this into words much better than I ever could....Given the successes of deep learning, researchers are trying to understand how they work. Ba and Caruana had a NIPS paper which showed that, once a deep network is trained, a shallow network can learn the same function from the outputs of the deep network. The shallow network can’t learn […]
I have given you a lot of gift ideas for Christmas (Christmas Presents, for her, for him, for children), but I thought that it would be a good idea to do a DIY. Also, this is not a really maths … Continue reading →
https://math.stackexchange.com/questions/623440/logarithm-and-tensor-products

# Logarithm and tensor products
We define the von Neumann entropy for a density matrix $\rho$ (Hermitian, positive definite, with trace 1) as:
$S(\rho)=-tr(\rho \ln(\rho))$
Considering $\rho = \rho_1 \otimes \rho_2$, I want to show that $S(\rho)=S(\rho_1)+ S(\rho_2)$.
I do not see how the following equality can be (where $\mathbb{Id}$ stands for the identity matrix with appropriate dimension):
$\ln(\rho_1\otimes \mathbb{Id})= \ln(\rho_1)\otimes\mathbb{Id}$
Actually I don't understand how the definition of ln for matrices applies here.
Could someone help me on this step ?
• The tensor product of two matrices is a new matrix. Apply the logarithm to that. – Elchanan Solomon Dec 31 '13 at 17:42
• @IsaacSolomon : Ok, so is that correct to look at this like as a (block) diagonal matrix and because of the $ln(\lambda_i \delta_{ij})=ln(\lambda_i)\delta_{ij}$ we apply similarly to the upper form and get our result ? – faero Dec 31 '13 at 17:56
• Actually, $\rho\otimes I$ is not block diagonal but $I\otimes\rho$ is. – Algebraic Pavel Jan 1 '14 at 15:52
• BTW, the fact that $\ln(\rho\otimes I)=\ln(\rho)\otimes I$ follows from the useful fact in my answer :) – Algebraic Pavel Jan 1 '14 at 15:58
First, it is important to know that $$f(\rho)$$ for a Hermitian positive definite $$\rho$$ is given by $$f(\rho)=U\mathrm{diag}(f(\lambda_1),\ldots,f(\lambda_n))U^*$$, where $$\rho=U\Lambda U^*$$ is the eigenvalue/eigenvector decomposition of $$\rho$$ with $$\Lambda=\mathrm{diag}(\lambda_1,\ldots,\lambda_n)$$.
If $$\rho_1=U_1\Lambda_1U_1^*$$ and $$\rho_2=U_2\Lambda_2U_2^*$$ are the eigenvalue/eigenvector decompositions of $$\rho_1$$ and $$\rho_2$$, respectively, then $$\rho_1\otimes\rho_2=(U_1\Lambda_1U_1^*)\otimes(U_2\Lambda_2U_2^*) =(U_1\otimes U_2)(\Lambda_1U_1^*\otimes\Lambda_2U_2^*) =(U_1\otimes U_2)(\Lambda_1\otimes\Lambda_2)(U_1^*\otimes U_2^*) =(U_1\otimes U_2)(\Lambda_1\otimes\Lambda_2)(U_1\otimes U_2)^*$$ is the eigen-decomposition of $$\rho_1\otimes\rho_2$$ (we used here the mixed-product property of the Kronecker product).
If $$\rho=U\Lambda U^*$$ is the eigen-decomposition of (the Hermitian positive definite) $$\rho$$ (with $$\Lambda=\mathrm{diag}(\lambda_1,\ldots,\lambda_n)$$), then $$\rho\ln(\rho)=(U\Lambda U^*)(U\ln(\Lambda)U^*)=U\Lambda\ln(\Lambda)U^*$$ and hence $$S(\rho)=-\sum_{i=1}^n\lambda_i\ln(\lambda_i).$$
Now consider $$\Lambda_1=\mathrm{diag}(\lambda_1^{(1)},\ldots,\lambda_n^{(1)})$$ and $$\Lambda_2=\mathrm{diag}(\lambda_1^{(2)},\ldots,\lambda_n^{(2)})$$. We have $$\begin{split} \ln(\Lambda_1\otimes\Lambda_2)&=\mathrm{diag}(\ln(\lambda_1^{(1)}\Lambda_2),\ldots,\ln(\lambda_n^{(1)}\Lambda_2))\\ &=\mathrm{diag}(\ln(\lambda_1^{(1)})I_n+\ln(\Lambda_2),\ldots,\ln(\lambda_n^{(1)})I_n+\ln(\Lambda_2))\\ &=\ln(\Lambda_1)\otimes I_n+I_n\otimes\ln(\Lambda_2) \end{split}$$ (here, we used just the "log-of-product-is-a-sum-of-logs" property of $$\ln$$). Hence (using again the mixed product property) $$\begin{split} (\Lambda_1\otimes\Lambda_2)(\ln(\Lambda_1\otimes\Lambda_2)) &= (\Lambda_1\otimes\Lambda_2)(\ln(\Lambda_1)\otimes I_n+I_n\otimes\ln(\Lambda_2))\\ &= (\Lambda_1\otimes\Lambda_2)(\ln(\Lambda_1)\otimes I_n)+(\Lambda_1\otimes\Lambda_2)(I_n\otimes\ln(\Lambda_2))\\ &= (\Lambda_1\ln(\Lambda_1))\otimes\Lambda_2+\Lambda_1\otimes(\Lambda_2\ln(\Lambda_2)) \end{split}$$ Therefore, with $$\rho=\rho_1\otimes\rho_2$$, using the useful fact, and the trace-of-a-Kronecker-product property, $$\begin{split} S(\rho)&=-\mathrm{tr}((\Lambda_1\ln(\Lambda_1))\otimes\Lambda_2+\Lambda_1\otimes(\Lambda_2\ln(\Lambda_2)))\\ &=-\mathrm{tr}(\Lambda_1\ln(\Lambda_1))\mathrm{tr}(\Lambda_2) -\mathrm{tr}(\Lambda_1)\mathrm{tr}(\Lambda_2\ln(\Lambda_2))\\ &=\mathrm{tr}(\Lambda_1)S(\rho_2)+\mathrm{tr}(\Lambda_2)S(\rho_1)\\ &=\mathrm{tr}(\rho_1)S(\rho_2)+\mathrm{tr}(\rho_2)S(\rho_1). \end{split}$$ If $$\mathrm{tr}(\rho_i)=\mathrm{tr}(\Lambda_i)=1$$ ($$i=1,2$$), then $$S(\rho)=S(\rho_1)+S(\rho_2).$$
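Since the spectrum of $\rho_1\otimes\rho_2$ consists of all pairwise products of the eigenvalues (as the eigen-decomposition above shows), the additivity can be sanity-checked numerically on spectra alone; a sketch with made-up eigenvalues:

```python
import math

def entropy(eigs):
    # S(rho) = -sum_i lambda_i ln(lambda_i), computed from the spectrum.
    return -sum(l * math.log(l) for l in eigs if l > 0)

# Hypothetical spectra of two density matrices (positive entries, trace 1):
rho1 = [0.7, 0.3]
rho2 = [0.5, 0.25, 0.25]

# The spectrum of the Kronecker product is all products lambda_i * mu_j:
kron = [a * b for a in rho1 for b in rho2]

print(math.isclose(entropy(kron), entropy(rho1) + entropy(rho2)))  # True
```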
https://synapse.koreamed.org/articles/1051731 | Lee, Kim, Park, and Kong: Development and Long Term Evaluation of a Critical Pathway for the Management of Microvascular Decompression
### Purpose
In order to provide a systematic and standardized treatment course for MVD patients, a critical pathway (CP) program was developed and the results of its long-term application were analyzed.
### Methods
This was a methodological study. A CP was established and applied to 75 patients (step I) and 1,216 patients (step II). Another group of 56 patients with similar features was used as a control group.
### Results
The application of CP turned out to be useful in many regards: the rate of hearing loss was reduced from 1.8% to 0% (step I) and 0.5% (step II), and low cranial nerve palsy was reduced from 1.8% to 1.3% and 0.7%, respectively. The length of hospitalization decreased by 2.56 days (25.2%) for step I and 3.05 days (30.0%) for step II. Days of ICU stay were reduced by 7.9% and 1.8%. The total cost per patient was reduced by 14.8% (step I). The cost per day was increased by 13.7% and 52.4%. An increase in the patient satisfaction index was noted, most markedly for the item on ICU guidance and information (p=.002).
### Conclusion
The development and application of CP was found to improve the quality of medical treatment and the efficacy of hospital management in MVD patients. A well organized and efficient system and multidisciplinary teamwork are the key components of the successful application of CP.
### INTRODUCTION
Microvascular decompression (MVD) is the standard treatment for hyper-functioning disorders of the cranial nerve roots such as hemifacial spasm (HFS) and trigeminal neuralgia (TN) (Li et al., 2004; Mauriello et al., 1996; McLaughlin et al., 1999; Mustafa, Weerden, & Mooij, 2003; Wang & Jankovic, 1998). HFS is an involuntary movement disorder in which spasms occur on one half of the face. Although this disease begins in the region around the eye, involuntary muscle spasms usually progress to involve the whole face, particularly around the eye, mouth, and even the neck (Wang & Jankovic, 1998). By way of contrast, TN creates intense pain along the branches of the trigeminal nerve that control facial senses. For the accurate diagnosis and effective treatment of these conditions, great efforts have been made by medical personnel and various diagnostic tools such as 3D spin echo magnetic resonance imaging (MRI) and electromyography (EMG) have been employed in attempts to achieve a successful treatment regimen.
Prior to the implementation of the critical pathway (CP), no standard guidelines had been established and public education was insufficient. In addition, the following problems existed. When patients visited the hospital for the first time, they were often subjected to great inconveniences; for example, some patients had to wait for very long times to receive examinations or treatment, and also frequently had to visit the clinic many times. Additionally, patients occasionally lacked information regarding the treatment process and the relevant operative risks. On the other hand, discomfort levels remained high until all spasms had completely disappeared after surgery (Goto, Matsushima, Natori, Inamura, & Tobimatsu, 2002; Ishikawa, Nakanishi, Takamiya, & Namiki, 2001; Samii et al., 2002). In such cases, finding a solution and preventing the commonly-encountered discomforts might prove helpful in reducing hospitalization periods, in addition to elevating patient satisfaction levels (Isla-Guerrero et al., 2001). Some studies have reported that a post-endoscopy checklist reduced the length of stay for non-variceal upper gastrointestinal bleeding (Romagnuolo et al., 2005), and that standardized patient care using a CP reduced length of stay and complication rates following bariatric surgery (Kim, 2010a; Yeats, Wedergren, Fox, & Thompson, 2005; Van Vliet et al., 2011; Zhang & Liu, 2011).
Therefore, the principal objective of this study was to enhance management efficiency during hospitalization and patient satisfaction via the use of a standard medical treatment guide for MVD. We also evaluated the treatment steps via the continuous application of these guidelines. In particular, a long-term investigation was conducted to validate the usefulness and efficacy of the MVD CP.
### METHODS

### 1. Study Design
This study was conducted to provide an effective management protocol for MVD patients. This was a methodological study (quasi-experimental study and long-term survey). We compared 56 patients treated prior to CP commencement (January 2001 to December 2001), 75 patients to whom the CP was applied (July 2002 to December 2002, step I), and 1,216 patients treated from January 2003 to December 2009 (step II). The sample comprised all patients who received MVD for HFS or TN during the study period.
This study was conducted at a single institute in Korea. The setting was a neurosurgical unit comprising an outpatient unit, general wards, and an intensive care unit (ICU). To detect the expected mean differences among the three groups in complications (step I: 1, step II: 5, SD: 1.0) and hospital duration (step I: 2.5, step II: 3, SD: 2), the required sample size per group was calculated at a significance level of 0.05 with a power of 90%. The quality of treatment was evaluated in terms of the frequency of complications. The efficacy of hospital management was evaluated by the length of stay in hospital, the length of stay in the ICU, the total cost per patient, and the cost per day. CP application was evaluated via variation analysis (step I and the year 2003) and measurements of patient satisfaction.
#### 1) Stage I: Development of critical pathway
In January 2002, with the objectives of establishing medical treatment flow and improving efficacy, a team of 14 members was formed (2 neurosurgeons, 1 professor of nursing, 2 nurse managers, 4 registered nurses [1 from the neurosurgical ICU, 2 from general wards, and 1 from the outpatient unit], 1 clinical nurse specialist [CNS], 3 laboratory technicians, and 1 hospital administrator). This team established a plan and studied the task by attending lectures and via a literature review.
After careful study, we established a standard treatment guide for MVD. The team then evaluated treatment progress by reviewing the relevant documents and charts. The medical records of 15 patients who underwent MVD during the January-December 2000 period and who fulfilled the selection criteria were reviewed. Medical record analysis consisted of 73 items (8 items on measures/observations, 3 on activity/rest, 4 on diet/nutrition, 23 on medication, 14 on laboratory tests, 12 on treatment, 2 on interdepartmental consults, and 7 on patient education). A pilot CP was composed from the literature review, modification of preexisting CPs for other diseases, and the opinions of the medical team. The x-axis of the pilot CP represented the time frame, whereas the y-axis listed the treatment and nursing items. The devised CP was a systematically organized schedule running from the initial clinic visit to discharge, designed to assist in decisions concerning observation and measurement, activity, diet, medication, lab & tests, treatments & procedures, consultation, and education. The table was completed by placing the dates in rows and the eight items in columns. All data were registered in the database, and practitioners were able to use the package easily.
Each item was finalized after evaluation by staff nurses and a neurosurgeon, validation by a professional group (2 neurosurgeons, 1 professor of nursing, 2 nurse managers, 4 registered nurses, 1 CNS), and the approval of the CP development team. Five neurosurgeons and 62 nurses were recruited and educated regarding the objective of the study, the concept of CP, the development and application of CP, patient/family education methods, and variations. To analyze its clinical adaptability, the pilot CP was tested on 17 cases before being finalized as the MVD CP. To analyze the variations occurring during the application of CP, a modified variation record was used (Beyea, 1996). The CP for patients was written so that it could be readily understood, using simple words and pictures.
Additionally, we planned an intra-operative monitoring system (facial evoked EMG and brainstem auditory evoked potential [BAEP]) that was used to assess the intra-operative status of muscles and nerves.
A website was developed by the CP team to provide information. The associated educational information system was composed of an educational brochure that included a CP for patients and a website (http://facialspasm.samsunghospital.com). Education regarding disease and treatment was provided to patients at the outpatient clinic throughout the hospitalization period using an educational brochure and a website.
#### 2) Stage II: Application of critical pathway and measurement of the results
We analyzed independent variables affecting treatment and economic outcomes, which included the incidence of complications, length of hospitalization, individual medical costs, the number of operations, CP variations, and patient satisfaction before and after CP initiation. Postoperative complications were monitored continuously by the neurosurgeon and the nurse. Hearing loss was confirmed by audiometry on the third day after surgery, and low cranial nerve palsy was examined by ear, nose, and throat (ENT) doctors. The number of operations was defined as the total number of patients who received MVD for HFS or TN. Length of hospitalization was defined as the total number of days spent in hospital, including the ICU and excluding outpatient department (OPD) visits. ICU stay was computed from entry into the unit until transfer to the general ward. Medical costs per person were defined as the total charges incurred during the hospital stay. The daily treatment cost was defined as the cost per patient per day, calculated by dividing the total treatment cost by the days of stay.
Step I variation analysis was categorized into three groups: 'type', 'detailed fact', and 'grade'. The type of variation was classified into patient/family, medical attendance, and hospital. The detailed fact was classified into assessment, test, treatment, medication, diet, activity, interdepartmental cooperative treatment, nursing/education, discharge plan, record, treatment schedule, and communication. The grade category was divided into three degrees: grade 1, a slight change occurred but still corresponded to the CP; grade 2, a slight change did not correspond to the CP, but the CP could still be used; and grade 3, proper application of the CP was impossible (Kim, 2010a). Step II variation analysis was recorded in terms of change content by the CNS.
A patient satisfaction questionnaire was developed to evaluate the quality of care provided in the hospital. Patient satisfaction with the care provided by various health care professionals was measured with five-point questions. A score of 5 was very satisfactory and a score of 1 was not satisfactory. The tool consisted of 12 items, and the reliability as measured by Cronbach's α value was .912.
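As an illustration, Cronbach's α for a k-item questionnaire such as this one can be computed from the item variances and the variance of the total score. The sketch below (assuming NumPy) is a generic implementation run on made-up 5-point scores, not the study's actual data:

```python
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance).
# The demo matrix (respondents x items) is illustrative only.
def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

demo = np.array([[5, 4, 5],
                 [4, 4, 4],
                 [2, 3, 2],
                 [3, 3, 3]])
alpha = cronbach_alpha(demo)   # high alpha: the three items move together
```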
We applied and evaluated the guidelines for step I and step II. Compared with step I, improvements were noted during step II, such as a revision of the CP, modification of the contents of the educational material and website, increased research work, and strengthened teamwork. Difficulties in communication ensued when team members changed; we communicated continuously about variances between team members, and the CNS provided information to new members.
### 2. Ethical Considerations
The study was conducted after obtaining approval from the ethical committee of our hospital. In addition, permission to conduct this study was obtained from the board of directors of the nursing department.
### 3. Data Analysis
SPSS version 19.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Based on these data, we carried out statistical analysis using χ2 with Fisher's exact tests and one-way ANOVA with Bonferroni's correction. Variation was analyzed by frequency and percentage. Patient satisfaction was analyzed via ANOVA. Reliability was analyzed by Cronbach's α.
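For illustration, the F statistic of a one-way ANOVA like the one named above can be computed by hand from between-group and within-group sums of squares. The sketch below (assuming NumPy) uses made-up length-of-stay values for the three groups, not the study's data:

```python
import numpy as np

# One-way ANOVA F statistic, computed from first principles.
def f_oneway(*groups):
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    # Between-group sum of squares: group sizes times squared mean offsets.
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

control = np.array([10.0, 11.0, 9.0, 10.0])   # illustrative days of stay
step1 = np.array([8.0, 7.0, 8.0, 7.0])
step2 = np.array([7.0, 7.0, 6.0, 8.0])
f_stat = f_oneway(control, step1, step2)
```

A large F relative to the F distribution with (2, 9) degrees of freedom would indicate a significant difference in mean stay across the three groups.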
### RESULTS

### 1. Stage I. Development of Critical Pathway
A set of indicators for evaluating the effects of the CP model application was designated. The x-axis represents the chronological process of the 8-day hospital stay, with details on daily treatment and nursing. The y-axis of the final CP consists of observation and measurement, activity, diet, medication, lab & tests, treatment/procedures, consultation, and education. Development and implementation took place over a total of 7 years (Figure 1).
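As a rough illustration (hypothetical, not the hospital's actual software), the CP table described here, with hospital days on one axis and the eight care categories on the other, maps naturally onto a nested dictionary. The entry names below are illustrative, not the actual CP items:

```python
# A minimal sketch of the CP data structure: day -> category -> list of items.
categories = [
    "observation and measurement", "activity", "diet", "medication",
    "lab & tests", "treatment/procedures", "consultation", "education",
]

# Eight hospital days, each with an empty slot per care category.
cp = {day: {cat: [] for cat in categories} for day in range(1, 9)}
cp[1]["education"].append("CP brochure and website orientation")
cp[2]["treatment/procedures"].append("microvascular decompression")

# Everything scheduled for a given day can then be listed directly:
day2 = {cat: items for cat, items in cp[2].items() if items}
```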
### 2. Stage II. The Effect of Application of Critical Pathway
#### 1) Characteristics of the patients
The 56 patients before the application of CP, the 75 patients during step I, and the 1,216 patients during step II comprised the study cohort, which consisted of 402 (29.8%) men and 945 (70.2%) women, with a mean age of 49.7 years (range 19 to 79 years). No significant differences were noted between groups (p values for step I and step II, respectively) in age (p=.620, p=.313), gender (p=.569, p=.761), diagnosis (p=1.000, p=.167), symptom location (p=.294, p=.418), symptom duration (p=.887, p=.912), hypertension history (p=.641, p=.278), or diabetes mellitus (p=.423, p=.515) (Table 1).
#### 2) Incidence of complications
We found that the incidence of hearing loss was reduced from 1 (1.8%) in the control group to 0 (0.0%) for step I and to 6 (0.5%) for step II. However, these differences were not found to be significant (p=.421, p=.272). The incidence of lower cranial nerve palsy was reduced from 1 (1.8%) in the control group to 1 (1.3%) for step I and to 8 (0.7%) for step II; these differences were not significant (p=1.000, p=.331). No decrease in delayed facial palsy (transient) was noted (p=1.000, p=.312) (Table 2).
#### 3) Cost effectiveness: Length of hospitalization and medical costs
The number of operations per month was 4.6 in the control group, and increased to 12.5 for step I and to 14.3 for step II. The mean number of operations for step II increased by 9.7 (210.0%) compared to 2001, averaging 173.7 patients a year. The total hospital stay was shortened by 2.56 days (25.2%) for step I (p<.001) and 3.05 days (30.0%) for step II (p<.001). Days of ICU stay were 1.14 in the control group, 1.05 in step I (p=.261), and 1.12 in step II (p=.721); the ICU stay for step II was reduced by 0.02 days (1.8%). The total cost per patient in step I was reduced by $738 (14.8%) (p<.001), whereas during 2003~2005 (step II) it marked an average annual increase of $310 (6.2%) (p=.022). The cost per day was increased by $69 (13.7%) for step I (p<.001) and by $264 (52.4%) for step II (p<.001) (Table 3).
#### 4) Analysis of variations
During the application of CP to 197 patients in step I (75) and step II (122, year 2003 only), 1,425 variation incidents were observed in step I and 1,465 in step II. Details were as follows: 150 incidents (10.5%) in step I and 696 (47.5%) in step II involved the patient or family, owing to the patient's condition; 1,275 incidents (89.5%) in step I and 769 (52.5%) in step II involved the medical attendance, as the result of a doctor's prescription. Classified by grade, all 1,425 incidents (100%) in step I and 1,464 (99.9%) in step II fell into the grade 1 category, in which the application was slightly modified although the main content of the CP remained, and 1 incident (0.1%) was classified as grade 3, in which CP application was not possible. Meanwhile, variations during the 2003~2009 period were observed in 12 items: observation/measurement 1, medication 8, and lab & tests 3. Towne's view was added. The medications changed were the pre-operative main fluid, antacid, and coagulant. The schedules shortened were intake/output, post-operative main fluid, antibiotics, osmotic diuretics, steroid, discharge medication, and blood tests (ABGA, CBC, and serum & urine electrolytes/osmolality). The temporal bone CT schedule was changed (Table 4).
#### 5) Patient satisfaction
The average satisfaction rates for step I and step II (the year 2003) versus the control group increased slightly in all items, but not significantly so. One item from the questionnaire that did differ significantly was increased satisfaction with the guidance and information provided when patients were hospitalized in the ICU (p=.002). This increased satisfaction implies a reduction in patient anxiety and illustrates the importance of providing adequate information beforehand.
### DISCUSSION
CPs have been defined as "systematically developed statements that assist practitioner and patient decision-making about appropriate health care in specific clinical circumstances". Some pressure is currently being exerted to develop guidelines by which the management of many medical and surgical conditions can be improved (Cheah, 2000; Mitchell et al., 2005; Park & Ro, 2000). We developed a CP for patients with HFS and TN who underwent MVD. The devised CP could alter clinical practices and improve patients' outcomes for this condition. Moreover, this application of CP reduced the incidence of complications, hospitalization duration, and patients' medical costs, and also improved patient satisfaction.
The principal complications were hearing loss and cranial nerve palsy. Previous reports demonstrated that hearing loss occurred in 0.3~4.8% of patients and that low cranial palsy occurred in 4% of patients (Acevedo, Sindou, Fischer, & Vial, 1997; Chung, Chang, Choi, Chang, & Park, 2001; Wang & Jankovic, 1998). The rate of hearing loss was 1.8% in the control group. On the other hand, this rate was reduced in the application group (step I: 0%, step II: 0.5%). Similarly, the rate of low cranial nerve palsy decreased (control group: 1.8%, step I: 1.3%, step II: 0.7%). This decrease was attributed to the prompt response and continuous clinical intervention mandated by the systematic treatment plan at each stage, which was targeted toward the prevention of complications. These results are consistent with previous studies (Ball & Peruzzi, 1997; Müller et al., 2009; Rotter et al., 2010).
In the CP application group, the total hospital stay was reduced by 2.56 days in step I and 3.05 days in step II. The total cost per patient in step I was reduced by 14.8%, whereas for step II it marked an average annual increase of 6.2%. This was attributed to rising medical costs per person in Korea (9.3%) between 2000 and 2009 (Ministry of Health and Welfare, 2013). This suggested an increased profit rate for the hospital and confirmed the 1999 proposal of Rohrbach. The cost per day was increased by 13.7% (step I) and by 52.4% (step II). These results are consistent with previous studies (Rotter et al., 2010; Van Vliet et al., 2011; Oreja-Guevara et al., 2010; Panella, Marchisio, & Di Stanislao, 2003).
The number of operations per month was increased by 9.7 (210.0%) as compared to the year 2001, averaging 173.7 patients a year. This was attributed to the effects of education and public information including web and telephone counseling and the implementation of an effective management schedule. Moreover, it is anticipated that the performance of preliminary examinations and the management of hospitalization periods will increase the number of surgeries and improve bed-occupancy (Oreja-Guevara et al., 2010; Owen et al., 2006). Patients that undergo MVD have usually received other alternative treatment modalities including perennial treatment with Chinese medicine and ineffective physical therapy; often, these patients were originally incorrectly diagnosed or misadvised about their condition or the appropriate treatment method. Therefore, an accurate diagnosis and an active information system for this malady will be necessary. Thus, the maintenance of a website and the mailing of educational books will undoubtedly prove helpful in the creation of an information system for affiliated hospitals. Moreover, other ideas concerning information systems should also be taken into consideration (Van Vliet et al., 2011).
A total of 1,425 incidents of variations were observed among 75 patients. The severity and number of variations were insignificant relative to other proposals (Kim, 2010a, 2010b). Although an attempt was made to modify the CP based on the results of analysis of the variations, no items were modified. This was because the incidents (100.0%) were all first-degree variations, which did not affect the application of the CP. Meanwhile, variations during the period of 2003 to 2009 were observed in 12 items: observation/measurement 1, medication 8, and lab & test 3. This is a very important reason to develop CP (Cheah, 2000; Panella et al., 2003).
The value of the devised CP lies in the construction of a system based on teamwork by many health care professionals (Barbieri et al., 2009). By improving patient education programs throughout the stages of hospitalization (Owen et al., 2006; Van Vliet et al., 2011), nursing in the ICU and in general wards, and management following discharge, the devised CP enhanced the cost-effectiveness of patient care (Kim, 2010b). The key factor underlying the success of this system was the collaborative relationship among health care professionals. However, it is by no means clear that all the positive results were attributable to the implementation of the pathway (Van Vliet et al., 2011); in fact, these results may have been attributable to the surgeon's experience. Further evaluation in this regard will be necessary, but the CP has supported the surgeons' experience and contributed to the positive results of this study (Kim, 2010a). Notably, a common problem is that a developed CP often goes unused in practice (Lim, 2006); an advantage of this MVD CP is that it has been used in practice over a long period. The significance of this paper lies in its long-term evaluation over eight years. This methodological study has established the effectiveness of the CP.
In the future, quality improvement (QI) activity should be calibrated to ensure that continuous efforts are made to increase efficiency and teamwork, thus improving the quality of medical treatment and patient satisfaction level.
### CONCLUSION
This study verified that the development and long-term application of the CP for MVD could significantly improve the quality of medical intervention and the efficiency of hospital management by standardizing the patient care system. In particular, the key components of the successful application of CP are the active participation of the responsible doctors and CNS, and their teamwork. Indeed, we propose that the role of the CNS is critical in maximizing the effect of long-term CP application. Although CP has been regarded as inappropriate for long-term application due to variation, very promising results were obtained; to apply CP over the long term, the role of the CNS is therefore very important. Finally, a well organized and efficient system and multidisciplinary teamwork are required to implement the system successfully.
### Figures and Tables
##### Figure 1
Critical pathway of microvascular decompression.
ABGA=Arterial blood gas analysis; A/C & D/B=Active cough & deep breathing; BST=Blood sugar test; CBC=Complete blood count; CT=Computed tomography; D/C=Discontinue; EKG=Electrocardiography; EMG=Electromyography; ENT=Ear, nose and throat; GCS=Glasgow coma scale; GW=General ward; HD=Hospital day; IAC MRI=Internal auditory canal MRI; ICU=Intensive care unit; I/O=Intake/output; iv=Intravenous; L/M=Limb movement; NCS=Nerve conduction study; NPO=Nil per os; N/S=Normal saline (Sodium chloride 0.9%); Op=Operation; opd=Outpatient department; P/S & L/R=Pupil size & Light reflex; PTA/SA=Pure tone audiometry/speech audiometry; SBP=Systolic blood pressure; SOW=Sips of water; SpO2=Pulse oximetry.
##### Table 1
General Characteristics of the Patients (N=1,347)
Cont.=Control group; DM=Diabetes mellitus; HFS=Hemifacial spasm; TN=Trigeminal neuralgia.
*Dunnett t.
##### Table 2
Comparison on the Frequency of Complications (N=1,347)
Cont.=Control group.
##### Table 3
Comparison with Length of Hospital Stay and Costs by Year (N=1,347)
*Sensitization (%): contrast from 2001; Dunnett t.
##### Table 4
Variation of Clinical Pathway of Microvascular Decompression (N=197)
A=Patient's condition; B=Decision of patient/family; C=Doctor's prescription; #=Hospital day; ⓐ=Ampule; ABGA=Arterial blood gas analysis; CBC=Complete blood count; CT=Computed tomography; iv=Intravenous; KVO=Keep vein open; m=minute; N/S=Normal saline (Sodium chloride 0.9%); OPD=Outpatient department; po=Per os; S/U electro & Osm=Serum/Urine electrolyte & Osmol; ⓣ=Tablet.
### References
1. Acevedo JC, Sindou M, Fischer C, Vial C. Microvascular decompression for the treatment of hemifacial spasm. Retrospective study of a consecutive series of 75 operated patients: electrophysiologic and anatomical surgical analysis. Stereotact Funct Neurosurg. 1997; 68:260–265. http://dx.doi.org/10.1159/000099936.
2. Ball C, Peruzzi M. Case management improves congestive heart failure outcomes. Nurs Case Manag. 1997; 2:68–74.
3. Barbieri A, Vanhaecht K, Van Herck P, Sermeus W, Faggiano F, Marchisio S, et al. Effects of clinical pathways in the joint replacement: A meta-analysis. BMC Med. 2009; 7:32. http://dx.doi.org/10.1186/1741-7015-7-32.
4. Beyea SC. Critical pathways for collaborative nursing care. New York: Addison-Wesley Publishing Company;1996.
5. Cheah J. Development and implementation of a clinical pathway programme in an acute care general hospital in Singapore. Int J Qual Health Care. 2000; 12:403–412. http://dx.doi.org/10.1093/intqhc/12.5.403.
6. Chung SS, Chang JH, Choi JY, Chang JW, Park YG. Microvascular decompression for hemifacial spasm: A long-term follow-up of 1,169 consecutive cases. Stereotact Funct Neurosurg. 2001; 77:190–193. http://dx.doi.org/10.1159/000064620.
7. Goto Y, Matsushima T, Natori Y, Inamura T, Tobimatsu S. Delayed effects of the microvascular decompression on hemifacial spasm: A retrospective study of 131 consecutive operated cases. Neurol Res. 2002; 24:296–300.
8. Isla-Guerrero A, Chamorro-Ramos L, Alvarez-Ruiz F, Aranda-Armengod B, Sarmiento-Martínez MA, Pérez-Alvarez M, et al. Design, implementation, and results of the clinical pathway for herniated lumbar disk. Neurocirugia (Astur). 2001; 12:409–418.
9. Ishikawa M, Nakanishi T, Takamiya Y, Namiki J. Delayed resolution of residual hemifacial spasm after microvascular decompression operations. Neurosurgery. 2001; 49:847–854.
10. Kim JS. Development of a critical pathway and its application for the management of subarachnoid hemorrhage. J Korean Data Anal Soc. 2010; 12(1):1–16.
11. Kim JS. Development of a critical pathway of barbiturate coma therapy in the management for severe brain damage. J Korean Acad Nurs Adm. 2010; 16:59–72. http://dx.doi.org/10.11111/jkana.2010.16.1.59.
12. Li ST, Pan Q, Liu N, Shen F, Liu Z, Guan Y. Trigeminal neuralgia: What are the important factors for good operative outcomes with microvascular decompression. Surg Neurol. 2004; 62:400–404.
13. Lim YJ, Jeong KI, Jeong HY, Sun JJ, Kim YK, Choi JK, et al. Analysis of performance on activities in critical pathway of total hip replacement surgery. J Korean Acad Adult Nurs. 2006; 18:819–827.
14. Mauriello JA Jr, Dhillon S, Pakeman B, Mostafavi R, Yepez MC. Treatment choices of 119 patients with hemifacial spasm over 11 years. Clin Neurol Neurosurg. 1996; 98:213–216. http://dx.doi.org/10.1016/0303-8467(96)00025-X.
15. McLaughlin MR, Jannetta PJ, Clyde BL, Subach BR, Comey CH, Resnick DK. Microvascular decompression of cranial nerves: Lessons learned after 4400 operations. J Neurosurg. 1999; 90:1–8.
16. Ministry of Health & Welfare. Indicators of public health care at a glance. 2013. Retrieved November 21, 2013. from http://www.mw.go.kr/front_new/al/sal0301vw.jsp?PAR_MENU_ID=04&MENU_ID=0403&CONT_SEQ=293922&page=1.
17. Mitchell EA, Didsbury PB, Kruithof N, Robinson E, Milmine M, Barry M, et al. A randomized controlled trial of an asthma clinical pathway for children in general practice. Acta Paediatr. 2005; 94:226–233. http://dx.doi.org/10.1111/j.1651-2227.2005.tb01896.x.
18. Müller MK, Dedes KJ, Dindo D, Steiner S, et al. Impact of clinical pathways in surgery. Langenbecks Arch Surg. 2009; 394:31–39. http://dx.doi.org/10.1007/s00423-008-0352-0.
19. Mustafa MK, van Weerden TW, Mooij JJ. Hemifacial spasms caused by neurovascular compression. Ned Tijdschr Geneeskd. 2003; 147:273–277.
20. Oreja-Guevara C, Miralles A, Garcia-Caballero J, Noval S, Gabaldon L, Esteban-Vasallo MD. Clinical pathways for the care of multiple sclerosis patients. Neurologia. 2010; 25:156–162. http://dx.doi.org/10.1016/S2173-5808(10)70031-6.
21. Owen JE, Walker RJ, Edgell L, Collie J, Douglas L, Hewitson TD. Implementation of a pre-dialysis clinical pathway for patients with chronic kidney disease. Int J Qual Health Care. 2006; 18:145–151. http://dx.doi.org/10.1093/intqhc/mzi094.
22. Panella M, Marchisio S, Di Stanislao F. Reducing clinical variations with clinical pathways: Do pathways work? Int J Qual Health Care. 2003; 15:509–521. http://dx.doi.org/10.1093/intqhc/mzg057.
23. Park HO, Ro YJ. Development of case management using critical pathway of posterolateral fusion for lumbar spinal stenosis. J Korean Acad Adult Nurs. 2000; 12:727–740.
24. Romagnuolo J, Flemons WW, Perkins L, Lutz L, Jamieson PC, Hiscock CA, et al. Post-endoscopy checklist reduces length of stay for non-variceal upper gastrointestinal bleeding. Int J Qual Health Care. 2005; 17:249–254. http://dx.doi.org/10.1093/intqhc/mzi023.
25. Samii M, Gunther T, Iaconetta G, Muehling M, Vorkapic P, Samii A. Microvascular decompression to treat hemifacial spasm: Long-term results for a consecutive series of 143 patients. Neurosurgery. 2002; 50:712–718.
26. Van Vliet EJ, Bredenhoff E, Sermeus W, Kop LM, Sol JC, Van Harten WH. Exploring the relation between process design and efficiency in high-volume cataract pathways from a lean thinking perspective. Int J Qual Health Care. 2011; 23:83–93. http://dx.doi.org/10.1093/intqhc/mzq071.
27. Wang A, Jankovic J. Hemifacial spasm: Clinical findings and treatment. Muscle Nerve. 1998; 21:1740–1747. http://dx.doi.org/10.1002/(SICI)1097-4598(199812)21:12<1740::AID-MUS17>3.0.CO;2-V.
28. Yeats M, Wedergren S, Fox N, Thompson JS. The use and modification of clinical pathways to achieve specific outcomes in bariatric surgery. Am Surg. 2005; 71:152–154.
29. Zhang AH, Liu XH. Clinical pathways: Effects on professional practice, patient outcomes, length of stay and hospital costs. Int J Evid Based Healthc. 2011; 9:191–192. http://dx.doi.org/10.1002/14651858.CD006632.pub2.
https://thoughtstreams.io/jtauber/versioned-literate-programming-for-tutorials/1132/ | Versioned Literate Programming for Tutorials
last posted July 15, 2014, 12:48 p.m.
In tutour, I've started down the path of annotating SHAs after the fact, but another approach (almost the dual) would be to write a literate program containing diffs and then being able to generate the code at any step from there.
It's more natural in some respects but more cumbersome in others.
11 later thoughts | 2020-05-28 07:57:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8539467453956604, "perplexity": 4502.622136742957}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00544.warc.gz"} |
https://answerstu.com/keywords-derivative-1 | ### How can I get a time series graph of the derivative of a data set using Graphana and InfluxDB
I have a process which loads RXBYTES and TXBYTES from a Linux server's interface info every 5 seconds... I would like to create a graph in Grafana which will show JUST the difference between each data point, i.e.: (target point - previous point)/time interval. It looks like the derivative() function in InfluxDB should do exactly this, but I cannot get it to work. The query I built in Grafana is like this: select derivative(value) from "stats.bandwidth.home.br0.rx.gauge" where time>now() - 1h group by time(10s) order asc. The results of that que...Read more
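Independently of the InfluxDB query syntax, the per-interval rate described here is just a first difference divided by the sampling interval; a minimal Python sketch (the counter values are made-up illustration data):

```python
def rates(samples, interval_s):
    """Per-interval rate between consecutive counter samples:
    (current - previous) / interval, in units per second."""
    return [(b - a) / interval_s for a, b in zip(samples, samples[1:])]

# Hypothetical RX byte counters sampled every 5 seconds.
rx_bytes = [1000, 6000, 6000, 21000]
rx_rates = rates(rx_bytes, 5)  # [1000.0, 0.0, 3000.0] bytes/second
```

This is the same quantity a database-side derivative function is meant to return per time bucket.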
### (Openmdao 2.4.0) difference between providing no derivatives / forcing FD on disciplines with derivatives
This question is in line with this one but it is not the same. The objective is still for student purposes! Still playing with the Sellar problem, I compared 2 different problems: problem 1: MDA of Sellar without derivative information on Disciplines, with Newton solver as NonlinearSolver; problem 2: MDA of Sellar with derivative information on Disciplines, with Newton solver as NonlinearSolver, but with the option declare_partials('', '', method='fd') on each discipline. In the problem level for both, the linearsolver is the same and both c...Read more
### derivative - (Openmdao 2.4.0) 'compute_partials' function of a Component seems to be run even when forcing 'declare_partials' to FD for this component
I want to solve the MDA for Sellar using a Newton nonlinear solver for the Group. I have defined Disciplines with Derivatives (using 'compute_partials') but I want to check the number of calls to Discipline 'compute' and 'compute_partials' when forcing or not forcing the disciplines to use their analytical derivatives (using 'declare_partials' in the Problem definition). The problem is that it seems that the 'compute_partials' function is still called even though I force not to use it. Here is an example (Sellar). So for Discipline 2, I add a counter an...Read more
### openmdao - Derivative check with scalers
I have a problem that I want to scale the design variables. I have added the scaler, but I want to check the derivative to make sure it is doing what I want it to do. Is there a way to check the scaled derivative? I have tried to use check_total_derivatives() but the derivative is the exact same regardless of what value I put for scaler:from openmdao.api import Component, Group, Problem, IndepVarComp, ExecCompfrom openmdao.drivers.pyoptsparse_driver import pyOptSparseDriverclass Scaling(Component): def __init__(self): super(Scaling, s...Read more
### derivative - How to derive an angle to time in mupad
So I have a pretty nasty function with sines and cosines that represents the position of some point in a certain system. Now that I know the location of the point dependent on angle Beta, I wish to differentiate the function to find the speed. The problem is that mupad thinks that Beta is a constant when you try to differentiate it with respect to time. Obviously the derivative of Beta is the angular velocity. But how do I tell this to mupad? This is the code I have so far. reset();eq:=(a/cos(Beta))^2=(a/cos(Alpha))^2+d^2-2*a/cos(Alpha)*d*sin(Alpha);Ex:=-a+Lb*cos(Beta);a:=s...Read more
### derivative - Can I enhance one LGPL library based of implementation of another?
I was wondering if it was legal/not frowned upon to base enhancements to one LGPL library off of the functionality of another LGPL library. Note that because of the method of implementation, the source code could not be directly built off of, however the general idea is to essentially implement similar functionality in another library based off of the functionality in the original library, without copying the implementation or directly using the other library.An example of what I'm thinking of is:Both libraries are covered by the LGPL:Library 1...Read more
### Bilinear Transform (Tustin's Method) applied to the Derivative
I hope that I have not misunderstood something terribly, but the continuous derivative $D=d/dt$ can be considered a transfer function in Laplace space, $D(s) = s$, right? So when I try to discretize it using the bilinear transform (Tustin's method) I trivially get $D(z) = \frac{2}{T} \frac{1-z^{-1}}{1+z^{-1}}$. When I apply this to a series containing one discrete impulse, the response oscillates at the Nyquist frequency. Even worse, the spectrum around $\omega=0$ is quadratic and not $\sim i\omega$ like it would be expected from the derivativ...Read more
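The oscillation described here is easy to reproduce: cross-multiplying $D(z)$ gives the difference equation $y[n] = \frac{2}{T}(x[n]-x[n-1]) - y[n-1]$, and feeding it a unit impulse yields a response whose sign alternates every sample, i.e. it rings at the Nyquist rate. A quick sketch with $T=1$:

```python
def tustin_derivative(x, T=1.0):
    """Difference equation of D(z) = (2/T) * (1 - z^-1) / (1 + z^-1):
    y[n] = (2/T) * (x[n] - x[n-1]) - y[n-1]."""
    y, x_prev, y_prev = [], 0.0, 0.0
    for xn in x:
        yn = (2.0 / T) * (xn - x_prev) - y_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

impulse = [1.0] + [0.0] * 5
resp = tustin_derivative(impulse)  # [2.0, -4.0, 4.0, -4.0, 4.0, -4.0]
```

The pole at $z=-1$ (the denominator $1+z^{-1}$) is exactly on the unit circle at the Nyquist frequency, which is why the impulse response never decays.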
### Matrix Representation of Softmax Derivatives in Backpropagation
I have a simple multilayer fully connected neural network for classification. At the last layer I have used softmax activation function. So I have to propagate the error through the softmax layer. Suppose, I have 3 softmax units at the output layer. Input to these 3 logits can be described by the vector $z =\begin{pmatrix}z1\\z2\\z3\end{pmatrix}$. Now let's say those 3 logits output $y = \begin{pmatrix}y1\\y2\\y3\end{pmatrix}$. Now I want to calculate $\frac{\partial y}{\partial z}$. Which is simply: \\ \frac{\partial }{\...Read more | 2020-03-29 05:16:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7109649777412415, "perplexity": 1348.4123489166516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493818.32/warc/CC-MAIN-20200329045008-20200329075008-00264.warc.gz"} |
https://math.libretexts.org/TextMaps/Algebra_TextMaps/Map%3A_Elementary_Algebra_(OpenStax)/11%3A_Systems_of_Equations_and_Inequalities/11.4%3A_Systems_of_Nonlinear_Equations_and_Inequalities_-_Two_Variables |
11.4: Systems of Nonlinear Equations and Inequalities - Two Variables
Skills to Develop
In this section, you will:
• Solve a system of nonlinear equations using substitution.
• Solve a system of nonlinear equations using elimination.
• Graph a nonlinear inequality.
• Graph a system of nonlinear inequalities.
Halley’s Comet (Figure $$\PageIndex{1}$$) orbits the sun about once every 75 years. Its path can be considered to be a very elongated ellipse. Other comets follow similar paths in space. These orbital paths can be studied using systems of equations. These systems, however, are different from the ones we considered in the previous section because the equations are not linear.
Figure $$\PageIndex{1}$$: Halley’s Comet (credit: "NASA Blueshift"/Flickr)
In this section, we will consider the intersection of a parabola and a line, a circle and a line, and a circle and an ellipse. The methods for solving systems of nonlinear equations are similar to those for linear equations.
Solving a System of Nonlinear Equations Using Substitution
A system of nonlinear equations is a system of two or more equations in two or more variables containing at least one equation that is not linear. Recall that a linear equation can take the form $$Ax+By+C=0$$. Any equation that cannot be written in this form is nonlinear. The substitution method we used for linear systems is the same method we will use for nonlinear systems. We solve one equation for one variable and then substitute the result into the second equation to solve for another variable, and so on. There is, however, a variation in the possible outcomes.
Intersection of a Parabola and a Line
There are three possible types of solutions for a system of nonlinear equations involving a parabola and a line.
POSSIBLE TYPES OF SOLUTIONS FOR POINTS OF INTERSECTION OF A PARABOLA AND A LINE
Figure $$\PageIndex{2}$$ illustrates possible solution sets for a system of equations involving a parabola and a line.
• No solution. The line will never intersect the parabola.
• One solution. The line is tangent to the parabola and intersects the parabola at exactly one point.
• Two solutions. The line crosses on the inside of the parabola and intersects the parabola at two points.
Figure $$\PageIndex{2}$$
How to: Given a system of equations containing a line and a parabola, find the solution
1. Solve the linear equation for one of the variables.
2. Substitute the expression obtained in step one into the parabola equation.
3. Solve for the remaining variable.
4. Check your solutions in both equations.
Example $$\PageIndex{1}$$: Solving a System of Nonlinear Equations Representing a Parabola and a Line
Solve the system of equations.
\begin{align} x−y &= −1\nonumber \\ y &= x^2+1 \nonumber \end{align}
Solution:
Solve the first equation for $$x$$ and then substitute the resulting expression into the second equation.
\begin{align} x−y &=−1\nonumber \\ x &= y−1 \;\; & \text{Solve for }x.\nonumber \\\nonumber \\ y &=x^2+1\nonumber \\ y & ={(y−1)}^2+1 \;\; & \text{Substitute expression for }x. \nonumber \end{align}
Expand the equation and set it equal to zero.
\begin{align} y & ={(y−1)}^2+1\nonumber \\ &=(y^2−2y+1)+1\nonumber \\ &=y^2−2y+2\nonumber \\ 0 &= y^2−3y+2\nonumber \\ &= (y−2)(y−1) \nonumber \end{align}
Solving for $$y$$ gives $$y=2$$ and $$y=1$$. Next, substitute each value for $$y$$ into the first equation to solve for $$x$$. Always substitute the value into the linear equation to check for extraneous solutions.
\begin{align} x−y &=−1\nonumber \\ x−(2) &= −1\nonumber \\ x &= 1\nonumber \\ x−(1) &=−1\nonumber \\ x &= 0 \nonumber \end{align}
The solutions are $$(1,2)$$ and $$(0,1)$$,which can be verified by substituting these $$(x,y)$$ values into both of the original equations (Figure $$\PageIndex{3}$$).
Figure $$\PageIndex{3}$$
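The substitution in Example $$\PageIndex{1}$$ reduces the system to the quadratic $$y^2−3y+2=0$$; a short Python sketch that solves it with the quadratic formula and checks both points against the original equations:

```python
import math

# Substituting x = y - 1 into y = x^2 + 1 gives y^2 - 3y + 2 = 0.
a, b, c = 1.0, -3.0, 2.0
disc = b * b - 4 * a * c
ys = sorted([(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)])
solutions = [(y - 1, y) for y in ys]  # x = y - 1 from the linear equation

# Each solution must satisfy both original equations.
for x, y in solutions:
    assert math.isclose(x - y, -1.0) and math.isclose(y, x * x + 1)
```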
Q&A
Could we have substituted values for $$y$$ into the second equation to solve for $$x$$ in the last example?
Yes, but because $$x$$ is squared in the second equation this could give us extraneous solutions for $$x$$.
For $$y=1$$
\begin{align} y &= x^2+1\nonumber \\ 1 &= x^2+1\nonumber \\ x^2 &= 0\nonumber \\ x &= \pm \sqrt{0}=0 \nonumber \end{align}
This gives us the same value as in the solution.
For $$y=2$$
\begin{align} y &= x^2+1\nonumber \\ 2 &= x^2+1\nonumber \\ x^2 &= 1\nonumber \\ x &= \pm \sqrt{1}=\pm 1 \nonumber \end{align}
Notice that $$−1$$ is an extraneous solution.
Exercise $$\PageIndex{1}$$
Solve the given system of equations by substitution.
\begin{align} 3x−y &= −2\nonumber \\ 2x^2−y &= 0 \nonumber \end{align}
Solution:
$$(−\dfrac{1}{2},\dfrac{1}{2})$$ and $$(2,8)$$
Intersection of a Circle and a Line
Just as with a parabola and a line, there are three possible outcomes when solving a system of equations representing a circle and a line.
POSSIBLE TYPES OF SOLUTIONS FOR THE POINTS OF INTERSECTION OF A CIRCLE AND A LINE
Figure $$\PageIndex{4}$$ illustrates possible solution sets for a system of equations involving a circle and a line.
• No solution. The line does not intersect the circle.
• One solution. The line is tangent to the circle and intersects the circle at exactly one point.
• Two solutions. The line crosses the circle and intersects it at two points.
Figure $$\PageIndex{4}$$
How to: Given a system of equations containing a line and a circle, find the solution
1. Solve the linear equation for one of the variables.
2. Substitute the expression obtained in step one into the equation for the circle.
3. Solve for the remaining variable.
4. Check your solutions in both equations.
Example $$\PageIndex{2}$$: Finding the Intersection of a Circle and a Line by Substitution
Find the intersection of the given circle and the given line by substitution.
\begin{align} x^2+y^2 &= 5\nonumber \\ y &= 3x−5 \nonumber \end{align}
Solution:
One of the equations has already been solved for $$y$$. We will substitute $$y=3x−5$$ into the equation for the circle.
\begin{align} x^2+{(3x−5)}^2 &= 5\nonumber \\ x^2+9x^2−30x+25 &= 5\nonumber \\ 10x^2−30x+20 &= 0 \nonumber \end{align}
Now, we factor and solve for $$x$$.
\begin{align} 10(x^2−3x+2) &= 0\nonumber \\ 10(x−2)(x−1) &= 0\nonumber \\ x &= 2\nonumber \\ x &= 1 \nonumber \end{align}
Substitute the two x-values into the original linear equation to solve for $$y$$.
\begin{align} y &= 3(2)−5\nonumber \\ &= 1\nonumber \\ y &= 3(1)−5\nonumber \\ &= −2 \nonumber \end{align}
The line intersects the circle at $$(2,1)$$ and $$(1,−2)$$, which can be verified by substituting these $$(x,y)$$ values into both of the original equations (Figure $$\PageIndex{5}$$).
Figure $$\PageIndex{5}$$
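Example $$\PageIndex{2}$$ reduces to the quadratic $$10x^2−30x+20=0$$; a short Python sketch that recovers and verifies the two intersection points:

```python
import math

# Substituting y = 3x - 5 into x^2 + y^2 = 5 gives 10x^2 - 30x + 20 = 0.
a, b, c = 10.0, -30.0, 20.0
disc = b * b - 4 * a * c
xs = sorted([(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)])
points = [(x, 3 * x - 5) for x in xs]  # y from the linear equation

for x, y in points:
    assert math.isclose(x * x + y * y, 5.0)  # each point lies on the circle
```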
Exercise $$\PageIndex{2}$$
Solve the system of nonlinear equations.
\begin{align} x^2+y^2 &= 10\nonumber \\ x−3y &= −10 \nonumber \end{align}
Solution:
$$(−1,3)$$
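The single answer here is the tangent case: substituting $$x=3y−10$$ into $$x^2+y^2=10$$ gives $$10y^2−60y+90=0$$, whose discriminant is zero. A short Python sketch of the check:

```python
import math

# Substituting x = 3y - 10 into x^2 + y^2 = 10 gives 10y^2 - 60y + 90 = 0.
a, b, c = 10.0, -60.0, 90.0
disc = b * b - 4 * a * c   # 0.0: the line is tangent to the circle
y = -b / (2 * a)           # the single (repeated) root
point = (3 * y - 10, y)
assert math.isclose(point[0] ** 2 + point[1] ** 2, 10.0)
```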
Solving a System of Nonlinear Equations Using Elimination
We have seen that substitution is often the preferred method when a system of equations includes a linear equation and a nonlinear equation. However, when both equations in the system have like variables of the second degree, solving them using elimination by addition is often easier than substitution. Generally, elimination is a far simpler method when the system involves only two equations in two variables (a two-by-two system), rather than a three-by-three system, as there are fewer steps. As an example, we will investigate the possible types of solutions when solving a system of equations representing a circle and an ellipse.
POSSIBLE TYPES OF SOLUTIONS FOR THE POINTS OF INTERSECTION OF A CIRCLE AND AN ELLIPSE
Figure $$\PageIndex{6}$$ illustrates possible solution sets for a system of equations involving a circle and an ellipse.
• No solution. The circle and ellipse do not intersect. One shape is inside the other or the circle and the ellipse are a distance away from the other.
• One solution. The circle and ellipse are tangent to each other, and intersect at exactly one point.
• Two solutions. The circle and the ellipse intersect at two points.
• Three solutions. The circle and the ellipse intersect at three points.
• Four solutions. The circle and the ellipse intersect at four points.
Figure $$\PageIndex{6}$$
Example $$\PageIndex{3}$$: Solving a System of Nonlinear Equations Representing a Circle and an Ellipse
Solve the system of nonlinear equations.
\begin{align} x^2+y^2 &= 26 &(1)\nonumber \\ 3x^2+25y^2 &= 100 & (2) \nonumber \end{align}
Solution:
Let’s begin by multiplying equation (1) by $$−3$$,and adding it to equation (2).
\begin{align} (−3)(x^2+y^2) = (−3)(26)&\nonumber \\ −3x^2−3y^2 = −78 &\nonumber \\ \underline{3x^2+25y^2=100}&\nonumber \\ 22y^2=22& \nonumber \end{align}
After we add the two equations together, we solve for $$y$$.
\begin{align} y^2 &= 1\nonumber \\ y &= \pm \sqrt{1}=\pm 1 \nonumber \end{align}
Substitute $$y=\pm 1$$ into one of the equations and solve for $$x$$.
\begin{align} x^2+{(1)}^2 &= 26\nonumber \\ x^2+1 &= 26\nonumber \\ x^2 &= 25\nonumber \\ x &= \pm \sqrt{25}=\pm 5\nonumber \\ x^2+{(−1)}^2 &= 26\nonumber \\ x^2+1 &= 26\nonumber \\ x^2 &= 25\nonumber \\ x &= \pm \sqrt{25}=\pm 5 \nonumber \end{align}
There are four solutions: $$(5,1)$$, $$(−5,1)$$, $$(5,−1)$$,and $$(−5,−1)$$. See Figure $$\PageIndex{7}$$.
Figure $$\PageIndex{7}$$
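The elimination result can be verified numerically by substituting all four sign combinations back into both equations; a short Python sketch:

```python
import math

# Eliminating the x^2 terms gave 22y^2 = 22, so y = +/-1 and x^2 = 25.
solutions = [(sx * 5.0, sy * 1.0) for sx in (1, -1) for sy in (1, -1)]

for x, y in solutions:
    assert math.isclose(x * x + y * y, 26.0)            # the circle
    assert math.isclose(3 * x * x + 25 * y * y, 100.0)  # the ellipse
```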
Exercise $$\PageIndex{3}$$
Find the solution set for the given system of nonlinear equations.
\begin{align} 4x^2+y^2 &= 13\nonumber \\ x^2+y^2 &= 10 \nonumber \end{align}
Solution:
$$\{(1,3),(1,−3),(−1,3),(−1,−3)\}$$
Graphing a Nonlinear Inequality
All of the equations in the systems that we have encountered so far have involved equalities, but we may also encounter systems that involve inequalities. We have already learned to graph linear inequalities by graphing the corresponding equation, and then shading the region represented by the inequality symbol. Now, we will follow similar steps to graph a nonlinear inequality so that we can learn to solve systems of nonlinear inequalities. A nonlinear inequality is an inequality containing a nonlinear expression. Graphing a nonlinear inequality is much like graphing a linear inequality.
Recall that when the inequality is greater than, $$y>a$$,or less than, $$y<a$$,the graph is drawn with a dashed line. When the inequality is greater than or equal to, $$y≥a$$,or less than or equal to, $$y≤a$$,the graph is drawn with a solid line. The graphs will create regions in the plane, and we will test each region for a solution. If one point in the region works, the whole region works. That is the region we shade. See Figure $$\PageIndex{8}$$.
Figure $$\PageIndex{8}$$: (a) an example of $$y>a$$; (b) an example of $$y≥a$$; (c) an example of $$y<a$$; (d) an example of $$y≤a$$
How to: Given an inequality bounded by a parabola, sketch a graph
1. Graph the parabola as if it were an equation. This is the boundary for the region that is the solution set.
2. If the boundary is included in the region (the operator is $$≤$$ or $$≥$$), the parabola is graphed as a solid line.
3. If the boundary is not included in the region (the operator is $$<$$ or $$>$$), the parabola is graphed as a dashed line.
4. Test a point in one of the regions to determine whether it satisfies the inequality statement. If the statement is true, the solution set is the region including the point. If the statement is false, the solution set is the region on the other side of the boundary line.
5. Shade the region representing the solution set.
Example $$\PageIndex{4}$$: Graphing an Inequality for a Parabola
Graph the inequality $$y>x^2+1$$.
Solution:
First, graph the corresponding equation $$y=x^2+1$$. Since $$y>x^2+1$$ has a greater than symbol, we draw the graph with a dashed line. Then we choose points to test both inside and outside the parabola. Let’s test the points
$$(0,2)$$ and $$(2,0)$$. One point is clearly inside the parabola and the other point is clearly outside.
\begin{align} y &> x^2+1\nonumber \\ 2 &> (0)^2+1\nonumber \\ 2 &>1 & \text{True}\nonumber \\\nonumber \\\nonumber \\ 0 &> (2)^2+1\nonumber \\ 0 &> 5 & \text{False} \nonumber \end{align}
The graph is shown in Figure $$\PageIndex{9}$$. We can see that the solution set consists of all points inside the parabola, but not on the graph itself.
Figure $$\PageIndex{9}$$
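The point test in Example $$\PageIndex{4}$$ can be written as a one-line predicate; a short Python sketch (the boundary fails the test because the inequality is strict):

```python
def inside(x, y):
    """True when (x, y) satisfies the strict inequality y > x^2 + 1."""
    return y > x ** 2 + 1

checks = (inside(0, 2), inside(2, 0))  # the two test points used above
```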
Graphing a System of Nonlinear Inequalities
Now that we have learned to graph nonlinear inequalities, we can learn how to graph systems of nonlinear inequalities. A system of nonlinear inequalities is a system of two or more inequalities in two or more variables containing at least one inequality that is not linear. Graphing a system of nonlinear inequalities is similar to graphing a system of linear inequalities. The difference is that our graph may result in more shaded regions that represent a solution than we find in a system of linear inequalities. The solution to a nonlinear system of inequalities is the region of the graph where the shaded regions of the graph of each inequality overlap, or where the regions intersect, called the feasible region.
How to: Given a system of nonlinear inequalities, sketch a graph
1. Find the intersection points by solving the corresponding system of nonlinear equations.
2. Graph the nonlinear equations.
3. Find the shaded regions of each inequality.
4. Identify the feasible region as the intersection of the shaded regions of each inequality or the set of points common to each inequality.
Example $$\PageIndex{5}$$: Graphing a System of Inequalities
Graph the given system of inequalities.
\begin{align} x^2−y &≤ 0\nonumber \\ 2x^2+y &≤ 12 \nonumber \end{align}
Solution:
These two equations are clearly parabolas. We can find the points of intersection by the elimination process: Add both equations and the variable $$y$$ will be eliminated. Then we solve for $$x$$.
\begin{align} x^2−y = 0&\nonumber \\ \underline{2x^2+y=12}&\nonumber \\ 3x^2=12&\nonumber \\ x^2=4 &\nonumber \\ x=\pm 2 & \nonumber \end{align}
Substitute the x-values into one of the equations and solve for $$y$$.
\begin{align} x^2−y &= 0\nonumber \\ {(2)}^2−y &= 0\nonumber \\ 4−y &= 0\nonumber \\ y &= 4\nonumber \\\nonumber \\ {(−2)}^2−y &= 0\nonumber \\ 4−y &= 0\nonumber \\ y &= 4 \nonumber \end{align}
The two points of intersection are $$(2,4)$$ and $$(−2,4)$$. Notice that the equations can be rewritten as follows.
\begin{align} x^2-y & ≤ 0\nonumber \\ x^2 &≤ y\nonumber \\ y &≥ x^2\nonumber \\\nonumber \\\nonumber \\ 2x^2+y &≤ 12\nonumber \\ y &≤ −2x^2+12 \nonumber \end{align}
Graph each inequality. See Figure $$\PageIndex{10}$$. The feasible region is the region between the two equations bounded by $$2x^2+y≤12$$ on the top and $$x^2−y≤0$$ on the bottom.
Figure $$\PageIndex{10}$$
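A quick numerical check of Example $$\PageIndex{5}$$: both intersection points satisfy the two inequalities (the operators are $$≤$$, so the boundary is included), a point between the curves is feasible, and a point below $$y=x^2$$ is not. A short Python sketch:

```python
def feasible(x, y):
    """True when (x, y) satisfies both x^2 - y <= 0 and 2x^2 + y <= 12."""
    return x ** 2 - y <= 0 and 2 * x ** 2 + y <= 12

corners = [(2, 4), (-2, 4)]                            # intersection points found above
on_boundary = all(feasible(x, y) for x, y in corners)  # True: <= includes them
between = feasible(0, 1)                               # inside the feasible region
below = feasible(0, -1)                                # below y = x^2
```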
Exercise $$\PageIndex{5}$$
Graph the given system of inequalities.
\begin{align} y &≥ x^2−1\nonumber \\ x−y &≥ −1 \nonumber \end{align}
Solution:
Shade the area bounded by the two curves, above the quadratic and below the line.
Key Concepts
• There are three possible types of solutions to a system of equations representing a line and a parabola: (1) no solution, the line does not intersect the parabola; (2) one solution, the line is tangent to the parabola; and (3) two solutions, the line intersects the parabola in two points. See Example.
• There are three possible types of solutions to a system of equations representing a circle and a line: (1) no solution, the line does not intersect the circle; (2) one solution, the line is tangent to the circle; (3) two solutions, the line intersects the circle in two points. See Example.
• There are five possible types of solutions to the system of nonlinear equations representing an ellipse and a circle:
(1) no solution, the circle and the ellipse do not intersect; (2) one solution, the circle and the ellipse are tangent to each other; (3) two solutions, the circle and the ellipse intersect in two points; (4) three solutions, the circle and ellipse intersect in three places; (5) four solutions, the circle and the ellipse intersect in four points. See Example.
• An inequality is graphed in much the same way as an equation, except for > or <, we draw a dashed line and shade the region containing the solution set. See Example.
• Inequalities are solved the same way as equalities, but solutions to systems of inequalities must satisfy both inequalities. SeeExample. | 2018-01-22 16:13:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 23, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000040531158447, "perplexity": 500.35933688910836}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891485.97/warc/CC-MAIN-20180122153557-20180122173557-00567.warc.gz"} |
https://bigelectrons.com/post/crypto/blockchain-documentation/ | # blockchain-documentation
This is a summary of the concepts around Blockchain technology!
1. There are two types of software architectures - Centralized & Distributed
2. A hybrid architecture is also possible - Centrally Distributed architectures
3. Blockchain simply put is a tool to maintain Trust & Integrity in a peer-to-peer system
4. The challenge is to maintain Trust & Integrity even in the worst of all situations
# The Blockchain challenge - The idea behind Mining or Proof-Of-Work:
Here is a very simplistic view of what the actual miners do when mining for the Bitcoin, or rather, to put it in mathematical terms, solving a puzzle - the puzzle is always to find the leading zeros in the resulting hash of a new block. For example, take a look at the following function written in Scala:
// SHA-256 of a String, formatted as 64 lowercase hex characters
def sha256Hash(text: String): String = String.format("%064x", new java.math.BigInteger(1, java.security.MessageDigest.getInstance("SHA-256").digest(text.getBytes("UTF-8"))))

// Keep hashing until the hash starts with the required leading zeros (the difficulty)
@scala.annotation.tailrec
def mineSomeShit(str: String, appender: String, difficulty: String): String = {
  val hashed = sha256Hash(str)
  println(hashed)
  if (hashed.startsWith(difficulty))  // found the required leading zeros
    hashed
  else {
    val newString = str + appender    // append the nonce and try again
    mineSomeShit(newString, appender + appender, difficulty)
  }
}
We will hash the String "Hello" and see if the resulting hash contains a leading zero (the difficulty we set). If we do not find the expected number of leading zeros, we append a 1 to the end of "Hello", re-compute the hash, and check again. So the appender that we pass to the mineSomeShit method is the nonce!
mineSomeShit("Hello", "1", "0")
See, I'm setting the difficulty to a single zero, which is what I expect to see at the starting position of the resulting hash. If I find a match in the resulting hash, I return; but if not, I keep appending a 1 to the end of "Hello" and continue hashing the resulting new String. This way of adding an arbitrary String (in our case "1") is, in Bitcoin terms, called a nonce! So a test run on my Mac would look like this:
scala> mineSomeShit("Hello", "1", "0")
185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969
948edbe7ede5aa7423476ae29dcd7d61e7711a071aea0d83698377effa896525
7c01a6b99ca7d331c85b2f6826cf5c6429613d617ced1a761e36298de903c1a5
9e77cc0f3906d514f79889ec8d49b94488f82178fb368fef286b26f3964aa077
8e8a20333f0fc59553b7a14a269206688508e653f5058e891711eba61cd5df17
3740a40fb9b1f71f6e69b8268335aaf451b077c0ffe0df94d46c229175f21a16
00c547c99864a134db0c95e459b885e34b5e3ecd70f134e574c593e7fb113ef3
So there we go! We found out the hash with a single leading zero - I solved this puzzle - I get a ShitCoin for my "Proof of Work". Let us now notch it up a little by setting the difficulty to finding 2 leading zeros in our resulting hash. A test run on my Mac is as below:
scala> mineSomeShit("Hello", "1", "00") // Setting the difficulty to two leading zeros
185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969
948edbe7ede5aa7423476ae29dcd7d61e7711a071aea0d83698377effa896525
7c01a6b99ca7d331c85b2f6826cf5c6429613d617ced1a761e36298de903c1a5
9e77cc0f3906d514f79889ec8d49b94488f82178fb368fef286b26f3964aa077
8e8a20333f0fc59553b7a14a269206688508e653f5058e891711eba61cd5df17
3740a40fb9b1f71f6e69b8268335aaf451b077c0ffe0df94d46c229175f21a16
00c547c99864a134db0c95e459b885e34b5e3ecd70f134e574c593e7fb113ef3
Ok! I get another ShitCoin - Glad that I can do this with my Mac! Let us notch it up even higher, this time around, the puzzle to solve is to find a hash with 3 leading zeros - Guess what, my Mac could not handle it, the JVM could not handle it! Here it is:
scala> mineSomeShit("Hello", "1", "000") // I'm setting the difficulty to 3 leading zeros
185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969
948edbe7ede5aa7423476ae29dcd7d61e7711a071aea0d83698377effa896525
7c01a6b99ca7d331c85b2f6826cf5c6429613d617ced1a761e36298de903c1a5
9e77cc0f3906d514f79889ec8d49b94488f82178fb368fef286b26f3964aa077
8e8a20333f0fc59553b7a14a269206688508e653f5058e891711eba61cd5df17
3740a40fb9b1f71f6e69b8268335aaf451b077c0ffe0df94d46c229175f21a16
00c547c99864a134db0c95e459b885e34b5e3ecd70f134e574c593e7fb113ef3
fe6b9a3f791c9b5b8afc66b7d9975581e197575de38bb51e354dd3f537e58796
d94210fea07e7b95235e6a544338fa536751db1a941ee6659915e5ec8ae4c23f
1e09d7ac0a3f950d8a244ec50362a52c2d8b4f97b2d6703cf2af787f8836d82c
f56f777268162d1a149a69fa8aab070ee00e2215b3fea2123c016ea997185632
3b5b867d31ab7daf63c0a66eefd86a5cc0801b5044f2a21bd50f2e1cee671ec4
e69f3899145b9e4c40802235183ea90f3cb21051fabd05dfe341166a2f4fdb4c
7c16a5ed55c6bda49bf5ef8cd72cfa45f24d4189ee987d36973642a1dbebaffe
72d47bce4c4b90e3a2f9a5d29a2acb377aa5d18c4da1c611fd934764e7ba2c0d
java.lang.OutOfMemoryError: Java heap space
This is exactly what happens when mining Bitcoins. You need computational power as the difficulty goes higher and higher with every batch of blocks! There is no other way to find a hash with leading zeros than brute-force trial and error: adjusting a portion of the input block and calculating hashes over and over again until one of the hashes fulfills the puzzle by chance. Well, finding a hash with leading zeros is one thing, while setting a target on the found hash with leading zeros is what makes the mining even harder!
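For comparison, here is a minimal Python sketch of the same brute-force search, but with an incrementing integer nonce (as Bitcoin actually does) rather than the doubling string appender above; the input stays small, which avoids the exponentially growing string that exhausted the JVM heap in the Scala run:

```python
import hashlib

def mine(prefix, difficulty):
    """Brute-force an integer nonce until sha256(prefix + nonce)
    starts with `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prefix}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Hello", 2)  # two leading zeros: a few hundred tries on average
```

Each extra zero of difficulty multiplies the expected number of tries by 16, which is why real mining quickly outgrows a laptop.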
Roughly every 14 days the Bitcoin difficulty is adjusted such that the time between successive blocks remains constant at 10 minutes.
The screenshot below shows the latest block info (as of 3-Nov-2017) from the Bitcoin network:
The puzzle is to find the hash with 18 leading zeros, as can be seen in the Hash! So you can imagine now why a million dollars is needed to solve a single puzzle!
If you then look at the following URL, you can see the latest block info and from there figure out the current difficulty for mining a block!
https://blockchain.info/block
500 peta hashes per second are produced by the Blockchain network. The average of 10 minutes (the time taken to mine a single block) is maintained dynamically by the network!
SHA256 is just about flipping bits - flipping bits needs energy, and energy means heat. With the heat generated from this hardware I could heat my household or toast a bread. This hardware is specifically designed to do hashes - if you do not cool it, it melts! Doing this in Chennai, my hometown which never has winter, would not be economical!
While at it, I wanted to create a simple private Ethereum network, and I did manage to do it and run the nodes as Docker containers. I just ran two nodes on my Mac and I never saw my battery drain so fast! These Blockchain networks are certainly power-thirsty! Take a look here for the demo: https://github.com/joesan/lab-chain
Apart from this, if you can grasp the ideas behind the following topics, you have understood somewhat technically what Blockchain is and how it works in the Bitcoin setup.
1. Transactions
2. Difficulty
3. Hash
4. Nonce
5. Double Spending Problem
6. Byzantine General's problem
You can find more details on the above mentioned topics at this page: https://en.bitcoin.it/wiki/Main_Page | 2022-12-05 02:11:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20241977274417877, "perplexity": 1640.4973578260144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00388.warc.gz"} |
https://inquiryintoinquiry.com/2018/10/08/information-comprehension-x-extension-%E2%80%A2-comment-8/ | ## { Information = Comprehension × Extension } • Comment 8
So what is all this fuss about the relation between inquiry and signs, as analyzed in Peirce’s theories of their structure and function and synthesized in his theory of information?
The best way I’ve found to see where the problem lies is to run through a series of concrete examples of the sort Peirce used to illustrate his notions of information, inquiry, and signs, examples just complex enough to show the interplay of main ideas.
There is an enlightening set of examples in Peirce’s early lectures on the Logic of Science. Here is the blog post I wrote to set up their discussion:
Another angle from which to approach the incidence of signs and inquiry is by way of C.S. Peirce’s “laws of information” and the corresponding theory of information he developed from the time of his lectures on the “Logic of Science” at Harvard University (1865) and the Lowell Institute (1866).
When it comes to the supposed reciprocity between extensions and intensions, Peirce, of course, has another idea, and I would say a better idea, partly because it forms the occasion for him to bring in his new-fangled notion of “information” to mediate the otherwise static dualism between the other two. The development of this novel idea brings Peirce to enunciate the formula:
$\mathrm{Information} = \mathrm{Comprehension} \times \mathrm{Extension}$
But comprehending what in the world that might mean is a much longer story, the end of which your present teller has yet to reach. So, this time around, I will take up the story near the end of the beginning of Peirce’s own telling of it, for no better reason than that’s where I myself initially came in, or, at least, where it all started making any kind of sense to me. And from this point we will find it easy enough to flash both backward and forward, to and fro, as the occasions arise for doing so. | 2018-10-18 18:22:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4453302025794983, "perplexity": 878.3768175807156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511897.56/warc/CC-MAIN-20181018173140-20181018194640-00489.warc.gz"} |
http://tex.stackexchange.com/questions/59313/how-to-change-the-fontsize-of-the-part-command | # How to change the fontsize of the \part command
I am using the report style and I would just like to change the font size of the \part command. How do I do it?
You can patch the relevant command with the help of the »etoolbox« package. The \@part macro is responsible for the formatting of the part prefix and the heading itself. It contains two commands that determine the font sizes. Hence you have to patch it two times.
\documentclass[11pt]{report}
\usepackage{etoolbox}
\makeatletter
% size of the ``Part I'' prefix
\patchcmd{\@part}{\huge}{\large}{}{}
% size of the part title itself
\patchcmd{\@part}{\Huge}{\large}{}{}
% same change for unnumbered parts (\part*)
\patchcmd{\@spart}{\Huge}{\large}{}{}
\makeatother
\begin{document}
\part{Foo}
\end{document}
Choose the font size that you need as remarked in the comments.
As remarked in the comments, I added a patch for the \@spart macro that is responsible for unnumbered part headings.
+1, but consider to add a patch for \@spart (unnumbered parts). – lockstep Jun 10 '12 at 13:14
@lockstep: Thanks for the hint. I have supplemented my example correspondingly. – Thorsten Donig Jun 10 '12 at 13:17
Here's another option, using the sectsty package (instead of \large, use the font size that best suits your needs):
\documentclass{report}
\usepackage{sectsty}
\partfont{\large}
\begin{document}
\part*{A test unnumbered part}
\part{A test numbered part}
\end{document}
http://lambda-the-ultimate.org/node/2494 | grammars as a shared executable specification for language processing tools
Parsers written using our library are remarkably similar to BNF; they are almost entirely free of solution-space (i.e., programming language) artifacts. Our system allows the grammar to be specified as a separate class or mixin, independent of tools that rely upon it such as parsers, syntax colorizers etc. Thus, our grammars serve as a shared executable specification for a variety of language processing tools. This motivates our use of the term executable grammar. We discuss the language features that enable these pleasing results, and, in contrast, the challenge our library poses for static type systems.
Executable Grammars in Newspeak (pdf) Gilad Bracha
Executable Grammars in Newspeak JAOO 2007 slides (pdf)
Need for explanation and a short rant
I wonder about two things. First: why isn't those article presented on the homepage but in the discussion forum instead?
Secondly, lots of annoying complaints.
I miss a rationale for this work, but maybe it's just that I don't get this new trend of writing all kinds of handcrafted parsers in a functional style. My consciousness obviously misses some information about the subconscious needs of other programmers.
IMO this trend towards parser combinators or the OO-style interpreter design pattern indicates a regression behind parser generators (at least for CFGs) and the clean separation of grammar descriptions and actions. However, one does not find many useful grammars in the infosphere anyway. ANTLR has kind of a repository, but even those are cluttered with all kinds of parser actions and are mostly useless for tools other than ANTLR. Now things become even worse with idiosyncratic embedded formats that require a very particular general-purpose language to be interpreted. The public description of a language becomes an implementation detail. Upside-down evolution into programming-language provincialism.
Fortunately there is still XML for the one thing it did right.
So my naive question is: what exactly is wrong with EBNF or PEG? I'm completely stunned.
It might very well have been
It might very well have been on the front page somewhere, this seems like a dupe to me but I might be mistaken.
I'm not exactly thrilled about parser combinators either, at least from the perspective of building non-toy parsers. Related to my own field, a parser combinator seems impossible to make incremental. Why would anyone want to do that when they could just crank out some quick ANTLR code instead?
As for EBNF, it's hardly sufficient: such standard grammars are merely implementations themselves and don't capture the pure essence of the language's syntax. For example, there is a way to translate precedence into an EBNF, but reverse engineering precedence from an EBNF is impossible. When considering performance, error recovery, and being incremental, you need more than an EBNF.
Actually, I'm still seeing a lot of hand coded recursive descent parsers, which doesn't look half bad if done in a decent language like Scala. For what I'm doing, there are even pretty straightforward techniques for making these parsers incremental.
It should definitely be
It should definitely be possible to make at least some classes of parsing combinator incremental. There's more than one possible implementation strategy, and one is to 'compile' via an explicit grammar representation - you don't even have to stick to just the one parsing algorithm once you've done that.
For example, there is a way
For example, there is a way to translate precedence into an EBNF, but reverse engineering precedence from an EBNF is impossible.
I do not understand this claim. A usual way to translate precedence into EBNF is to split rules and organize them in a hierarchical fashion:

sum_expr: mul_expr (('+'|'-') mul_expr)*
mul_expr: atom (('*'|'/') atom)*
atom: NAME | NUMBER
What kind of use case do you have in mind where this is infeasible or hard to perform?
I've not completely made up my mind about the unnaturalness of the representation. I guess you refer to the AST/CST mismatch? There are tons of libraries that address the object/relation mismatch for any mainstream programming language, and even more literature about it, but the AST/CST mismatch seems to be only subconsciously present and you see all kinds of half-baked workarounds. In discussion forums people come up with ideas like translating parse trees to XML to be able to deal with them. It's grotesque, but there is very little analysis.
Maybe this paper is an
Maybe this paper is an interesting read in the context of the above comments.
Describing parsers written
Describing parsers written with parser combinators as 'hand-crafted' is IMO misleading. The combinators are generally an embedding of an appropriate metalanguage much as a parser generator would work with, and where the combinators aren't more powerful than a sane parser generator would be (Parsec would be such an example, with support for all the context-sensitivity you could ever want) they can actually create a representation of the grammar and optimise it before an actual parser is created.
It's true that not everyone is going for a clean separation of grammars and actions, but in many cases this is actually the right thing - providing that separation requires additional code and possibility for error. If you really care that much you can, of course, parameterise the parser on the actions!
As for EBNF and PEG, they're not necessarily powerful enough and they're certainly no good for factoring a grammar. Despite this, a good many parsing combinator libraries take one of the two as a starting point.
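To make the embedding concrete, here is a minimal parser-combinator sketch (in Python, not from the thread) for the precedence grammar quoted earlier. Note how each combinator line mirrors one EBNF production, and how the semantic actions are passed in separately rather than baked into the grammar, as suggested above:

```python
import operator
import re

def regex(pattern):
    """Primitive parser: match a regex at position i, return (lexeme, next_i) or None."""
    rx = re.compile(pattern)
    def parse(s, i):
        m = rx.match(s, i)
        return (m.group(), m.end()) if m else None
    return parse

def mapped(p, f):
    """Apply a function to a parser's result."""
    def parse(s, i):
        r = p(s, i)
        return (f(r[0]), r[1]) if r else None
    return parse

def chainl(operand, op, apply_op):
    """operand (op operand)* with left-associative application --
    one precedence level, parameterised on its semantic action."""
    def parse(s, i):
        r = operand(s, i)
        if r is None:
            return None
        acc, i = r
        while True:
            ro = op(s, i)
            if ro is None:
                return acc, i
            sym, j = ro
            rr = operand(s, j)
            if rr is None:
                return acc, i
            rhs, i = rr
            acc = apply_op(sym, acc, rhs)
    return parse

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.floordiv}
apply_op = lambda sym, a, b: OPS[sym](a, b)

atom     = mapped(regex(r"\d+"), int)                  # atom:     NUMBER
mul_expr = chainl(atom, regex(r"[*/]"), apply_op)      # mul_expr: atom (('*'|'/') atom)*
sum_expr = chainl(mul_expr, regex(r"[+-]"), apply_op)  # sum_expr: mul_expr (('+'|'-') mul_expr)*
```

Swapping `apply_op` for a tree-building action turns the same grammar into an AST producer, which is the grammar/action separation the thread is arguing about.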
The combinators are generally an embedding of an appropriate metalanguage much as a parser generator would work with
I think this is a big part of the understandable reaction to them that Kay expressed. Work on parser combinators has been driven by the Haskell community, where embedded domain-specific languages are a preferred approach. Ease of development and the other advantages provided by the embedding are traded off against the advantages of using an independent DSL with its own custom syntax. Choosing the DSEL side of this tradeoff is part of the Haskell philosophy, which I think was first explicitly expressed in Hudak's Building Domain-Specific Embedded Languages and later in Modular Domain Specific Languages and Tools.
However, those not steeped in that philosophy often find these embedded DSLs rather clunky (technical PLT term), particularly when compared to existing non-embedded DSL equivalents. Parsers are a good example of that.
There're definite tradeoffs - myself I've found the non-embedded DSLs clunkier for my own purposes. With Haskell at least I suspect the clunkiness tends to be a small amount of constant overhead and some metalanguage tokens that could be shorter if they weren't competing with the rest of the host language for namespace - I'd welcome good counterexamples though, even if I'd be likely to take them as a challenge to match!
edit: There's an important disclaimer here - my preferred flavours of combinator have existing sugar for them in the form of things like the do notation. I've talked before about looking for more general notions of binding to sugar, this would be somewhere it could be applied.
sugar
Could you say more about what you mean by
"...my preferred flavours of combinator have existing sugar for them in the form of things like the do notation. I've talked before about looking for more general notions of binding to sugar..."
oh, honey honey
Here's a recipe for enhancing support for embedded domain specific languages in a host language, without resorting to redefinable syntax (macros):
1. Come up with a way of translating from a source language (a standalone DSL) to code in the host language. Under some circumstances, and many more if you squint, this translation is known as "denotational semantics". (Explaining that further would require a few more paragraphs, available upon request.)
2. If your host language is a pure functional language, you're going to want to define a generic data type which is capable of tractably expressing the translated code from point #1, otherwise your translated code is going to be pretty unmanageable. In Haskell, this data type is known as a warm fuzzy thing, which for historical reasons is abbreviated as "monad".
3. Define a syntax which exploits the warm fuzzy data type's genericity to allow the translated output from multiple source languages to be expressed consistently using the same syntactic construct. In Haskell, the necessary genericity is provided by the typeclass feature, and the syntactic construct is 'do' notation.
So, whenever you write code using Haskell's do notation, if you're perverse (like me) you can think of what you're doing as manually expressing that part of the program as the output of a denotational compiler from some source language(s). The source language may not explicitly exist, but it already has a ready-made semantics — the monadic code you're writing is the output of that semantics — and it'd be easy enough to make up a syntax for it.
Any of steps 1 through 3 could be done in a different way. Each of them involve a slew of choices and deliberate restrictions. It's easy to imagine using a similar approach with different details, although it'd require some good language design skills to do it as generally and well.
Critique of this characterization welcome. I hope Philippa will chime in if I missed anything.
Critique of this characterization welcome.
I'm trying to decide where the approach discussed in this thread/paper fits into this characterization.
I can see where point 1 applies, but to my naive eye it doesn't seem to mesh with points 2 and 3 quite as well. I'm open to arguments though. ;-)
and Hot Dog, too.
That's a thought-provoking connection, thanks. I'd say that my points 2 and 3 are outside the scope of the paper, but if you were going to use the paper's approach to embed realistic DSLs, those points could become relevant.
For example, if you used the paper's approach to embed unusual control handling via CPS, or mutable state, then something like monads and/or 'do' syntax could come in handy. It's also quite possible that you could come up with some general syntactic shortcuts specifically tuned to the paper's approach, which would be a point 3 feature (and could be interesting!)
The paper states that it "leaves aside the solved problem of writing a parser/type-checker, for embedding object language objects into the metalanguage". My points 2 & 3 are about addressing that problem without resorting to a special parser, other than for some kind of fixed multi-DSL syntax such as 'do' notation.
Personally, once I
Personally, once I encountered parser combinators, I never understood why you'd want to deal with an entirely separate language for them. A whole new language and semantics to learn seems like an unwarranted additional level of complexity. Of course, this depends on the parser combinator library's friendliness, ie. flexible error propagation, etc.
Homepage vs. forum
why isn't those article presented on the homepage but in the discussion forum instead?
Isaac is not an official contributing editor, so cannot post to the homepage. If he'd like to be one, he should ask Ehud (who I assume is fairly busy with his move right now, btw).
[Edit: I've promoted this item to the front page, and I'm off to spend my bingo winnings.]
Bingo!
Editing
I'm well aware that what I find interesting for a moment might not be that interesting to others, and in the past LtU editors have usually managed to push forum topics they considered generally interesting to the front page within a short time.
You quoted a research paper
You quoted a research paper and also quoted from it. That is real news, and quite different in character from making opinionated statements like mine in the subsequent comment section, which give rise to a true forum topic of the kind Parser combinators vs Parser generators.
Just a quick note, since I
Just a quick note, since I am still buried here...
If I am not mistaken, I invited Isaac to become a CE, but he prefers the current situation (as he explains above). I usually promote to the home page items that seem to me worth the attention.
However, one of the nice things about LtU is that people share things they find interesting, if there's a chance they will interest others. So, Isaac, the invitation is still open, as a CE you can decide which items to post to the home page and which you prefer to put in the discussion group. | 2017-11-19 08:49:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43307340145111084, "perplexity": 1522.3559372968878}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805466.25/warc/CC-MAIN-20171119080836-20171119100836-00755.warc.gz"} |
http://kb.osu.edu/dspace/handle/1811/19242 | # LASER ABSORPTION SPECTROSCOPY OF HYDROCARBON FLAMES
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/19242
Title: LASER ABSORPTION SPECTROSCOPY OF HYDROCARBON FLAMES
Creators: Cheskis, Sergey; Derzy, Igor; Lozovsky, Vladimir A.
Issue Date: 1999
Publisher: Ohio State University
Abstract: Intracavity Laser Absorption Spectroscopy (ICLAS) and Cavity Ring-Down Spectroscopy (CRDS) were used to detect absorption spectra of $CH\,(C^{2}\Sigma^{-} \leftarrow X^{2}\Pi)$ at 314 nm, $^{1}CH_{2}\,(\tilde{b}^{1}B_{1} \leftarrow \tilde{a}^{1}A_{1})$ at 590 and 620 nm, $NH\,(A^{3}\Pi \leftarrow X^{3}\Sigma^{-})$ at 336 nm, and $NH_{2}\,(\tilde{A}^{2}A_{1} \leftarrow \tilde{X}^{2}B_{1})$ at 598 nm in a low-pressure (30 Torr) stoichiometric methane/oxygen/nitrogen flat flame doped with a small amount of nitrous oxide. The CH and NH radicals were monitored by CRDS, whereas $^{1}CH_{2}$ and $NH_{2}$ were monitored by ICLAS. The absolute concentration profiles of those radicals were measured. The radical absorption spectra were recorded with good signal-to-noise ratio. The spectra of the $^{1}CH_{2}$ radical were measured in different spectral ranges, which allowed a better determination of its absorption cross section. For the first time the absolute concentrations of NH and $NH_{2}$ were measured in flames of this kind. The agreement between experimental results and model predictions based on the GRI-Mech 2.11 mechanism is discussed.
Description: Author Institution: School of Chemistry, Tel Aviv University; Semenov Institute of Chemical Physics, Russian Academy of Sciences
URI: http://hdl.handle.net/1811/19242
Other Identifiers: 1999-MF-04
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=ufa&paperid=469&option_lang=eng | Ufimskii Matematicheskii Zhurnal
Ufimsk. Mat. Zh., 2019, Volume 11, Issue 2, Pages 19–35 (Mi ufa469)
Difference schemes for partial differential equations of fractional order
A. K. Bazzaevab, I. D. Tsopanovb
a Khetagurov North-Ossetia State University, Vatutina str., 44-46, 362025, Vladikavkaz, Russia
Abstract: Nowadays, fractional differential equations arise while describing physical systems with such properties as power nonlocality, long-term memory and fractal property. The order of the fractional derivative is determined by the dimension of the fractal. Fractional mathematical calculus in the theory of fractals and physical systems with memory and non-locality becomes as important as classical analysis in continuum mechanics.
In this paper, we consider higher-order difference schemes of approximation for differential equations with fractional-order derivatives with respect to both the spatial and the time variables. Using the maximum principle, we obtain a priori estimates and prove the stability and uniform convergence of the difference schemes.
Keywords: initial-boundary value problem, fractional differential equations, Caputo fractional derivative, stability, slow diffusion equation, difference scheme, maximum principle, uniform convergence, a priori estimate, heat capacity concentrated at the boundary.
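As background for the abstract, here is the classical L1 discretization of a Caputo derivative of order 0 < α < 1 on a uniform grid, in Python (a standard textbook scheme, not necessarily the higher-order schemes of the paper itself):

```python
import math

def caputo_l1(u, tau, alpha):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1
    at t_N = N*tau, given samples u[0], ..., u[N] on a uniform grid:
        D^alpha u(t_N) ~ sum_k b_k (u[N-k] - u[N-k-1]) / (Gamma(2-alpha) * tau^alpha)
    with weights b_k = (k+1)^(1-alpha) - k^(1-alpha)."""
    n = len(u) - 1
    s = 0.0
    for k in range(n):
        b_k = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        s += b_k * (u[n - k] - u[n - k - 1])
    return s / (math.gamma(2 - alpha) * tau ** alpha)
```

The scheme is exact for linear functions: for u(t) = t it reproduces the known Caputo derivative t^(1-α)/Γ(2-α).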
Full text: PDF file (473 kB)
References: PDF file HTML file
English version:
Ufa Mathematical Journal, 2019, 11:2, 19–33 (PDF, 404 kB); https://doi.org/10.13108/2019-11-2-19
Bibliographic databases:
UDC: 519.633
MSC: 65M12
Citation: A. K. Bazzaev, I. D. Tsopanov, “Difference schemes for partial differential equations of fractional order”, Ufimsk. Mat. Zh., 11:2 (2019), 19–35; Ufa Math. J., 11:2 (2019), 19–33
Citation in format AMSBIB
\Bibitem{BazTso19}
\by A.~K.~Bazzaev, I.~D.~Tsopanov
\paper Difference schemes for partial differential equations of fractional order
\jour Ufimsk. Mat. Zh.
\yr 2019
\vol 11
\issue 2
\pages 19--35
\mathnet{http://mi.mathnet.ru/ufa469}
\transl
\jour Ufa Math. J.
\yr 2019
\vol 11
\issue 2
\pages 19--33
\crossref{https://doi.org/10.13108/2019-11-2-19}
• http://mi.mathnet.ru/eng/ufa469
• http://mi.mathnet.ru/eng/ufa/v11/i2/p19
This publication is cited in the following articles:
1. V. I. Vasilev, A. M. Kardashevskii, “Iteratsionnaya identifikatsiya koeffitsienta diffuzii v nachalno-kraevoi zadache dlya uravneniya subdiffuzii”, Sib. zhurn. industr. matem., 24:2 (2021), 23–37
https://cscheid.net/2011/12/13/the-beauty-of-roots-a-facet-demo.html | John Baez over at his new blog Azimuth has a post with an amazing looking fractal: the set of all roots of all polynomials with coefficients -1 or 1. Since it’s just’’ a set of points, it seemed like the perfect opportunity to try Facet on a large, good-looking dataset, and here is the result. I think it looks pretty nice. If you want to know more about the mathematics behind it, read Baez’s post. If you care about the visualization details of this, read on!
The original dataset used by Baez in the pictures is the set of all roots of polynomials of degree up to 24. That gives about 400 million points, and at 8 bytes per point, we’re talking 3.2GB of data. Not a good idea :) What I show here are the roots of polynomials of degree up to 15. It’s still fairly large, clocking at almost two million points. Still, the total amount of data being fetched from the server is only about 15MB. This would be hard to do in anything but WebGL, and would be painful to write in anything but Facet.
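Generating a small slice of that dataset is straightforward; the sketch below uses NumPy's `roots` to enumerate the roots of every ±1-coefficient polynomial up to a given degree (illustrative code, not the code behind this demo):

```python
import itertools
import numpy as np

def littlewood_roots(max_degree):
    """Roots of all polynomials with coefficients in {-1, +1}
    of degree 1 up to max_degree (p and -p counted separately)."""
    pts = []
    for d in range(1, max_degree + 1):
        for coeffs in itertools.product((-1.0, 1.0), repeat=d + 1):
            pts.extend(np.roots(coeffs))  # leading coefficient is +-1, never 0
    return np.asarray(pts)

# Degree <= 8 enumerates 1020 polynomials and 7172 roots.
pts = littlewood_roots(8)
```

A classical bound says every such root lies in the annulus 1/2 < |z| < 2, which is why the fractal fills an annulus around the unit circle.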
It’s worth mentioning that the whole thing is 180 lines of Javascript, of which about half is jQuery and GUI-related cruft, and the other half is Facet. The actual rendering is done in two passes. The first pass splats additive Gaussian blobs of adjustable size and weight onto a floating-point texture (so that we don’t get too much accumulation error). The shape of the gaussian blobs is computed in a fragment shader. Then, we read back the texture and pass it through a simple tonemapping and colormap on another shader. If you read the source, however, you’ll see that there’s no shaders being written anywhere: they’re all synthesized from the Javascript expressions.
The bit that took a lot of ugly parameter hacking was getting a pleasant tradeoff between a global look of the fractal structure, while still seeing details when zooming in. A fixed screen-space width for each blob looks bad (You can’t really see the points when deep zooming, they become too small), but a fixed world-space width for each blob looks bad too (the blobs never resolve into roots). The solution is to use, essentially, the geometric mean between those two sizes. It works well in practice, but I can’t really justify it theoretically. | 2018-10-22 15:59:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28250691294670105, "perplexity": 763.1836946398332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515352.63/warc/CC-MAIN-20181022155502-20181022181002-00252.warc.gz"} |
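For reference, that compromise amounts to the following (with `screen_w` the fixed screen-space width in pixels and `world_w_px` the world-space width projected into pixels at the current zoom; the names are mine, not Facet's):

```python
import math

def blob_width(screen_w, world_w_px):
    """Geometric mean of the two candidate blob widths: the width tracks
    the world-space size under zoom, but only at half the rate, so blobs
    neither vanish when zooming in nor smear when zooming out."""
    return math.sqrt(screen_w * world_w_px)
```

The result always lies between the two extremes, which is exactly the tradeoff described above.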
https://docs.arnoldrenderer.com/display/A5AFHUG/Volume+Implicit | # Volume Implicit
Volume implicit nodes can be used to load OpenVDB files and render them as implicit surfaces.
#### Field Channel
Volume channel used as the field to define the implicit surface.
#### Solver
The uniform solver may be used for arbitrary fields. It works by taking small steps through the field to find the surface. This makes it relatively slow, but suitable for arbitrary fields generated by a procedural texture shader for example. When the field is a level set, for example from a level set grid in a VDB file, the levelset solver may be used instead for better performance and quality. Level sets guide the solver towards the surface, to converge quickly with few steps.
#### Threshold
The surface is defined where the field value equals the threshold, i.e. by the implicit equation:

$field(x) = threshold$
#### Samples
The number of samples used to find intersection points. Increase this value to avoid artifacts such as holes.
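A uniform solver of the kind described above can be modeled as follows (an illustrative Python sketch of the idea, not Arnold's implementation): step through the field at fixed intervals, and once the field value crosses the threshold, refine the hit point by bisection.

```python
def march_to_surface(field, origin, direction, threshold, step=0.01, max_t=10.0):
    """Uniform-step root finding for field(p) = threshold along a ray."""
    t = 0.0
    p = tuple(o + t * d for o, d in zip(origin, direction))
    prev = field(p) - threshold
    while t < max_t:
        t += step
        p = tuple(o + t * d for o, d in zip(origin, direction))
        cur = field(p) - threshold
        if prev * cur <= 0.0:                      # sign change: surface crossed
            lo, hi = t - step, t                   # refine by bisection
            for _ in range(30):
                mid = 0.5 * (lo + hi)
                pm = tuple(o + mid * d for o, d in zip(origin, direction))
                if (field(pm) - threshold) * prev <= 0.0:
                    hi = mid                       # mid is past the crossing
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev = cur
    return None                                    # no intersection found
```

Smaller steps (more samples) reduce the chance of stepping over thin features, which is exactly why raising Samples fixes hole artifacts.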
VDB volume rendered as Volume Implicit
• No labels | 2021-10-28 20:45:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42881858348846436, "perplexity": 1525.361685483955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588526.57/warc/CC-MAIN-20211028193601-20211028223601-00415.warc.gz"} |
https://www.neetprep.com/question/71332-gas-equationPVT-constant-true-constant-mass-ideal-gasundergoing-Isothermal-change-Adiabatic-change-Isobaric-change-type-change/126-Physics--Kinetic-Theory-Gases/688-Kinetic-Theory-Gases | The gas equation $\frac{\mathrm{PV}}{\mathrm{T}}=$ constant is true for a constant mass of an ideal gas undergoing
1. Isothermal change | 2019-10-18 16:43:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6879294514656067, "perplexity": 1002.5651867845721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684226.55/warc/CC-MAIN-20191018154409-20191018181909-00514.warc.gz"} |
https://deep-and-shallow.com/2021/02/25/neural-oblivious-decision-ensemblesnode-a-state-of-the-art-deep-learning-algorithm-for-tabular-data/ | Deep Learning brought about revolutions in many machine learning problems from the field of Computer Vision, Natural Language Processing, Reinforcement Learning, etc. But tabular data still remains firmly under classical machine learning algorithms, namely the gradient boosting algorithms(I have a whole series on different Gradient Boosting algorithms, if you are interested).
Intuitively, this is strange, isn’t it? Neural networks are universal approximators, and ideally they should be able to approximate the function even in the tabular data domain. Or maybe they can, but need a humongous amount of data to properly learn that function? But how do Gradient Boosted Trees do it so well? Maybe the inductive bias of a decision tree is well suited for the tabular data domain?
If they are so good, why don’t we use decision trees in neural networks? Just like we have Convolution Operation for images, and Recurrent Networks for Text, why can’t we use Decision Trees as a basic building block for Tabular data?
The answer is pretty straightforward: trees are not differentiable, and without gradients flowing through the network, back-prop bombs. But this is where researchers started to bang their heads: how do we make Decision Trees differentiable?
## Differentiable Decision Trees
In 2015, Kontschieder et al., presented Deep Neural Decision Forests[1], which had a decision tree-like structure, but differentiable.
Let’s take a step back and think about a Decision Tree.
A typical Decision Tree looks something like the picture above. Simplistically, it is a collection of decision nodes ($d_i$) and leaf nodes ($\pi_i$) which together act as a function,
$y = \mathcal{F}(\theta, X)$, where $\mathcal{F}$ is the decision tree, parametrized by $\theta$, which maps input $X$ to output $y$.
Let’s look at the leaf nodes first, because it’s easier. In traditional Decision Trees, each leaf node typically holds a distribution over the class labels. This is right up the alley of a Sigmoid or a Softmax activation. So we could really replace the leaf nodes with a SoftMax layer and make that node differentiable.
Now, let’s take a deeper look at a decision node. The core purpose of a decision node is to decide whether to route the sample to the left or right. Let’s call these decisions, $d_i \text{ and } \bar{d_i}$. And for this decision, it uses a particular feature($f$) and a threshold($b$)- these are the parameters of the node.
In traditional Decision Trees, this decision is a binary one; it’s either right or left, 0 or 1. But this is deterministic and not differentiable. Now, what if we relax this and make the routing stochastic? Instead of a sharp 1 or 0, it’s going to be a number between 0 and 1. This feels like familiar territory, doesn’t it? The Sigmoid function?
That’s exactly what Kontschieder et al. proposed. If we relax the strict 0-1 decision to a stochastic one with a sigmoid function, the node becomes differentiable.
Now we know how a single node (decision or leaf) works. Let’s put them all together. The red path in the diagram above is one of the paths in the decision tree. In the deterministic version, a sample either goes through this route or it doesn’t. If we think about the same process in probabilistic terms, the probability of taking each edge along the path must be 1 for the sample to reach the leaf node at the end of the path.
In the probabilistic paradigm, we find the probability that a sample goes left or right($d_i \text{ and } \bar{d_i}$) and multiply all of those along the path to get the probability that a sample reaches the leaf node.
Probability of the sample reaching the highlighted leaf node would be ($d_1 \cdot \bar{d_2} \cdot \bar{d_5}$).
Now, we just need to take the expected value of all the leaf nodes using the probabilities of each of the decision paths to get the prediction for a sample.
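Putting the two relaxations together, the path probabilities can be computed like this (a sketch with a made-up parameter layout, not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def leaf_probabilities(x, feat_idx, thresholds, depth):
    """Probability of a sample reaching each of the 2**depth leaves.

    feat_idx / thresholds hold one (feature, threshold) pair per internal
    node in level order; d_i = sigmoid(x[f_i] - b_i) is the soft
    'go left' decision at node i.
    """
    probs = np.ones(1)
    node = 0
    for level in range(depth):
        nxt = np.empty(2 ** (level + 1))
        for i in range(2 ** level):
            d = sigmoid(x[feat_idx[node]] - thresholds[node])
            nxt[2 * i] = probs[i] * d            # route left
            nxt[2 * i + 1] = probs[i] * (1 - d)  # route right
            node += 1
        probs = nxt
    return probs
```

Since every split distributes a node's mass between its two children, the leaf probabilities always sum to 1, and the prediction is just the probability-weighted average of the leaf distributions.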
Now that you’ve got an intuition about how Decision Tree-like structures were derived to be used in Neural Networks, let’s talk about the NODE model[3].
## Neural Oblivious Trees
An Oblivious Tree is a decision tree which is grown symmetrically: the same feature and threshold are used to split the learning instances into the left and right partitions at each level of the tree. CatBoost, a prominent gradient boosting implementation, uses oblivious trees. Oblivious Trees are particularly interesting because they can be reduced to a Decision Table with $2^d$ cells, where $d$ is the depth of the tree. This simplifies things pretty neatly.
Each Oblivious Decision Tree(ODT) outputs one of $2^d$ responses, where $d$ is the depth of the tree. This is done by using $d$ feature-threshold combinations, which are the parameters of the ODT.
Formally, the ODT can be defined as :
$h(x) = R[\mathbb{I}(f_1(x)-b_1), \dots, \mathbb{I}(f_d(x)-b_d)]$, where $\mathbb{I}$ denotes the Heaviside step function (0 for a negative argument, 1 for a positive one)
Now, to make the tree output differentiable, we should replace the splitting feature choice ($f$) and the comparison against the threshold ($b$) by their continuous counterparts.
In traditional trees, the choice of the feature to split a node on is a deterministic decision. But for differentiability, we choose a softer approach, i.e. a weighted sum of the features, where the weights are learned. Normally, we would think of a Softmax choice over the features, but we want sparse feature selection, i.e. we want the decision to be made on only a handful (preferably 1) of the features. To that effect, NODE uses the $\alpha$-entmax transformation (Peters et al., 2019) over a learnable feature selection matrix $F \in \mathbb{R}^{d \times n}$
Similarly, we relax the Heaviside function as a two-class entmax. As different features can have different characteristic scales, we scale the entmax argument with a parameter $\tau$:

$c_i(x) = \sigma_{\alpha}\left( \frac{f_i(x)-b_i}{\tau_i} \right )$, where $b_i$ and $\tau_i$ are learnable parameters for thresholds and scales respectively.
We know that a tree has two sides, and with $c_i$ we have only defined one of them. So to complete the tree, we stack $c_i$ and $(1-c_i)$ one on top of the other. Now we define a “choice” tensor $C$ as the outer product over all $d$ levels:
$C(x) = \begin{bmatrix} c_1(x) \\ 1-c_1(x) \end{bmatrix} \otimes \begin{bmatrix} c_2(x) \\ 1-c_2(x) \end{bmatrix} \otimes \dots \otimes \begin{bmatrix} c_d(x) \\ 1-c_d(x) \end{bmatrix}$
This gives us the choice weights, or intuitively the probabilities of each of the $2^d$ outputs held in the Response tensor. The prediction then reduces to a weighted sum of the Response tensor, weighted by the Choice tensor:
$h(x) = \sum_{i_1, \dots, i_d \in \{0,1\}^d} R_{i_1, \dots, i_d} \cdot C_{i_1, \dots, i_d}(x)$
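A single ODT forward pass can be sketched as below. This is an illustration of the formulas only: it substitutes a plain softmax and sigmoid where the paper uses $\alpha$-entmax, and all parameter shapes are assumptions.

```python
import numpy as np

def odt_forward(x, F, b, tau, R):
    """One oblivious decision tree, forward pass.

    x: (n,) input; F: (d, n) feature-selection logits; b, tau: (d,)
    thresholds and scales; R: response tensor of shape (2,) * d.
    """
    W = np.exp(F - F.max(axis=1, keepdims=True))
    W /= W.sum(axis=1, keepdims=True)            # softmax rows (entmax in paper)
    f = W @ x                                    # soft feature choice, (d,)
    c = 1.0 / (1.0 + np.exp(-(f - b) / tau))     # relaxed Heaviside, (d,)
    # choice tensor: outer product of the [c_i, 1 - c_i] pairs
    C = np.array([c[0], 1.0 - c[0]])
    for i in range(1, len(c)):
        C = np.multiply.outer(C, np.array([c[i], 1.0 - c[i]]))
    return float((R * C).sum())                  # weighted sum of responses
```

Because the choice tensor entries are non-negative and sum to 1, the output is always a convex combination of the $2^d$ responses.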
The entire setup looks like the below diagram:
## Neural Oblivious Decision Ensembles
The jump from an individual tree to a “forest” is pretty simple. If we have $m$ trees in the ensemble, the final output is the concatenation of the $m$ individual tree outputs $\left[ \hat{h}_1(x), \dots, \hat{h}_m(x) \right]$
## Going Deeper with NODE
In addition to developing the core module (the NODE layer), they also propose a deep version, where multiple NODE layers are stacked on top of each other with residual connections: the input features and the outputs of all previous layers are concatenated and fed into the next NODE layer, and so on. Finally, the outputs from all the layers are averaged (similar to a Random Forest).
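The dense stacking can be mocked up like so (layers are stand-in callables; this only illustrates the wiring, with names and shapes assumed):

```python
import numpy as np

def deep_node_forward(x, layers):
    """Dense NODE stack: each layer sees the raw input concatenated with
    every previous layer's output; the prediction averages all tree
    outputs across layers."""
    inp = np.asarray(x, dtype=float)
    tree_outputs = []
    for layer in layers:
        out = np.asarray(layer(inp), dtype=float)  # (n_trees,) for this layer
        tree_outputs.append(out)
        inp = np.concatenate([inp, out])           # dense (residual) connection
    return float(np.mean(np.concatenate(tree_outputs)))
```

Note how the input to each successive layer grows: with 2 raw features and one-tree layers, the second layer receives a length-3 vector.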
## Training and Experiments
### Data Processing
In all the experiments in the paper[3], they transformed each of the features to follow a normal distribution using a Quantile Transformation. This step was important for stable training and faster convergence.
### Initialization
Before training the network, they propose a data-aware initialization to get good initial parameters. The feature selection matrix ($F$) is initialized uniformly, while the thresholds ($b$) are initialized with random feature values $f_i(x)$. The scales $\tau_i$ are initialized in such a way that all the samples in the first batch fall in the linear region of the two-sided entmax and hence receive non-zero gradients. Finally, the response tensors are initialized with a standard normal distribution.
### Experiments and Results
The paper performs experiments with 6 datasets – Epsilon, YearPrediction, Higgs, Microsoft, Yahoo, and Click. They compared NODE with CatBoost, XGBoost and FCNN.
First, they compared all the algorithms with their default hyperparameters. The default architecture of NODE was a single layer of 2048 trees of depth 6; these parameters were inherited from the CatBoost defaults.

Then they tuned all the algorithms and compared again.
## Code and Implementation
The authors have made the implementation available in a ready to use Module in PyTorch here.
It is also implemented in the new library I released, PyTorch Tabular, along with a few other State of the Art algorithms for Tabular data. Check it out here:
## References
1. P. Kontschieder, M. Fiterau, A. Criminisi and S. R. Bulò, “Deep Neural Decision Forests,” 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1467-1475, doi: 10.1109/ICCV.2015.172.
2. Ben Peters, Vlad Niculae, André F. T. Martins, “Sparse Sequence-to-Sequence Models,” 2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics(ACL), pp. 1504-1519, doi: 10.18653/v1/P19-1146.
3. Sergei Popov, Stanislav Morozov, Artem Babenko, “Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data,” arXiv:1909.06312 [cs.LG] | 2023-02-03 10:12:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 39, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5806233882904053, "perplexity": 847.535137520436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.66/warc/CC-MAIN-20230203091020-20230203121020-00484.warc.gz"} |
http://tex.stackexchange.com/questions/11616/make-the-that-opens-a-math-environment-replace-preceding-spaces-with-a-single | # Make the `$` that opens a math environment replace preceding spaces with a single non-breakable?

I find myself spending effort in placing the tilde character before every inlined math I type, e.g.,

    On 1899, G. Pick proved that
    the area of a simple polygon~$P$ whose vertices are
    located on the integer grid is~$i + b/2 -1$,
    where~$i$ is the number of grid points in the interior of%
    ~$P$ and~$b$ is the number of grid points on the
    the boundary of~$P$.

Can anyone think of a way of redefining the opening `$` character so that it would do that automatically? Many will find this useful, methinks. Ideally, I would write instead just
On 1899, G. Pick proved that
the area of a simple polygon $P$ whose vertices are
located on the integer grid is $i + b/2 -1$,
where $i$ is the number of grid points in the interior of
$P$ and $b$ is the number of grid points on the
the boundary of $P$.
I think, but not sure, that this is doable with the following method:
1. Use a character other than `$` for inline math (or refer internally to `\(` and `\)`).
2. Define `$` as an active character which `\unskip`s previous spaces, replacing them with a non-breakable space.
3. After un-skipping, let `$` redefine itself to do the closing part.
4. The closing `$` will invoke `\)` and then will redefine `$` as in 2.

An obvious bug of this method would be that displayed math wrapped with `$$` would misbehave. I have an unpleasant feeling that there might be more.

- Yes, 2. and then 1. popped into my mind too. One problem with global catcode changes is that they can mess with other packages whose authors really didn't think that `$` would ever be anything other than catcode 3. My tikz-timing package for example would stop supporting `$` characters in timing strings. You might just do search and replace: `s/ \$/~\$/g` instead over your documents. – Martin Scharrer Feb 21 '11 at 9:00
- The catcode changes will also not work inside macro arguments and environments which collect their body (e.g. tabularx, environ environments, ...). – Martin Scharrer Feb 21 '11 at 9:03
- @Martin — Hmmm, I wonder if environ should use \scantokens internally to help overcome this problem (doesn't fix it entirely, of course) – Will Robertson Feb 21 '11 at 9:21
- @Yossi: I'd definitely not put a ~ in front of `$i + b/2 -1$`. So I'd make the question harder and ask for a ~ only in front of a single token! – Hendrik Vogt Feb 21 '11 at 10:03
- I think I'd go for a new command, `\m` that expanded to `\unskip~\(#1\)` (or whatever the correct syntax is) since, as others have pointed out, you don't actually want to do this all the time. – Loop Space Feb 21 '11 at 10:59

## 2 Answers

Here's an example using the active characters approach. I'm not against this sort of idea for your own documents, but do note that it's probably not the most robust approach (like Martin comments, this could affect the inner workings of other packages—doing it \AtBeginDocument is probably a good idea).
    \documentclass{article}
    \makeatletter
    \let\mathshift=$
    \catcode`\$=\active
    \protected\def$#1${%
      \ifdim\lastskip>\z@
        \unskip\textvisiblespace
      \fi
      \mathshift #1\mathshift
    }
    \DeclareRobustCommand\({% from fixltx2e
      \ifdim\lastskip>\z@
        \unskip\textvisiblespace
      \fi
      \ifmmode \@badmath \else \mathshift \fi
    }
    \makeatother
    \begin{document}
    \tableofcontents
    \section{Introduction}
    Hello $a+b$
    \section{Not sure $x=y$. $x=z$ though!}
    And \( f(x) = x^2 \) too.
    We'd better ensure no space gets added if there's
    nothing before the maths:
    \begin{tabular}{|c|c|}
    \hline
    $a$ & $b$ \\
    $c$ & $d$ \\
    \hline
    \end{tabular}
    \end{document}

In order to make the example actually show the idea, I've used a visible space instead of a hard space. Replace \textvisiblespace above with ~ for your actual work :)

Also, I've assumed above that you're not using $$...$$ for display math, as strictly speaking it's not valid LaTeX syntax.

One way of not breaking other people's packages is to rely on the fact that package writers tend to be conservative, and adhere to ASCII. So, I would redefine the same set of macros that Will did, but have these parameterized by the math character, which can default, or not default, to `$`.
I am clueless as to how one can do this, but just imagine you could write
\usepackage[begin=〈, end=〉]{nbsm}
and then proceed, wrapping your inlined math with 〈 and 〉. Or if you are the daring type,
\usepackage[begin=$, end=$]{nbsm}
Puh, I'm not sure, but is this really an answer? – Hendrik Vogt Feb 21 '11 at 11:43
@Hendrik, call it a partial answer if you like. Or, think of it as a full answer (rewrite Will's code with 〈 and 〉), which raises, on its way, another small question. – Yossi Gil Feb 21 '11 at 14:55 | 2015-08-05 10:31:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196639060974121, "perplexity": 1543.153700704422}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438044160065.87/warc/CC-MAIN-20150728004240-00183-ip-10-236-191-2.ec2.internal.warc.gz"} |
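For completeness, the lighter-weight route from Loop Space's comment (a dedicated command instead of an active character) might look like this; an untested sketch, not code from either answer:

```latex
% \unskip removes the preceding inter-word glue, ~ puts back a tie,
% and \(...\) opens and closes inline math.
\newcommand{\m}[1]{\unskip~\(#1\)}
% usage:
%   ... the area of a simple polygon\m{P} whose vertices are
%   located on the integer grid is\m{i + b/2 - 1}, ...
```

This sidesteps the catcode problems raised in the comments, at the cost of typing `\m{...}` instead of `$...$`.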
https://73657-843222-gh.circle-artifacts.com/0/doc/modules/generated/sklearn.utils.extmath.weighted_mode.html | # sklearn.utils.extmath.weighted_mode

sklearn.utils.extmath.weighted_mode(a, w, axis=0)

Returns an array of the weighted modal (most common) value in a.
If there is more than one such value, only the first is returned. The bin-count for the modal bins is also returned.
This is an extension of the algorithm in scipy.stats.mode.
Parameters

a : array_like
    n-dimensional array of which to find mode(s).
w : array_like
    n-dimensional array of weights for each value.
axis : int, optional
    Axis along which to operate. Default is 0, i.e. the first axis.

Returns

vals : ndarray
    Array of modal values.
score : ndarray
    Array of weighted counts for each mode.
Examples
>>> from sklearn.utils.extmath import weighted_mode
>>> x = [4, 1, 4, 2, 4, 2]
>>> weights = [1, 1, 1, 1, 1, 1]
>>> weighted_mode(x, weights)
(array([4.]), array([3.]))
The value 4 appears three times: with uniform weights, the result is simply the mode of the distribution.
>>> weights = [1, 3, 0.5, 1.5, 1, 2] # deweight the 4's
>>> weighted_mode(x, weights)
(array([2.]), array([3.5]))
The value 2 has the highest score: it appears twice with weights of 1.5 and 2: the sum of these is 3.5. | 2020-04-09 01:54:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5175834894180298, "perplexity": 1949.1129555661505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371826355.84/warc/CC-MAIN-20200408233313-20200409023813-00220.warc.gz"} |
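The underlying computation is easy to restate in pure Python (an illustrative sketch for the 1-D case, not the sklearn implementation, which works on ndarrays along an axis):

```python
from collections import defaultdict

def weighted_mode_1d(values, weights):
    """Weighted mode of a 1-D sequence: the value whose weights sum highest."""
    score = defaultdict(float)
    for v, w in zip(values, weights):
        score[v] += w
    # dicts preserve insertion order, so max() keeps the first of tied values,
    # mirroring "only the first is returned"
    return max(score.items(), key=lambda kv: kv[1])
```

With uniform weights this degenerates to the ordinary mode, exactly as in the first example above.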
https://www.scipedia.com/public/Kadhim-Hussein_et_al_2016a | ## Abstract
Numerical computation of unsteady laminar three-dimensional natural convection and entropy generation in an inclined cubical trapezoidal air-filled cavity is performed for the first time in this work. The vertical right and left sidewalls of the cavity are maintained at constant cold temperatures. The lower wall is subjected to a constant hot temperature, while the upper one is considered insulated. Computations are performed for Rayleigh numbers varied as 10³ ⩽ Ra ⩽ 10⁵, while the trapezoidal cavity inclination angle is varied as 0° ⩽ Φ ⩽ 180°. The Prandtl number is considered constant at Pr = 0.71. The second law of thermodynamics is applied to obtain the thermodynamic losses inside the cavity due to both heat transfer and fluid friction irreversibilities. The variation of local and average Nusselt numbers is presented and discussed, while streamlines, isotherms and entropy contours are presented in both two- and three-dimensional patterns. The results show that when the Rayleigh number increases, the flow patterns change, especially in the three-dimensional results, and the flow circulation increases. Also, the inclination angle effect on the total entropy generation becomes insignificant when the Rayleigh number is low. Moreover, when the Rayleigh number increases the average Nusselt number increases.
## Keywords
Transient natural convection; Trapezoidal cavity; Three-dimensional flow; Entropy generation; Second law analysis
## Nomenclature
Symbol- Description (unit)
g- gravitational acceleration (m/s²)
H- height of the trapezoidal cubical cavity (m)
k- fluid thermal conductivity (W/m °C)
n- unit vector normal to the wall
L- length of the cubical trapezoidal cavity (m)
Nu- Nusselt number
${\textstyle N_{s}}$- dimensionless local generated entropy
Pr- Prandtl number
${\textstyle {\overset {\rightarrow }{q}}}$- heat flux (W/m²)
Ra- Rayleigh number
S- dimensionless entropy generation
${\textstyle S_{\mbox{gen}}^{'}}$- generated entropy (kJ/kg K)
T- dimensionless temperature ${\textstyle \left[(T^{'}-T_{c}^{'})/(T_{h}^{'}-T_{c}^{'})\right]}$
To- bulk temperature ${\textstyle \left[T_{o}=(T_{c}^{'}+T_{h}^{'})/2\right]}$
t- dimensionless time (${\textstyle t^{'}\cdot \alpha /L^{2}}$)
${\textstyle {\overset {\rightarrow }{V}}}$- dimensionless velocity vector ${\textstyle \left({\overset {\rightarrow }{V}}^{'}\cdot L/\alpha \right)}$
x- dimensionless Cartesian coordinate in x -direction (${\textstyle x^{'}/L}$)
y- dimensionless Cartesian coordinate in y -direction (${\textstyle y^{'}/L}$)
z- dimensionless Cartesian coordinate in z -direction (${\textstyle z^{'}/L}$)
### Greek symbols
α- thermal diffusivity (m²/s)
β- thermal expansion coefficient (K−1)
${\textstyle \Delta T}$- dimensionless temperature difference
${\textstyle {\phi }^{'}}$- dissipation function
${\textstyle \Phi }$- trapezoidal cavity inclination angle (degree)
υ- fluid kinematic viscosity (m²/s)
${\textstyle {\overset {\rightarrow }{\psi }}}$- dimensionless vector potential (${\textstyle {\overset {\rightarrow }{\psi }}^{'}/\alpha }$)
${\textstyle \mu }$- dynamic viscosity (kg/m s)
${\textstyle {\overset {\rightarrow }{\omega }}}$- dimensionless vorticity (${\textstyle {\overset {\rightarrow }{\omega }}^{'}\cdot \alpha /L^{2}}$)
### Subscripts
av- average
c- cold
fr- friction
h- hot
Loc- local
th- thermal
tot- total
x, y, z- Cartesian coordinates
### Superscripts
${\textstyle {'}}$- dimensional variable
## 2. Mathematical model
### 2.1. Definition of geometrical configuration
The three-dimensional natural convection and entropy generation problem inside an inclined cubical trapezoidal cavity of height (H) and length (L) filled with air [Pr = 0.71] is investigated in the present work as shown in Fig. 1. The vertical right and left sidewalls of the trapezoidal cavity are kept at isothermal cold temperatures (Tc). The upper wall is considered insulated, while the lower one is subjected to an isothermal hot temperature (Th). The flow field inside the cubical cavity is considered three-dimensional, Newtonian, unsteady, incompressible and laminar. On the other side, heat transfer due to thermal gradients together with friction effects causes the cubical trapezoidal cavity to lose energy, and as a result entropy generation is induced. The Rayleigh number is varied as 10³ ⩽ Ra ⩽ 10⁵, while the trapezoidal cavity inclination angle is varied as 0° ⩽ Φ ⩽ 180°. The Rayleigh number relates the relative magnitude of viscous and buoyancy forces in the fluid; natural convection occurs when buoyancy forces are greater than viscous forces. The fluid inside the cubical cavity is assumed to have constant thermo-physical properties, and the Boussinesq approximation is used to model the density variation.
Figure 1. Schematic diagram of the present problem.
### 2.2. Mathematical model and the numerical solution
To start the numerical approach, the vorticity-vector potential formalism ${\textstyle \left({\overset {\rightarrow }{\psi }}-{\overset {\rightarrow }{\omega }}\right)}$ is considered in the present work which allows in a three-dimensional geometry together with the elimination of the pressure term. Vector potential and the vorticity are defined respectively by the following two equations:
${\displaystyle {\overset {\rightarrow }{\omega }}^{'}={\overset {\rightarrow }{\nabla }}\times {\overset {\rightarrow }{V}}^{'}\quad \quad {\mbox{and}}\quad \quad {\overset {\rightarrow }{V}}^{'}=}$${\displaystyle {\overset {\rightarrow }{\nabla }}\times {\overset {\rightarrow }{\psi }}^{'}}$
(1)
The construction of these equations is described in more details in Ghachem et al. [29]. The dimensionless governing equations can be written as follows:
${\displaystyle -{\overset {\rightarrow }{\omega }}={\nabla }^{2}{\overset {\rightarrow }{\psi }}}$
(2)
${\displaystyle {\frac {\partial {\overset {\rightarrow }{\omega }}}{\partial t}}+}$${\displaystyle ({\overset {\rightarrow }{V}}\cdot \nabla ){\overset {\rightarrow }{\omega }}-}$${\displaystyle ({\overset {\rightarrow }{\omega }}\cdot \nabla ){\overset {\rightarrow }{V}}=}$${\displaystyle \Delta {\overset {\rightarrow }{\omega }}+{\mbox{Ra}}\cdot Pr\cdot \left[{\begin{array}{l}{\frac {\partial T}{\partial z}}cos\Phi \\-{\frac {\partial T}{\partial z}}sin\Phi \\-{\frac {\partial T}{\partial x}}cos\Phi +{\frac {\partial T}{\partial y}}sin\Phi \end{array}}\right]}$
(3)
${\displaystyle {\frac {\partial T}{\partial t}}+{\overset {\rightarrow }{V}}\cdot \nabla T=}$${\displaystyle {\nabla }^{2}T}$
(4)
where
${\displaystyle Pr={\frac {\nu }{\alpha }}\quad {\mbox{and}}\quad {\mbox{Ra}}=}$${\displaystyle {\frac {g\beta (T_{h}-T_{C})L^{3}}{\nu \cdot \alpha }}}$
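As a quick sanity check on the studied parameter range, the Rayleigh number definition above is easy to evaluate numerically (the air property values below are typical room-temperature figures chosen for illustration, not data from the paper):

```python
def rayleigh(g, beta, dT, L, nu, alpha):
    """Ra = g * beta * (Th - Tc) * L**3 / (nu * alpha)."""
    return g * beta * dT * L**3 / (nu * alpha)

# air near 300 K (illustrative values): a 5 cm cavity with a 10 K
# temperature difference already sits near the top of the studied range
ra = rayleigh(g=9.81, beta=1 / 300, dT=10.0, L=0.05, nu=1.5e-5, alpha=2.1e-5)
```

The strong L³ dependence explains why even small cavities span several decades of Ra as the temperature difference or size changes.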
Moreover, the dimensionless temperature, dimensionless time, dimensionless velocity, dimensionless vector potential and dimensionless vorticity are defined respectively as
${\displaystyle T={\frac {\left(T^{'}-T_{c}^{'}\right)}{\left(T_{h}^{'}-T_{c}^{'}\right)}}\quad t=}$${\displaystyle {\frac {t^{'}\cdot \alpha }{L^{2}}}\quad {\overset {\rightarrow }{V}}=}$${\displaystyle {\frac {{\overset {\rightarrow }{V}}^{'}\cdot L}{\alpha }}\quad {\overset {\rightarrow }{\psi }}=}$${\displaystyle {\frac {{\overset {\rightarrow }{\psi }}^{'}}{\alpha }}\quad {\overset {\rightarrow }{\omega }}=}$${\displaystyle {\frac {{\overset {\rightarrow }{\omega }}^{'}\cdot \alpha }{L^{2}}}}$
while, the dimensionless Cartesian coordinates in x, y and z directions are defined as
${\displaystyle x={\frac {x^{'}}{L}}\quad y={\frac {y^{'}}{L}}\quad z=}$${\displaystyle {\frac {z^{'}}{L}}}$
#### 2.2.1. Boundary conditions
The boundary conditions for the present problem are given as follows:
Temperature:
${\displaystyle T=1\quad {\mbox{at}}\quad x=0\quad [{\mbox{at bottom wall}}]{\mbox{,}}\quad T=}$${\displaystyle 0\quad {\mbox{at the inclined sidewalls}}}$
${\displaystyle {\frac {\partial T}{\partial n}}=0\quad {\mbox{on all other walls}}\quad ({\mbox{adiabatic}})}$
Vorticity:
${\displaystyle {\omega }_{x}=0{\mbox{,}}\quad {\omega }_{y}=-{\frac {\partial V_{z}}{\partial x}}{\mbox{,}}\quad {\omega }_{z}=}$${\displaystyle {\frac {\partial V_{y}}{\partial x}}\quad {\mbox{at}}\quad x=}$${\displaystyle 0\quad {\mbox{and}}\quad 1}$
${\displaystyle {\omega }_{x}={\frac {\partial V_{z}}{\partial y}}{\mbox{,}}\quad {\omega }_{y}=}$${\displaystyle 0{\mbox{,}}\quad {\omega }_{z}=-{\frac {\partial V_{x}}{\partial y}}\quad {\mbox{at inclined sidewalls}}}$
${\displaystyle {\omega }_{x}=-{\frac {\partial V_{y}}{\partial z}}{\mbox{,}}\quad {\omega }_{y}=}$${\displaystyle {\frac {\partial V_{x}}{\partial z}}{\mbox{,}}\quad {\omega }_{z}=}$${\displaystyle 0\quad {\mbox{at}}\quad z=0\quad {\mbox{and}}\quad 1}$
Vector potential:
${\displaystyle {\frac {\partial {\Psi }_{x}}{\partial x}}={\Psi }_{y}={\Psi }_{z}=}$${\displaystyle 0\quad {\mbox{at}}\quad x=0\quad {\mbox{and}}\quad 1}$
${\displaystyle {\Psi }_{x}={\frac {\partial {\Psi }_{y}}{\partial y}}={\Psi }_{z}=}$${\displaystyle 0\quad {\mbox{at inclined sidewalls}}}$
${\displaystyle {\Psi }_{x}={\Psi }_{y}={\frac {\partial {\Psi }_{z}}{\partial z}}=}$${\displaystyle 0\quad {\mbox{at}}\quad z=0\quad {\mbox{and}}\quad 1}$
Velocity:
${\displaystyle V_{x}=V_{y}=V_{z}=0\quad {\mbox{on all cubical trapezoidal cavity walls}}}$
The generated entropy (${\textstyle S_{\mbox{gen}}^{'}}$) is written by Kolsi et al. [30] in the following form:
${\displaystyle S_{\mbox{gen}}^{'}=-{\frac {1}{T^{{'}2}}}\cdot {\overset {\rightarrow }{q}}\cdot {\overset {\rightarrow }{\nabla }}T^{'}+}$${\displaystyle {\frac {\mu }{T^{'}}}\cdot {\phi }^{'}}$
(5)
The first term represents the generated entropy due to the temperature gradient, while the second one is due to the friction effects. The heat flux vector is given by
${\displaystyle {\overset {\rightarrow }{q}}=-k\cdot {\overset {\rightarrow }{\nabla }}T^{'}}$
(6)
The dissipation function (${\textstyle {\phi }^{'}}$) is written as follows:
${\displaystyle {\phi }^{'}=2\left[{\left({\frac {\partial V_{x}^{'}}{\partial x^{'}}}\right)}^{2}+\right.}$${\displaystyle \left.{\left({\frac {\partial V_{y}^{'}}{\partial y^{'}}}\right)}^{2}+\right.}$${\displaystyle \left.{\left({\frac {\partial V_{z}^{'}}{\partial z^{'}}}\right)}^{2}\right]+}$${\displaystyle {\left({\frac {\partial V_{y}^{'}}{\partial x^{'}}}+{\frac {\partial V_{x}^{'}}{\partial y^{'}}}\right)}^{2}+}$${\displaystyle {\left({\frac {\partial V_{z}^{'}}{\partial y^{'}}}+{\frac {\partial V_{y}^{'}}{\partial z^{'}}}\right)}^{2}+}$${\displaystyle {\left({\frac {\partial V_{x}^{'}}{\partial z^{'}}}+{\frac {\partial V_{z}^{'}}{\partial x^{'}}}\right)}^{2}}$
(7)
Therefore, the generated entropy ${\textstyle \left(S_{\mbox{gen}}^{'}\right)}$ is written as
${\displaystyle S_{\mbox{gen}}^{'}={\frac {k}{T_{0}^{2}}}\left[{\left({\frac {\partial T^{'}}{\partial x^{'}}}\right)}^{2}+\right.}$${\displaystyle \left.{\left({\frac {\partial T^{'}}{\partial y^{'}}}\right)}^{2}+\right.}$${\displaystyle \left.{\left({\frac {\partial T^{'}}{\partial z^{'}}}\right)}^{2}\right]+}$${\displaystyle {\frac {\mu }{T_{0}}}\left\{2\left[{\left({\frac {\partial V_{x}^{'}}{\partial x^{'}}}\right)}^{2}+\right.\right.}$${\displaystyle \left.\left.{\left({\frac {\partial V_{y}^{'}}{\partial y^{'}}}\right)}^{2}+\right.\right.}$${\displaystyle \left.\left.{\left({\frac {\partial V_{z}^{'}}{\partial z^{'}}}\right)}^{2}\right]+\right.}$${\displaystyle \left.{\left({\frac {\partial V_{y}^{'}}{\partial x^{'}}}+{\frac {\partial V_{x}^{'}}{\partial y^{'}}}\right)}^{2}+\right.}$${\displaystyle \left.{\left({\frac {\partial V_{z}^{'}}{\partial y^{'}}}+{\frac {\partial V_{y}^{'}}{\partial z^{'}}}\right)}^{2}+\right.}$${\displaystyle \left.{\left({\frac {\partial V_{x}^{'}}{\partial z^{'}}}+{\frac {\partial V_{z}^{'}}{\partial x^{'}}}\right)}^{2}\right\}}$
(8)
The dimensionless local generated entropy (${\textstyle N_{s}}$) is written as follows:
${\displaystyle N_{s}=S_{\mbox{gen}}^{'}{\frac {1}{k}}{\left({\frac {{LT}_{0}}{\Delta T}}\right)}^{2}}$
(9)
where
${\displaystyle N_{s}=\left[{\left({\frac {\partial T}{\partial x}}\right)}^{2}+\right.}$${\displaystyle \left.{\left({\frac {\partial T}{\partial y}}\right)}^{2}+\right.}$${\displaystyle \left.{\left({\frac {\partial T}{\partial z}}\right)}^{2}\right]+}$${\displaystyle \varphi \cdot \left\{2\left[{\left({\frac {\partial V_{x}}{\partial x}}\right)}^{2}+\right.\right.}$${\displaystyle \left.\left.{\left({\frac {\partial V_{y}}{\partial y}}\right)}^{2}+\right.\right.}$${\displaystyle \left.\left.{\left({\frac {\partial V_{z}}{\partial z}}\right)}^{2}\right]+\right.}$${\displaystyle \left.\left[{\left({\frac {\partial V_{y}}{\partial x}}+{\frac {\partial V_{x}}{\partial y}}\right)}^{2}+\right.\right.}$${\displaystyle \left.\left.{\left({\frac {\partial V_{z}}{\partial y}}+{\frac {\partial V_{y}}{\partial z}}\right)}^{2}+\right.\right.}$${\displaystyle \left.\left.{\left({\frac {\partial V_{x}}{\partial z}}+{\frac {\partial V_{z}}{\partial x}}\right)}^{2}\right]\right\}}$
(10)
where ${\textstyle \varphi ={\frac {\mu {\alpha }^{2}T_{0}}{L^{2}k\Delta T^{2}}}}$ is the irreversibility coefficient.
The first term of (Ns) represents the local irreversibility due to the temperature gradients and is denoted (Ns–th). The second term represents the contribution of the viscous effects to the irreversibility and is denoted (Ns–fr). Eq. (10) thus describes the profile and distribution of the dimensionless local generated entropy (Ns). The total dimensionless generated entropy (${\textstyle S_{\mbox{tot}}}$) is written as
${\displaystyle S_{\mbox{tot}}={\int }_{v}N_{s}dv={\int }_{v}\left(N_{s-{\mbox{th}}}+\right.}$${\displaystyle \left.N_{s-{\mbox{fr}}}\right)dv=S_{\mbox{th}}+S_{\mbox{fr}}}$
(11)
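As an illustration of Eqs. (10) and (11), the local entropy generation can be evaluated from finite-difference gradients of the temperature and velocity fields and summed over the volume. The sketch below is not the paper's FORTRAN implementation: the function name, the use of NumPy, and the simple Riemann-sum quadrature are assumptions.

```python
import numpy as np

def entropy_generation(T, Vx, Vy, Vz, h, phi):
    """Evaluate the dimensionless local entropy generation Ns of Eq. (10)
    on a uniform grid of spacing h (index order: x, y, z), split into its
    thermal and friction parts, and integrate over the volume as in Eq. (11)
    with a simple Riemann sum (dv = h**3)."""
    dTx, dTy, dTz = np.gradient(T, h)
    Ns_th = dTx**2 + dTy**2 + dTz**2          # thermal irreversibility
    dVx = np.gradient(Vx, h)                  # [d/dx, d/dy, d/dz] of Vx
    dVy = np.gradient(Vy, h)
    dVz = np.gradient(Vz, h)
    Ns_fr = phi * (2.0 * (dVx[0]**2 + dVy[1]**2 + dVz[2]**2)
                   + (dVy[0] + dVx[1])**2     # (dVy/dx + dVx/dy)^2
                   + (dVz[1] + dVy[2])**2     # (dVz/dy + dVy/dz)^2
                   + (dVx[2] + dVz[0])**2)    # (dVx/dz + dVz/dx)^2
    dv = h**3
    S_th = Ns_th.sum() * dv
    S_fr = Ns_fr.sum() * dv
    return S_th, S_fr, S_th + S_fr            # S_th, S_fr, S_tot of Eq. (11)
```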
The local and average Nusselt numbers are given by
${\displaystyle {\mbox{Nu}}={\left.{\frac {\partial T}{\partial y}}\right|}_{y=0}\quad {\mbox{and}}\quad {\mbox{Nu}}_{\mbox{av}}={\frac {L}{W}}{\int }_{0}^{1}{\int }_{0}^{W/L}{\mbox{Nu}}\cdot dx\cdot dz}$
(12)
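Numerically, the wall derivative in Eq. (12) is typically evaluated with a one-sided difference and then averaged over the heated wall. A minimal sketch follows, assuming a uniform grid of spacing h with index order (x, y, z) and a unit-square wall (W/L = 1); the function name is an assumption, not the paper's code.

```python
import numpy as np

def nusselt(T, h):
    """Local Nusselt number at the lower wall (y = 0) as defined in Eq. (12),
    using a second-order one-sided difference, and its area average over the
    wall (assumed a unit square here, i.e. W/L = 1)."""
    # index order (x, y, z); forward difference in y at the wall
    Nu_loc = (-3.0 * T[:, 0, :] + 4.0 * T[:, 1, :] - T[:, 2, :]) / (2.0 * h)
    Nu_av = Nu_loc.mean()   # Riemann-sum approximation of the double integral
    return Nu_loc, Nu_av
```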
The mathematical model described above was implemented in a FORTRAN program. The control volume finite difference method is used to discretize the governing equations (2), (3), (4) and (10). The central-difference scheme is used for the convective terms, while a fully implicit procedure is used to discretize the temporal derivatives. The grids are uniform in all directions, with nodes clustered near the boundaries. The successive relaxation iteration scheme is used to solve the resulting non-linear algebraic equations. A time step of 10−4 and a spatial mesh of 71 × 71 × 71 are used for all the numerical tests. The solution is considered converged when the following criterion is satisfied at each time step:
${\displaystyle \sum _{i}^{1{\mbox{,}}2{\mbox{,}}3}{\frac {max\left|{\psi }_{i}^{n}-{\psi }_{i}^{n-1}\right|}{max\left|{\psi }_{i}^{n}\right|}}+}$${\displaystyle max\left|T_{i}^{n}-T_{i}^{n-1}\right|\leq {10}^{-4}}$
(13)
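The convergence test of Eq. (13) compares successive time levels of the three vector-potential components and the temperature. A minimal sketch follows (the function name and array layout are assumptions; a small guard avoids division by zero when a component is identically zero):

```python
import numpy as np

def converged(psi_new, psi_old, T_new, T_old, tol=1.0e-4):
    """Convergence test of Eq. (13): sum of the relative changes of the three
    vector-potential components plus the maximum temperature change."""
    crit = 0.0
    for pn, po in zip(psi_new, psi_old):
        den = np.max(np.abs(pn))
        if den > 0.0:                     # guard against an all-zero field
            crit += np.max(np.abs(pn - po)) / den
    crit += np.max(np.abs(T_new - T_old))
    return crit <= tol
```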
In order to check the accuracy of the code, a verification is performed by using the present numerical algorithm to simulate the problem considered by Basak et al. [31], with the same geometry and boundary conditions, for laminar natural convection flow in a two-dimensional trapezoidal porous enclosure. The comparison of both streamlines and isotherms is shown in Fig. 2 for the dimensionless parameters Pr = 0.7, Da = 10−3, Ra = 105 and ${\textstyle \Phi =0{\mbox{°}}}$, and a good agreement is obtained. This verification therefore lends confidence in the present numerical model for predicting the three-dimensional natural convection and entropy generation in an inclined trapezoidal cavity.
Figure 2. Comparison of streamlines and isotherms at Pr = 0.7, Da = 10−3, Ra = 105 and ${\textstyle \Phi =0{\mbox{°}}}$.
## 3. Results and discussion
The unsteady laminar three-dimensional natural convection and entropy generation in an inclined cubical trapezoidal air-filled cavity are investigated numerically in this work. The effects of the Rayleigh number (Ra) and the cavity inclination angle (${\textstyle \Phi }$) on the fluid flow, heat transfer and entropy generation are examined. The results are presented via streamlines, trajectories of particles, isotherms, entropy contours, and local and average Nusselt numbers.
### 3.1. Effect of Rayleigh number on the thermal field
Fig. 3 illustrates isosurfaces of temperature over the whole geometry (left) and isotherms along the mid-section [Z = 0.5] (right) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105]. When the Rayleigh number is low [Ra = 103], i.e., when the effect of natural convection is slight, the isotherms are in general smooth, straight lines parallel to the cavity sidewalls. This behavior is due to the weakness of the flow at low Rayleigh number. It can be seen from Fig. 3 that isotherms emanate from the lower wall (or base), where the heat source exists, and end on the cold right and left sidewalls, indicating the heat flow path. Heat conduction is the dominant mechanism of heat transfer inside the cubical cavity in this case. But as the Rayleigh number increases to [Ra = 104 and Ra = 105], the buoyancy force dominates over the viscous force, strengthening the natural convection effect. Therefore, the shape of the isotherms begins to deviate sharply from the uniform one encountered at [Ra = 103], owing to the strong circulation that occurs at high Rayleigh number. The concentration of isotherms adjacent to the lower wall increases as the Rayleigh number increases, illustrating the large amount of heat and the large temperature gradient adjacent to the lower wall of the cubical trapezoidal cavity. A thermal boundary layer therefore forms in this region and can be observed especially at [Ra = 105]. Heat convection is the dominant mechanism of heat transfer in this case. It can thus be concluded that there is a clear transition in the isotherm pattern from a smooth, uniform shape to a highly distorted one as the Rayleigh number increases. This behavior clearly demonstrates the natural convection effect on the thermal field inside the cubical trapezoidal cavity.
Figure 3. The isosurfaces of temperature over the whole geometry (left) and isotherms along the mid-section (right) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number (a) Ra = 103, (b) Ra = 104 and (c) Ra = 105.
### 3.2. Effect of Rayleigh number on the flow field
Fig. 4 presents the trajectory of particles over the whole geometry (left) and streamlines along the mid-section [Z = 0.5] (right) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105]. Driven by natural convection, the flow rises from the hot lower wall until it reaches the insulated upper wall; it then changes direction and moves toward the cold right and left sidewalls, before passing again adjacent to the hot lower wall. This cyclic motion of the flow field generates the re-circulating vortices which occupy the whole span of the cubical trapezoidal cavity, as seen in Fig. 4. It can be noticed that the air stream which moves upward adjacent to the hot lower wall separates into two re-circulating vortices, as indicated in the 3D results. This flow behavior is seen for all the considered values of Rayleigh number, but the shape of the vortices depends on the value of the Rayleigh number. At low Rayleigh number [Ra = 103], the effect of the buoyancy force [generated by the temperature difference] is slight and the flow circulation is uniform; the convection effect is weak. When the Rayleigh number increases to [Ra = 104 and Ra = 105], i.e., when the effect of the buoyancy force is significant, the frictional resistance to the fluid motion diminishes gradually and the flow circulation becomes less uniform. This observation is in good agreement with the results of Fusegi et al. [32]. The strictly uniform pattern of re-circulating vortices encountered at [Ra = 103] is broken by the strong effect of natural convection inside the cubical trapezoidal cavity, while a weak fluid motion is noticed at the edges of the cubical cavity over the whole considered range of Rayleigh number.
Moreover, it can also be seen that the difference between the two-dimensional and three-dimensional results becomes clearer as the Rayleigh number increases from [Ra = 103] to [Ra = 105].
Figure 4. The trajectory of particles over the whole geometry (left) and streamlines along the mid-section (right) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number (a) Ra = 103, (b) Ra = 104 and (c) Ra = 105.
### 3.3. Effect of Rayleigh number on local and average Nusselt numbers
Fig. 5 demonstrates the profiles of the local Nusselt number (NuLoc) at the hot lower wall of the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105]. These profiles reveal that the local Nusselt number is linear at low Rayleigh number [Ra = 103], indicating that heat is transferred inside the cubical trapezoidal cavity by pure conduction. As the Rayleigh number increases to [Ra = 104 and Ra = 105], the local Nusselt number profiles start to change their shape, indicating the onset of natural convection. At [Ra = 104], however, the local Nusselt profiles are still in transition from the conduction mode to a fully convective one. Moreover, it is interesting to observe high values of the local Nusselt number adjacent to the cavity hot lower wall due to the heat source located there. Fig. 6 illustrates the effect of the Rayleigh number on the average Nusselt number (Nuav) in the cubical trapezoidal cavity at [${\textstyle \Phi =0{\mbox{°}}}$]. The average Nusselt number is a measure of the heat transfer rate inside the cubical trapezoidal cavity. As expected, it increases as the Rayleigh number increases, because natural convection and flow circulation are enhanced at higher Rayleigh number. Therefore, the highest average Nusselt number corresponds to the highest Rayleigh number and vice versa. The reason for this behavior is the increase in the temperature gradient with the Rayleigh number.
Figure 5. The variation in the local Nusselt numbers at the hot lower wall in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105].
Figure 6. The variation in the average Nusselt numbers in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number.
### 3.4. Effect of Rayleigh number on entropy generation
Fig. 7 displays iso-surfaces (left) and isolines (right) of the entropy generation due to heat transfer (Sth) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105]. The entropy generation contours extend deeper inside the cubical trapezoidal cavity as the Rayleigh number increases, because the heat transfer losses increase with the Rayleigh number and thereby increase the entropy generation. Moreover, the entropy generation contours intensify near the lower wall, where the heat source exists, while they are weak adjacent to the left and right cavity sidewalls even at [Ra = 105]. Furthermore, Fig. 7 shows a nearly empty region in the cavity core, owing to the stagnant or low fluid velocity there. Fig. 8 demonstrates iso-surfaces (left) and isolines (right) of the entropy generation due to friction (Sfr) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105]. The entropy generation contours due to fluid friction are concentrated strongly adjacent to the cavity right and left sidewalls, because of the boundary layers there, which increase the entropy generation due to friction. This observation is in good agreement with the results of Mukhopadhyay [33]. It is interesting to see that the total entropy generation (Stot) and the entropy generation due to heat transfer (Sth) have approximately similar patterns at [Ra = 103 and Ra = 104], indicating the domination of the irreversibility due to heat transfer over this range of Rayleigh number.
But when the Rayleigh number is high [Ra = 105], the total entropy generation (Stot) has a pattern similar to that of the entropy generation due to fluid friction (Sfr), indicating the domination of the irreversibility due to fluid friction at high Rayleigh number. This behavior is highlighted in more detail later. Fig. 9 shows iso-surfaces (left) and isolines (right) of the total entropy generation (Stot) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105]. When the Rayleigh number is low [Ra = 103], the entropy contours are concentrated near the lower wall, similarly to the corresponding contours in Fig. 7. Thus, the entropy generation due to heat transfer (Sth) is high and dominant, while the entropy generation due to fluid friction (Sfr) is weak. The contribution of the entropy generation due to heat transfer (Sth) is therefore greater than that due to fluid friction (Sfr), which makes the contours of the total entropy generation (Stot) similar to those of the entropy generation due to heat transfer (Sth). At [Ra = 104], the total entropy generation contours (Stot) are somewhat similar to the corresponding contours seen in Figure 7 and Figure 8; in this case the contributions of the entropy generation due to heat transfer and to fluid friction are comparable. But when the Rayleigh number is high [Ra = 105], the entropy contours are concentrated adjacent to the cavity right and left sidewalls, similarly to the corresponding contours in Fig. 8. Thus, the entropy generation due to fluid friction (Sfr) is high and dominant, while the entropy generation due to heat transfer (Sth) is weak.
Therefore, the contribution of the entropy generation due to fluid friction (Sfr) is greater than that due to heat transfer (Sth), which makes the contours of the total entropy generation (Stot) similar to those of the entropy generation due to fluid friction (Sfr). Fig. 10 presents the variation of the different entropy generation contributions [Sth, Sfr and Stot] in the cubical trapezoidal cavity with the Rayleigh number. The entropy generation rate due to heat transfer (Sth) increases only slightly as the Rayleigh number increases. On the other hand, there is a clear increase of the entropy generation due to friction (Sfr) and of the total entropy generation (Stot) as the Rayleigh number increases, because the heat transfer losses increase slightly while the fluid friction losses increase strongly with the Rayleigh number. Therefore, the maximum values of the total entropy generation (Stot) and of the entropy generation due to friction (Sfr) occur at the maximum value of the Rayleigh number [i.e., Ra = 105].
Figure 7. The iso-surfaces (left) and isolines (right) of the entropy generation due to the heat transfer (Sth) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105].
Figure 8. The iso-surfaces (left) and isolines (right) of the entropy generation due to the friction (Sfr) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105].
Figure 9. The iso-surfaces (left) and isolines (right) of the total entropy generation (Stot) in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] for various values of Rayleigh number [(a) Ra = 103, (b) Ra = 104 and (c) Ra = 105] and φ = 10−4.
Figure 10. The variation of different entropy generation in the cubical trapezoidal cavity [${\textstyle \Phi =0{\mbox{°}}}$] with various values of Rayleigh number.
### 3.5. Effect of inclination angle on the thermal field
Fig. 11 shows the isosurfaces of temperature over the whole geometry (upper) and isotherms along the mid-section [Z = 0] (lower) in the cubical trapezoidal cavity at [Ra = 105] and various values of inclination angle [(a) ${\textstyle \Phi =0{\mbox{°}}}$; (b) ${\textstyle \Phi =30{\mbox{°}}}$; (c) ${\textstyle \Phi =60{\mbox{°}}}$; (d) ${\textstyle \Phi =90{\mbox{°}}}$; (e) ${\textstyle \Phi =120{\mbox{°}}}$; (f) ${\textstyle \Phi =150{\mbox{°}}}$ and (g) ${\textstyle \Phi =180{\mbox{°}}}$]. For the horizontal cubical trapezoidal cavity [i.e., ${\textstyle \Phi =0{\mbox{°}}}$], the thermal field is governed by the Rayleigh number effect only, as explained previously for Fig. 3. Since the selected Rayleigh number is high [i.e., Ra = 105], the natural convection effect is strong, making the thermal field, represented by the isotherms and isosurfaces of temperature, non-uniform. When the cavity inclination angle increases to ${\textstyle \Phi =30{\mbox{°}}}$ and ${\textstyle 60{\mbox{°}}}$, the effect of natural convection decreases slightly and the disturbance in the isotherms becomes less than that observed at ${\textstyle \Phi =0{\mbox{°}}}$. Furthermore, the thermal boundary layers adjacent to the hot lower wall become thinner than those observed at ${\textstyle \Phi =0{\mbox{°}}}$. For the vertical cubical trapezoidal cavity [i.e., ${\textstyle \Phi =90{\mbox{°}}}$], the thermal field becomes somewhat uniform, indicating the beginning of the conduction mode of heat transfer inside the cavity. When the cavity inclination angle increases further to ${\textstyle \Phi =120{\mbox{°}}}$, ${\textstyle 150{\mbox{°}}}$ and ${\textstyle 180{\mbox{°}}}$, the pattern of the isotherms changes completely, especially at ${\textstyle \Phi =180{\mbox{°}}}$: the isotherms become uniformly distributed inside the cavity, implying conduction as the dominant heat transfer mode.
Figure 11. The isosurfaces of temperature over the whole geometry (upper) and isotherms along the mid-section (lower) in the cubical trapezoidal cavity at [Ra = 105] and various values of inclination angle [(a) ${\textstyle \Phi =0{\mbox{°}}}$; (b) ${\textstyle \Phi =30{\mbox{°}}}$; (c) ${\textstyle \Phi =60{\mbox{°}}}$; (d) ${\textstyle \Phi =90{\mbox{°}}}$; (e) ${\textstyle \Phi =120{\mbox{°}}}$; (f) ${\textstyle \Phi =150{\mbox{°}}}$ and (g) ${\textstyle \Phi =180{\mbox{°}}}$].
### 3.6. Effect of inclination angle on the flow field
Fig. 12 displays the trajectory of particles over the whole geometry (upper) and streamlines along the mid-section [Z = 0] (lower) in the cubical trapezoidal cavity at [Ra = 105] and various values of inclination angle [(a) ${\textstyle \Phi =0{\mbox{°}}}$; (b) ${\textstyle \Phi =30{\mbox{°}}}$; (c) ${\textstyle \Phi =60{\mbox{°}}}$; (d) ${\textstyle \Phi =90{\mbox{°}}}$; (e) ${\textstyle \Phi =120{\mbox{°}}}$; (f) ${\textstyle \Phi =150{\mbox{°}}}$ and (g) ${\textstyle \Phi =180{\mbox{°}}}$]. The results demonstrate that the cavity inclination angle has a significant effect on the flow pattern. When the inclination angle is ${\textstyle \Phi =0{\mbox{°}}}$ and ${\textstyle 30{\mbox{°}}}$, the flow field inside the cavity consists of two symmetrical re-circulating vortices, and the effect of the buoyancy force is high. When the cavity inclination angle increases to ${\textstyle \Phi =60{\mbox{°}}}$ and ${\textstyle 90{\mbox{°}}}$, the flow field inside the cavity is represented by a single large re-circulating vortex. As the cavity inclination angle increases further to ${\textstyle \Phi =120{\mbox{°}}}$, ${\textstyle 150{\mbox{°}}}$ and ${\textstyle 180{\mbox{°}}}$, the flow field inside the cavity is again characterized by two unsymmetrical re-circulating vortices, and the effect of natural convection decreases gradually as the cavity inclination angle increases.
Figure 12. The trajectory of particles over the whole geometry (upper) and streamlines along the mid-section (lower) in the cubical trapezoidal cavity at [Ra = 105] and various values of inclination angle [(a) ${\textstyle \Phi =0{\mbox{°}}}$; (b) ${\textstyle \Phi =30{\mbox{°}}}$; (c) ${\textstyle \Phi =60{\mbox{°}}}$; (d) ${\textstyle \Phi =90{\mbox{°}}}$; (e) ${\textstyle \Phi =120{\mbox{°}}}$; (f) ${\textstyle \Phi =150{\mbox{°}}}$ and (g) ${\textstyle \Phi =180{\mbox{°}}}$].
### 3.7. Effect of inclination angle on average Nusselt numbers
Fig. 13 illustrates the relationship between the average Nusselt number in the cubical trapezoidal cavity and the Rayleigh number for various values of inclination angle. The average Nusselt number reaches its maximum value at ${\textstyle \Phi =30{\mbox{°}}}$, while its minimum value is reached at ${\textstyle \Phi =180{\mbox{°}}}$. The reason for this behavior is the increase of the distance between the hot and cold cavity walls when the inclination angle increases to ${\textstyle \Phi =180{\mbox{°}}}$; this increase reduces the temperature gradient and lowers the average Nusselt number. Fig. 14 depicts the relationship between the average Nusselt number in the cubical trapezoidal cavity and the inclination angle for various values of Rayleigh number. As expected, the average Nusselt number is an increasing function of the Rayleigh number due to the significant increase in the convection effect. The results also show that when the inclination angle is in the range 0° ⩽ Φ ⩽ 90°, the effect of the Rayleigh number is greater than when the inclination angle is in the range 120° ⩽ Φ ⩽ 180°. The reason for this behavior was explained previously.
Figure 13. The relationship between the average Nusselt numbers in the cubical trapezoidal cavity and Rayleigh number for various values of inclination angle.
Figure 14. The relationship between the average Nusselt numbers in the cubical trapezoidal cavity and inclination angle for various values of Rayleigh number.
### 3.8. Effect of inclination angle on entropy generation
Fig. 15 displays the relationship between the total entropy generation (Stot) in the cubical trapezoidal cavity and the inclination angle for various values of Rayleigh number. The total entropy generation increases strongly at high Rayleigh number [i.e., Ra = 105] and [${\textstyle \Phi =0{\mbox{°}}}$], owing to the enhancement of the flow circulation at high Rayleigh number. As the cavity inclination angle increases, the profiles of the total entropy generation decrease gradually. The results also indicate that the total entropy generation has an approximately linear profile when the Rayleigh number is low; one can therefore conclude that the effect of the inclination angle on the total entropy generation becomes insignificant at low Rayleigh number. Fig. 16 illustrates the relationship between the total entropy generation (Stot) in the cubical trapezoidal cavity and the Rayleigh number for various values of inclination angle. Similarly to the results in Fig. 15, the total entropy generation has a maximum value for the horizontal cubical trapezoidal cavity [i.e., ${\textstyle \Phi =0{\mbox{°}}}$] and a minimum value when the inclination angle equals [${\textstyle \Phi =180{\mbox{°}}}$], for all values of the Rayleigh number. Furthermore, the total entropy generation (Stot) increases as the Rayleigh number increases; as mentioned previously, this increase is high for the horizontal cubical trapezoidal cavity [i.e., ${\textstyle \Phi =0{\mbox{°}}}$] and diminishes as the inclination angle increases.
Figure 15. The relationship between the total entropy generation (Stot) in the cubical trapezoidal cavity and inclination angle for various values of Rayleigh number.
Figure 16. The relationship between the total entropy generation (Stot) in the cubical trapezoidal cavity and Rayleigh number for various values of inclination angle.
## 4. Conclusions
The following conclusions can be detected from the results of the present work:
• When the natural convection is weak [i.e., Ra = 103], the isotherms are in general smooth, nearly vertical and semi-parallel to the right and left cavity sidewalls, and the heat is transferred by conduction.
• When the natural convection is strong [i.e., Ra = 105], the concentrated region of isotherms adjacent to the lower wall becomes more intense and the isothermal lines become more distorted due to the strong circulation.
• The difference between the two-dimensional and three-dimensional flow pattern results becomes clear as the Rayleigh number increases.
• When the Rayleigh number increases, the flow circulation increases and some disturbances in the flow patterns are observed especially in 3D results.
• The flow pattern inside the cubical trapezoidal cavity consists of four spiral re-circulating vortices which cover its whole span.
• The local Nusselt number distribution is almost parallel to the horizontal upper and lower cavity walls.
• When the Rayleigh number increases, the average Nusselt number increases.
• The entropy generation contours due to the heat transfer (Sth) increase strongly when the Rayleigh number increases especially adjacent to the heat source location at the lower wall.
• The entropy generation contours due to the fluid friction (Sfr) are concentrated strongly adjacent to the cavity right and left sidewalls.
• The entropy generation due to the heat transfer (Sth) increases slightly as the Rayleigh number increases, while a strong increase with the Rayleigh number is seen for the entropy generation due to friction (Sfr) and the total entropy generation (Stot).
• Different flow patterns can be seen inside the cubical trapezoidal cavity as the inclination angle increases from ${\textstyle \Phi =0{\mbox{°}}}$ to ${\textstyle \Phi =180{\mbox{°}}}$. This gives an important indication that the flow field inside the cavity is significantly affected by the inclination angle.
• When the cavity inclination angle increases, the isotherms become uniformly distributed and less compressed inside the cavity, and conduction is the dominant heat transfer mode. Also, the thickness of the thermal boundary layer decreases as the cavity inclination angle increases.
• The minimum average Nusselt number inside the cubical trapezoidal cavity corresponds to the highest inclination angle [i.e., ${\textstyle \Phi =180{\mbox{°}}}$], while the average Nusselt number reaches its maximum value at ${\textstyle \Phi =30{\mbox{°}}}$.
• The inclination angle effect on the total entropy generation becomes insignificant when the Rayleigh number is low.
• The total entropy generation has a maximum value for the horizontal cubical trapezoidal cavity [i.e., ${\textstyle \Phi =0{\mbox{°}}}$], while it has a minimum value when the inclination angle equals [${\textstyle \Phi =180{\mbox{°}}}$].
### Document information
Published on 12/04/17
Licence: Other
http://www.cs.iit.edu/~dbgroup/research/oracletprov.php

## IIT Database Group
## Provenance for Updates and Transactions
In this project we explore how provenance computation can benefit from temporal database techniques. The project is funded by and executed in collaboration with the Oracle Corporation. Starting from porting rewrite-based techniques such as the ones used in Perm to the Oracle SQL dialect, we study how to 1) compute the provenance of past queries and 2) compute the provenance of updates and transactions. This requires non-trivial extensions to current provenance techniques because of, e.g., the interaction of transactions under lower isolation levels. Our solution can retroactively trace transaction provenance as long as an audit log and time-travel functionality are available (both are supported by most DBMSs). One of the major outcomes so far is the development of the concept of reenactment queries: queries that reenact the effects of a transaction. Reenactment queries are the main enabler of retroactive provenance computation for transactions.
Within this project we have made the following major contributions to provenance management:
• Development of MV-relations, a provenance model for queries, updates, and transactional histories that extends the seminal semiring annotation model (defined for queries) with support for updates and transactions.
• Development of reenactment, a declarative replay technique with provenance capture that enables tracking the provenance of a past update or transaction retroactively by executing a query.
• Implementation of provenance tracking for transactions over a standard relational database as part of the GProM system.
### MV-relations - A Provenance Model for Transactional Updates
As part of this project we have developed a provenance model that allows tracking the provenance of tuples through queries and transactional updates. In our model, the complete derivation history of a tuple (which update operations derived the tuple and on which inputs of these operations it depends) can be encoded in the annotation of the tuple.
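The semiring annotation model that MV-relations extend can be sketched in a few lines: every input tuple carries an annotation, joins combine annotations multiplicatively, and unions/projections combine them additively. A minimal, hypothetical illustration (not the project's code; relations and annotation variables are invented):

```python
# Minimal sketch of semiring provenance annotations: each input tuple
# carries an abstract annotation; a join multiplies the annotations of
# the tuples it combines (here represented symbolically as a tuple).
from itertools import product

def join(r, s):
    """Natural join on attribute position 0 of annotated relations;
    output annotations are symbolic products of input annotations."""
    out = []
    for (t1, a1), (t2, a2) in product(r, s):
        if t1[0] == t2[0]:
            out.append((t1 + t2[1:], ("*", a1, a2)))
    return out

# Two annotated relations: (tuple, annotation variable)
r = [(("a", 1), "p1"), (("b", 2), "p2")]
s = [(("a", "x"), "q1"), (("c", "y"), "q2")]

result = join(r, s)
# Only the tuples agreeing on the join attribute survive, and the
# result's annotation records which inputs it was derived from.
```
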
### Reenactment - Declarative Replay with Provenance Capture
Reenactment is a declarative replay technique that enables a transactional history (or part thereof) to be repeated by executing a so-called reenactment query. We have proven that reenactment queries produce the same result and have the same provenance as the operation(s) they are replaying. Thus, a reenactment query can be used to retroactively compute the provenance of an operation executed some time in the past as long as the database state seen by this operation can be accessed.
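As a toy illustration of the idea (table and values are invented; GProM additionally relies on time travel and the audit log to reach the pre-update state), the effect of an UPDATE can be reproduced by a query evaluated over the state the update saw:

```python
import sqlite3

# Toy reenactment sketch: an UPDATE's effect is replayed by a SELECT
# that applies the same change conditionally over the pre-update state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])

# Reenactment query, evaluated BEFORE the update runs:
reenacted = conn.execute(
    "SELECT id, CASE WHEN balance < 60 THEN balance + 10 ELSE balance END "
    "AS balance FROM accounts ORDER BY id"
).fetchall()

# The actual update, then the resulting state:
conn.execute("UPDATE accounts SET balance = balance + 10 WHERE balance < 60")
after = conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()

assert reenacted == after == [(1, 100), (2, 60)]
```

The reenactment query produces the same result as the update it replays, which is the property the project proves formally.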
### Implementation in GProM
To retrieve the provenance of a past update (transaction, or history) we construct its reenactment query based on a log of SQL operations executed in the past (e.g., Oracle's audit log facility). Such a reenactment query needs to be executed over the database state seen by the operation(s) to be replayed. We use time travel to access such past database states. The techniques developed in this project have been integrated in the GProM system, a database-independent middleware application for computing provenance.
### Collaborators
• Dieter Gawlick, Architect in Special Projects, Oracle
• Vasudha Krishnaswamy, Oracle
• Zhen Hua Liu, Oracle
### Funding
• Title: Provenance using temporal databases (extension)
• Amount: $95,829
• Funding Agency: Oracle Corporation
• Project Page: webpage
• Recipient: Dr. Glavic
• Principal Contact at Oracle: Dieter Gawlick, Architect in Special Projects

• Title: Provenance using temporal databases
• Amount: $85,000
• Funding Agency: Oracle Corporation
• Project Page: webpage
• Recipient: Dr. Glavic
• Principal Contact at Oracle: Dieter Gawlick, Architect in Special Projects
### Publications
2018

[8] Using Reenactment to Retroactively Capture Provenance for Transactions. Bahareh Arab, Dieter Gawlick, Vasudha Krishnaswamy, Venkatesh Radhakrishnan, Boris Glavic. In IEEE Transactions on Knowledge and Data Engineering (TKDE), volume 30, 2018.

2017

[7] Debugging Transactions and Tracking their Provenance with Reenactment. Xing Niu, Boris Glavic, Seokki Lee, Bahareh Arab, Dieter Gawlick, Zhen Hua Liu, Vasudha Krishnaswamy, Su Feng, Xun Zou. In Proceedings of the VLDB Endowment (Demonstration Track) (PVLDB), volume 10, 2017.

[6] Answering Historical What-if Queries with Provenance, Reenactment, and Symbolic Execution. In Proceedings of the 8th USENIX Workshop on the Theory and Practice of Provenance (TaPP), 2017.

2016

[5] Reenactment for Read-Committed Snapshot Isolation (long version). Bahareh Arab, Dieter Gawlick, Vasudha Krishnaswamy, Venkatesh Radhakrishnan, Boris Glavic. Technical report, Illinois Institute of Technology, 2016.

[4] Reenactment for Read-Committed Snapshot Isolation. Bahareh Arab, Dieter Gawlick, Vasudha Krishnaswamy, Venkatesh Radhakrishnan, Boris Glavic. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM), 2016.

[3] Formal Foundations of Reenactment and Transaction Provenance. Bahareh Arab, Dieter Gawlick, Vasudha Krishnaswamy, Venkatesh Radhakrishnan, Boris Glavic. Technical report, Illinois Institute of Technology, IIT/CS-DB-2016-01, 2016.

2014

[2] Reenacting Transactions to Compute their Provenance. Bahareh Arab, Dieter Gawlick, Vasudha Krishnaswamy, Venkatesh Radhakrishnan, Boris Glavic. Technical report, Illinois Institute of Technology, IIT/CS-DB-2014-02, 2014.

[1] A Generic Provenance Middleware for Database Queries, Updates, and Transactions. Bahareh Arab, Dieter Gawlick, Venkatesh Radhakrishnan, Hao Guo, Boris Glavic. In Proceedings of the 6th USENIX Workshop on the Theory and Practice of Provenance (TaPP), 2014.
https://www.techwhiff.com/learn/which-one-of-the-following-statements-concerning/92780
# Which one of the following statements concerning convertible bonds is false?
###### Question:
Which one of the following statements concerning convertible bonds is false? Multiple Choice:

- A convertible bond is similar to a bond with a call option.
- A convertible bond should always be worth less than a comparable straight bond.
- New shares of stock are issued when a convertible bond is converted.
- A convertible bond can be redeemed just like a straight bond at maturity.
- A convertible bond can be described as having upside potential with downside protection.
https://answers.ros.org/answers/247169/revisions/

# Revision history
"undefined reference" is a linker error, so it means that the linker does not find your *.so file, which is not surprising, since it does not search in the "include" directory for it (that one is only for headers!). Tell CMake where the library lies:
link_directories(/path/where/your/so/lies)
and, so that the dynamic loader also finds it at run time:

export LD_LIBRARY_PATH=/path/where/your/so/lies
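For completeness, a minimal sketch of how this looks in a CMakeLists.txt (the target and library names here are placeholders, not taken from the original question):

```cmake
# Hypothetical names — substitute your own target and library.
link_directories(/path/where/your/so/lies)   # linker search path for libfoo.so
add_executable(my_node src/my_node.cpp)
target_link_libraries(my_node foo)           # resolves the undefined references
```

Linking the target against the library is what actually pulls in the symbols; link_directories only tells the linker where to look.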
https://eng.libretexts.org/Bookshelves/Computer_Science/Programming_and_Computation_Fundamentals/Book%3A_An_Introduction_to_Ontology_Engineering_(Keet)/10%3A_Ontologies_and_Natural_Languages/10.01%3A_Toward_Multilingual_Ontologies

# 10.1: Toward Multilingual Ontologies
Let us first have a look at just one natural language and an ontology ("Linking a Lexicon to an Ontology" below) before complicating matters with multiple languages in "Multiple Natural Languages".
## Linking a Lexicon to an Ontology
Most, if not all, ontologies you will have inspected and all examples given in the preceding chapters simply gave a human-readable name to the DL concept or OWL class. Perhaps you have loaded an ontology in the ODE where the class hierarchy showed numbers, like GO:00012345, and you had to check the class's annotation to see what was actually meant with that cryptic identifier. This is an example of a practical difference between OBO and OWL (recall Section 7.1: Relational Databases and Related 'legacy' KR), which is, however, based on different underlying modelling principles. DLs assume that a concept is identified by the name given to it; that is, there is a 1:1 correspondence between what a concept, say, being a vegetarian, means and the name we give to it. Natural language and the knowledge, reality etc. are thus tightly connected and, perhaps, even conflated. Not everybody agrees with that underlying assumption. An alternative viewpoint is to assume that there are language-independent entities, i.e., they exist regardless of whether humans name them or not, that somehow have to be identified, after which one sticks one or more labels or names to them. Put differently: knowledge is one thing and natural language another, and they should be kept as distinct kinds of things.
My impression is that the second view prevails, at least within the ontology engineering arena. To date, two engineering solutions have been proposed for how to handle this in an ontology. The first one is the OBO solution, which has been used since its inception in 1998 by the Gene Ontology Consortium [Gen00]: each language-independent entity gets an identifier and must have at least one human-readable label. This solution also clearly allows for easy recording of synonyms, variants, and abbreviations, which were commonplace in genetics especially at the inception of the GO.
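The ID-plus-labels idea can be sketched with a plain data structure (the identifier and names below are invented for illustration, not real GO content):

```python
# Sketch of the OBO-style separation of identifier and names: the
# language-independent entity is the ID; labels and synonyms hang off it.
lexicon = {
    "GO:0012345": {                      # hypothetical identifier
        "label": "example process",
        "synonyms": ["sample process", "EP"],
    },
}

def names_of(term_id):
    """All human-readable names recorded for an identifier."""
    entry = lexicon[term_id]
    return [entry["label"]] + entry["synonyms"]

assert names_of("GO:0012345") == ["example process", "sample process", "EP"]
```

Any number of names can be attached or changed without touching the identifier that the ontology's axioms use.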
The second option emerged within the Semantic Web field [BCHM09]. It also acknowledges the distinction between the knowledge layer and the language layer, yet it places the latter explicitly on top of the knowledge layer. This has as effect that the solution does not 'overload' OWL's annotation fields, but proposes to store all the language and linguistic information in a separate file that interacts with the OWL file. How that separate file should look, what information should be stored in it, and how it should interact with the OWL file is open to manifold possible solutions. One such proposal will be described here to illustrate how something like that may work: the Lemon model [MdCB+12, MAdCB+12], of which a fragment has been accepted as a community standard by the W3C. It also aims to cater for multilingual ontologies.
Consider the Lemon model in Figure 9.1.1, which depicts the kind of things one can make annotations of, and how those elements relate to each other. At the bottom-centre of the figure there is the ontology to which the language information is linked. Each vocabulary element of the ontology will have an entry in the Lemon file, with more or less lexical information, among others: in which sense the word is meant, what the surface string is, the POS tag (noun, noun phrase, verb, etc.), gender, case, and related properties (if applicable).
Figure 9.1.1: The Lemon model for multilingual ontologies (Source: [MdCB+12])
A simple entry in the Lemon file could look like this, which lists, in sequence: the location of the lexicon, the location of the ontology, the location of the Lemon specification, the lexical entry (including stating in which language the entry is), and then the link to the class in the OWL ontology:
```turtle
@base <http://www.example.org/lexicon> .
@prefix ontology: <http://www.example.org/AfricanWildlifeOntology1#> .
@prefix lemon: <http://www.monnetproject.eu/lemon#> .

:myLexicon a lemon:Lexicon ;
    lemon:language "en" ;
    lemon:entry :animal .

:animal a lemon:LexicalEntry ;
    lemon:form [ lemon:writtenRep "animal"@en ] ;
    lemon:sense [ lemon:reference ontology:animal ] .
```
One can also specify rules in the Lemon file, such as how to generate the plural from a singular. However, because the approach is principally a declarative specification, it is not as well equipped for handling rules as the well-established grammar systems for NLP are. Also, while Lemon covers a fairly wide range of language features, it may not cover all that is needed; e.g., the noun class system emblematic of the indigenous languages spoken in a large part of sub-Saharan Africa does not quite fit [CK14]. Nonetheless, Lemon, and other proposals with a similar idea of separation of concerns, are a distinct step forward for ontology engineering where interaction with languages is a requirement. Such a separation of concerns is even more important when the scope is broadened to a multilingual setting, which is the topic of the next section.
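To make the rule-handling point above concrete, here is the kind of pluralisation rule a lexicon might need, written as a toy procedure (Lemon's actual morphological patterns are declarative, not code, and real English morphology has many more exceptions):

```python
# Toy English pluralisation rule of the kind a lexicalised ontology
# would otherwise have to store entry by entry.
def plural(noun: str) -> str:
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"
    if noun.endswith("y") and noun[-2] not in "aeiou":
        return noun[:-1] + "ies"
    return noun + "s"

assert plural("animal") == "animals"
assert plural("fox") == "foxes"
assert plural("fly") == "flies"
```

A grammar engine expresses such regularities once; a purely declarative lexicon must either enumerate the forms or bolt on a rule language.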
## Multiple Natural Languages
Although this textbook is written in one language, English, for it is currently the dominant language in science, the vast majority of people in the world speak another language, and they both have information systems in their own language and may develop an ontology in their own language, or else localise an ontology into their own language. One could just develop the ontology in one's own language in the same way as the examples were given in English in the previous chapters and be done with it. But what if, say, SNOMED CT [SNO12] has to be translated into one's own language for electronic health records, as with OpenMRS [Ope], or the ontology has to import an existing ontology that happens not to be represented in the target language and compatibility with the original ontology has to be maintained? What if some named class is not translatable into one single term? For instance, French has two words for the English 'river': one for a river that ends in the sea and another for a river that does not (fleuve and rivière, respectively), and isiZulu has two words and corresponding meanings for the participation relation: one as we have seen in Section 6.2: Part-Whole Relations and another for participation of collectives in a process (-hlanganyela). The following example illustrates some actual (unsuccessful) 'struggling' in trying to handle this when there is not even a name for the entity in the other language (example from [AFK12]); a more extensive list of the types of issues can be found in [LAF14].
Example $$\PageIndex{1}$$:
South Africa has a project on indigenous knowledge management systems, but the example can equally well be generalised to cultural historic museum curation in any country (AI for cultural heritage). Take ingcula, a 'small bladed hunting spear' (isiZulu), which has no equivalent term in English. Trying to represent it in the 'English understanding', i.e., adding it not as a single class but as a set of axioms, one could introduce a class Spear that has two properties, e.g., $$\texttt{Spear}\sqsubseteq\exists\texttt{hasShape.Bladed}\sqcap\exists\texttt{participatesIn.Hunting}$$. To represent the 'small', one could resort to fuzzy concepts; e.g., following [BS11]'s fuzzy OWL notation,
$$\texttt{MesoscopicSmall : Natural}\to \texttt{[0, 1]}$$ is a fuzzy datatype,
$$\texttt{MesoscopicSmall(x)} = \texttt{trz(x, 1, 5, 13, 20)}$$, with trz the trapezoidal function,
so that a small spear can be defined as
$$\texttt{SmallSpear}\equiv\texttt{Spear}\sqcap\exists\texttt{size.MesoscopicSmall}$$
Then one can create a class in English and declare something like
$$\texttt{SmallBladedHuntingSpear}\equiv\texttt{SmallSpear}\sqcap\exists\texttt{hasShape.Bladed}\sqcap\exists\texttt{participatesIn.Hunting}$$
This is just one of the possibilities of a formalised transliteration of an English natural language description, not a definition of ingcula as it may appear in an ontology about indigenous knowledge of hunting.
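The trapezoidal membership function used for MesoscopicSmall above can be computed as follows (a standard fuzzy-set helper; the parameter values 1, 5, 13, 20 are the ones from the example):

```python
# Trapezoidal membership function trz(x; a, b, c, d): rises linearly
# from a to b, is 1 between b and c, and falls linearly from c to d.
def trz(x: float, a: float, b: float, c: float, d: float) -> float:
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def mesoscopic_small(x: float) -> float:
    return trz(x, 1, 5, 13, 20)

assert mesoscopic_small(9) == 1.0    # squarely "small"
assert mesoscopic_small(3) == 0.5    # halfway up the left slope
assert mesoscopic_small(25) == 0.0   # not small at all
```

A fuzzy DL reasoner evaluates the datatype restriction against exactly such a membership curve rather than a crisp threshold.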
Let's assume for now that the developer does want to go in this direction; it then requires more advanced capabilities than even lexicalised ontologies offer to keep the two ontologies in sync: lexicalised ontologies only link dictionaries and grammars to the ontologies, but here one would now need to map sets of axioms between ontologies.
That is, what was intended as a translation exercise ended up as a different ontology file at least. It gets even more interesting in multilingual organisations and societies, like the European Union with over 20 languages and, e.g., South Africa with 11 official languages, for then some way of managing all those versions is required.

Several approaches have been proposed for the multilingual setting, both for localisation and internationalisation of the ontology with links to the original ontology and multiple languages at the same time in the same system. The simplest approach is called semantic tagging. This means that the ontology is developed 'in English', i.e., naming the vocabulary elements in one language, and for other languages labels are added, such as Fakultät and Fakulteit for the US-English School. This may be politically undesirable, and anyhow it does not solve the issue of non-1:1 mappings of vocabulary elements. It might be a quick 'smart' solution if you are lucky (i.e., there happen to be only 1:1 mappings for the vocabulary elements in your ontology), but a solid reusable solution it certainly is not. OBO's approach of IDs and labels avoids the language politics: one ID with multiple labels, one for each language, so that it at least treats all the natural languages as equals.
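Semantic tagging boils down to per-language labels on a primary-named class; a minimal sketch with a plain data structure (class name and labels follow the School example, the fallback behaviour is the sketch's own choice):

```python
# Sketch of semantic tagging: one primary-named class, with labels per
# language attached as annotations.
labels = {
    "School": {"en-US": "School", "de": "Fakultät", "af": "Fakulteit"},
}

def label(cls: str, lang: str) -> str:
    """Fall back to the class's primary name when no label exists for
    the requested language — which exposes the approach's weakness:
    a missing or non-1:1 translation silently becomes the primary name."""
    return labels.get(cls, {}).get(lang, cls)

assert label("School", "de") == "Fakultät"
assert label("School", "zu") == "School"   # no isiZulu label recorded
```
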
However, both falter as soon as there is no neat 1:1 translation of a term into another single term in a different language, which is quite often the case except for very similar languages. Within the scientific realm this is much less of an issue; there, handling synonyms may be more relevant.
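The ID-plus-labels approach amounts to a language-indexed lookup table. A minimal Python sketch (the identifier and the label set are illustrative, reusing the Fakultät/Fakulteit example from above):

```python
# OBO-style approach: one language-neutral identifier, one label per language.
# The ID "EX:0000123" and its labels are hypothetical illustrations.
labels = {
    "EX:0000123": {"en": "School", "de": "Fakultät", "af": "Fakulteit"},
}

def label_for(term_id: str, lang: str) -> str:
    """Return the human-readable label of a term in the requested language."""
    return labels[term_id][lang]

print(label_for("EX:0000123", "de"))  # Fakultät
```

The scheme breaks down exactly where the text says it does: as soon as a term in one language does not correspond to a single term (one slot in the inner dictionary) in another.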
One step forward is a mildly “lexicalised ontology” [BCHM09], of which an example is depicted in Figure 9.1.2. Although it still conflates the entity and its name and promotes one language as the primary, at least the handling of other languages is much more extensive and, at least in theory, will be able to cope with multilingual ontologies to a greater extent. This is thanks to its relatively comprehensive information about the lexical aspects in its own linguistic ontology, with the WordForm etc., which is positioned orthogonally to the domain ontology. In Figure 9.1.2, the English OralMucosa has its equivalent in German as Mundschleimhaut, which is composed here of two sub-words that are nouns themselves, Mund ‘mouth’ and Schleimhaut ‘mucosa’. It is this idea that has been made more precise and comprehensive in its successor, the Lemon model, that is tailored to the Semantic Web setting [MdCB+12]. Indeed, the same Lemon from the previous section. The Lemon entries can become quite large for multiple languages and, as it uses RDF for the serialisation, it is not easily readable. An example for the class Cat in English, French, and German is shown diagrammatically in Figure 9.1.3, and two annotated short entries of the Friend Of A Friend (FOAF)6 structured vocabulary in Chichewa (a language spoken in Malawi) are shown in Figure 9.2.1.
Figure 9.1.2: Ontologies in practice: Semantic Tagging—Lexicalized Ontologies. (Source: www.deri.ie/fileadmin/documen...lNLP.final.pdf)
Figure 9.1.3: The Lemon model for multilingual ontologies to represent the class Cat (Source: [MdCB+12])
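Stripped of its RDF serialisation, the core idea of Figure 9.1.3 is that each language contributes its own lexical entry while all senses point at one language-neutral class. A minimal Python sketch of that structure (the field names are simplifications for illustration, not the actual Lemon property IRIs):

```python
# Each lexical entry carries a language-specific written form; all senses
# reference the same language-neutral ontology class.
ONT_CAT = "ex:Cat"  # hypothetical IRI of the shared ontology class

entries = [
    {"lang": "en", "written_rep": "cat",   "sense_ref": ONT_CAT},
    {"lang": "fr", "written_rep": "chat",  "sense_ref": ONT_CAT},
    {"lang": "de", "written_rep": "Katze", "sense_ref": ONT_CAT},
]

# The lexicon varies per language; the ontology does not.
assert len({e["sense_ref"] for e in entries}) == 1
print(sorted(e["written_rep"] for e in entries))  # ['Katze', 'cat', 'chat']
```

Adding a new language then only touches the lexicon side; the ontology class itself stays untouched.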
There are only a few tools that can cope with ontologies and multiple languages. A web-based tool for creating Lemon files is under development at the time of writing. It would be better if at least some version of language management were integrated in ODEs. At present, to the best of my knowledge, only MoKI provides such a service partially for a few languages, inclusive of a localised interface [BDFG14].
As a final note: those non-1:1 mappings of the form of having one class in ontology $$O_{1}$$ and one or more axioms in $$O_{2}$$, sets of axioms in both, like in Example 9.1.1 with the hunting spear, as well as non-1:1 property alignments, are feasible by now with the (mostly) theoretical results presented in [FK17, Kee17b], so this ingcula example could be solved in theory at least. Its details are not pursued here, because it intersects with the topic of ontology alignment. Further, one may counter that an alternative might be to SKOSify it, for it would avoid the complex mapping between a named class to a set of axioms. However, then the differences would be hidden in the label of the concepts rather than solving the modeling problem.
## Footnotes
2E.g., it has been shown to enhance precision and recall of queries (including enhancing dialogue systems [VF09]), to sort results of an information retrieval query to the digital library [DAA+08], (biomedical) text mining, and annotating textbooks for ease of navigation and automated question generation [CCO+13] as an example of adaptive e-learning.
3www.w3.org/community/ontolex..._Specification
4Plain OWL cannot deal with this, though, for it deals with crisp knowledge only. Refer to Section 10.1: Uncertainty and Vagueness "Fuzzy Ontologies" for some notes on fuzzy ontologies
5whether it is also a different conceptualisation is a separate discussion. | 2021-12-05 19:55:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4165630638599396, "perplexity": 1634.1126698743724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00358.warc.gz"} |
https://www-users.cse.umn.edu/~mahrud/drp/expectations/ | ## Expectations for Participants
### For Mentors
• What are the things I am expected to do as a DRP mentor? There are essentially only three things that you are required to do as a DRP mentor; anything beyond that (while encouraged) is your prerogative.
• The first thing you will need to do, is help facilitate the choosing of a topic by your assigned mentee. They will most likely have a rough idea (some more specific than others) of what they want to do, but it is your job to solidify this idea and to curb any unreasonable goals (e.g. someone who has taken only Math 2243 wanting to learn p-adic Hodge Theory).
• Once this is done, you will need to oversee the learning of this topic with your mentee. Generically, this means guiding your student through the book and/or papers that you have decided to read. This will entail helping lay out a reasonable timetable for the students to follow. It will also involve meeting with your student at least once a week, to answer any questions they may have. That said, this is a free-form process that is ultimately left up to the mentor/mentee pair. A pair may decide that the mentee should present material at the weekly meeting, or they may just treat the time as a general Q&A session. These more fine-point details are best determined by each pair.
• Lastly, you will need to assist the student in preparation with their end-of-term presentation. This could entail several things, depending upon the particular mentee. Invariably, you will help them organize the general layout of their presentation: what to talk about, appropriate assumptions of background for the intended audience, etc. But, depending on the previous experience your mentee has with giving math presentations, you may have them give practice talks to you (this is highly recommended), or help them with $\LaTeX$ if they decide to do a beamer presentation.
• What is the expected time commitment? Generally, a mentor/mentee pair will meet for at least one hour per week. Any longer than that is entirely a decision made by an individual mentor/mentee pair. As discussed in the previous question, a mentor is expected to help their mentee prepare for their end-of-term presentation. Usually, this involves no more than three hours of work for the mentor, spent between brainstorming ideas with their mentee, watching a practice presentation or two, etc.
### For Mentees
• What are the things that I am expected to do as an undergraduate participant? While the DRP is a program intentionally designed to provide a personalized experience for every undergraduate, we have some general expectations for students. Since we have a limited number of spots, we take these expectations seriously.
• Mentors and undergraduates have weekly meetings discussing their progress, with graduate students providing feedback and advice on the material covered. The specifics of these meetings are left up to the individual preferences of a mentor/mentee pair. Each pair should meet for at least one hour each week, but mentees and mentors are welcome to meet for more than an hour per week if desired.
• Students should expect to commit at least 2 hours a week working on their DRP individually. This time can be spent reading, solving problems, or preparing for weekly meetings. Each mentor/mentee pair will decide on specific expectations for weekly preparation and study at the beginning of the semester.
• Giving a mathematical presentation is a vital skill in academic mathematics, and the act of presenting also solidifies the presenter’s knowledge of the math presented. Consequently, a key responsibility of DRP participants is to prepare, practice, and deliver a 10-12 minute presentation at the end of the term in front of the other mentor/mentee pairs. The presentation should cover some main point or concept related to the mentee’s DRP project. Mentors will help their respective mentees plan and practice this presentation near the end of the semester.
Note that all of the above expectations are only the minimum expected of all participants. Anything more than this is determined on an individual basis by a mentee and their mentor. For instance, we anticipate that there will be a number of very enthusiastic students that wish to do individual study for more than two hours per week. | 2023-03-22 18:49:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2963426113128662, "perplexity": 1082.0184620110028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00689.warc.gz"} |
https://www.shaalaa.com/question-bank-solutions/a-capacitor-made-flat-plate-area-second-plate-having-stair-like-structure-shown-figure-width-each-stair-height-b-capacitors-capacitance_68662 | Department of Pre-University Education, Karnataka course PUC Karnataka Science Class 12
# A Capacitor is Made of a Flat Plate of Area a and a Second Plate Having a Stair-like Structure as Shown in Figure . the Width of Each Stair is a and the Height is B. - Physics
Concept: Capacitors and Capacitance
#### Question
A capacitor is made of a flat plate of area A and a second plate having a stair-like structure as shown in figure . The width of each stair is a and the height is b. Find the capacitance of the assembly.
#### Solution
The total area of the flat plate is A. The width of each stair is the same. Therefore, the area of the surface of each stair facing the flat plate is the same, that is, A/3 .
From the figure, it can be observed that the capacitor assembly is made up of three capacitors. The three capacitors are connected in parallel.
For capacitor C1, the area of the plates is A/3 and the separation between the plates is d.
For capacitor C2, the area of the plates is A/3 and the separation between the plates is (d + b).
For capacitor C3, the area of the plates is A/3 and the separation between the plates is (d + 2b).
Therefore,

$$C_1 = \frac{\epsilon_0 A}{3d}, \qquad C_2 = \frac{\epsilon_0 A}{3(d+b)}, \qquad C_3 = \frac{\epsilon_0 A}{3(d+2b)}$$

As the three capacitors are in parallel combination,

$$C = C_1 + C_2 + C_3 = \frac{\epsilon_0 A}{3d} + \frac{\epsilon_0 A}{3(d+b)} + \frac{\epsilon_0 A}{3(d+2b)} = \frac{\epsilon_0 A}{3}\cdot\frac{3d^2 + 6bd + 2b^2}{d(d+b)(d+2b)}$$
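The closed-form result can be sanity-checked numerically against the plain parallel sum; a short Python sketch (the plate area and spacings below are arbitrary sample values, not given in the problem):

```python
# Compare the parallel sum C1 + C2 + C3 with the closed-form expression.
eps0 = 8.854e-12            # vacuum permittivity, F/m
A, d, b = 0.03, 1e-3, 5e-4  # sample plate area (m^2) and spacings (m)

C_parallel = (eps0 * A / (3 * d)
              + eps0 * A / (3 * (d + b))
              + eps0 * A / (3 * (d + 2 * b)))
C_closed = (eps0 * A / 3) * (3 * d**2 + 6 * b * d + 2 * b**2) \
           / (d * (d + b) * (d + 2 * b))

print(abs(C_parallel - C_closed) / C_closed < 1e-12)  # True
```

The two expressions agree to floating-point precision, as they must, since the closed form is just the parallel sum over a common denominator.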
S | 2020-04-01 20:44:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6119574904441833, "perplexity": 813.8015571119784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506121.24/warc/CC-MAIN-20200401192839-20200401222839-00559.warc.gz"} |
https://www.enotes.com/homework-help/single-variable-calculus-chapter-3-3-8-section-3-8-1314556?en_action=hh-question_click&en_label=hh-sidebar&en_category=internal_campaign | # Single Variable Calculus, Chapter 3, 3.8, Section 3.8, Problem 44
## Expert Answer
At what rate is the distance between the tips of the hands changing at one o'clock?

Given that the hour hand of a watch is 4 mm long and the minute hand is 8 mm long, determine how fast the distance between the tips of the hands is changing at one o'clock.
For our strategy, we will use the law of cosines to relate the length of the sides and the distance between the tips of the hands. Also, both of the hands are moving at constant rates so the angle between them is also changing at a constant rate.
Law of cosines: $a^2 = b^2 + c^2 - 2(b)(c) \cos \theta \qquad \Longleftarrow \text{ Equation 1}$
\begin{aligned} \text{we let } a & \text{ be the distance between the tips of the hands}\\ b & \text{ be the length of the hour hand}\\ c & \text{ be the length of the minute hand}\\ \theta & \text{ be the angle between the hands} \end{aligned}
At one o'clock, the angle between the hands is $30^\circ$, since $\displaystyle \frac{360^\circ}{12} = 30^\circ$.
\begin{aligned} a^2 &= 4^2 + 8^2 - 2(4)(8) \cos (30)\\ a &= 4.9573 \text{ mm} && \text{, the distance between the tips precisely at one o' clock} \end{aligned}
Notice that in Equation 1, the only variables that change are $a$ and $\theta$,
Now we differentiate Equation 1 with respect to time...
$\displaystyle 2a \frac{da}{dt} = -2 bc(-\sin \theta) \frac{d \theta}{dt}$
Solving for $\displaystyle \frac{da}{dt}$
$\displaystyle \frac{da}{dt} = \frac{bc \sin \theta}{a} \left( \frac{d \theta}{dt} \right) \qquad \Longleftarrow \text{ Equation 2}$
To solve for $\displaystyle \frac{d \theta}{dt}$, note that the angle is changing with respect to time: the hour hand makes one revolution every 12 hours in the clockwise direction. Hence, the hour hand angle has a rate of change of $2 \pi$ radians every 12 hours, which is $\displaystyle \frac{2\pi}{12} = \frac{\pi}{6} \text{ rph}$ (radians per hour). Also, the minute hand has an angle that changes by $2 \pi$ radians every hour, i.e., $2 \pi$ rph. Thus, the rate of change of the angle between the two hands is $\displaystyle \frac{\pi}{6} - 2 \pi = \frac{-11\pi}{6} \text{ rph}$; the value is negative because the minute hand gains on the hour hand, so the angle between them is closing.
So $\displaystyle \frac{d \theta}{dt} = \frac{-11 \pi}{6}$, using this to solve for the rate of change of the distance between the tips of the hands in Equation 2.
\begin{aligned} \frac{da}{dt} &= \frac{bc \sin \theta}{a} \left( \frac{d\theta}{dt} \right)\quad ; \theta = 30^\circ\\ \\ \frac{da}{dt} &= \frac{4(8) \sin (30)}{4.9573} \left( \frac{-11 \pi}{6} \right) \end{aligned}
$\boxed{\displaystyle \frac{da}{dt} = -18.59 \frac{\text{mm}}{h}}$ | 2020-10-22 03:53:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 6, "x-ck12": 0, "texerror": 0, "math_score": 0.9980928897857666, "perplexity": 586.1239435962214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00187.warc.gz"} |
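The numbers above can be reproduced with a few lines of Python, using the values given in the problem:

```python
import math

b, c = 4.0, 8.0           # hour-hand and minute-hand lengths, mm
theta = math.radians(30)  # angle between the hands at one o'clock

# Law of cosines for the tip-to-tip distance.
a = math.sqrt(b**2 + c**2 - 2 * b * c * math.cos(theta))

# Relative angular speed of the two hands, radians per hour.
dtheta_dt = math.pi / 6 - 2 * math.pi

# Differentiated law of cosines (Equation 2 in the solution).
da_dt = (b * c * math.sin(theta) / a) * dtheta_dt
print(round(a, 4), round(da_dt, 2))  # 4.9573 -18.59
```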
https://ftp.aimsciences.org/article/doi/10.3934/dcds.2017217 | Article Contents
# Global solution for the $3D$ quadratic Schrödinger equation of $Q(u, \bar{u})$ type
• We study a class of $3D$ quadratic Schrödinger equations as follows, $(\partial_t - i\Delta) u = Q(u, \bar{u})$. Different from nonlinearities of the $uu$ type and the $\bar{u}\bar{u}$ type, which have been studied by Germain-Masmoudi-Shatah in [2], the interaction of $u$ and $\bar{u}$ is very strong at the low frequency part, e.g., the $1\times 1 \to 0$ type interaction (the size of the input frequency is "1" and the size of the output frequency is "0"). It creates a growth mode for the Fourier transform of the profile of the solution around a small neighborhood of zero. This growth mode will again cause growth of the profile in the medium frequency part due to the $1\times 0 \to 1$ type interaction. The issue of the strong $1\times 1 \to 0$ type interaction makes the global existence problem very delicate.

In this paper, we show that, as long as there are "$\epsilon$" derivatives inside the quadratic term $Q(u, \bar{u})$, there exists a global solution for small initial data. As a byproduct, we also give a simple proof of the almost global existence of the small data solution of $(\partial_t - i\Delta)u = |u|^2 = u\bar{u}$, which was first proved by Ginibre-Hayashi [3]. Instead of using vector fields, we consider this problem purely in Fourier space.
Mathematics Subject Classification: Primary: 35Q55; Secondary: 35Q35.
• T. Cazenave and F. Weissler, Rapidly decaying solutions of the nonlinear Schrödinger equation, Comm. Math. Phys., 147 (1992), 75-100. doi: 10.1007/BF02099529.
• P. Germain, N. Masmoudi and J. Shatah, Global Solutions for $3D$ Quadratic Schrödinger Equations, Int. Math. Res. Notice, 2009 (2009), 414-432. doi: 10.1093/imrn/rnn135.
• J. Ginibre and N. Hayashi, Almost global existence of small solutions to quadratic nonlinear Schrödinger equations in three space dimensions, Math. Z., 219 (1995), 119-140. doi: 10.1007/BF02572354.
• Z. Guo, L. Peng and B. Wang, Decay estimates for a class of wave equations, J. Funct. Anal., 254 (2008), 1642-1660. doi: 10.1016/j.jfa.2007.12.010.
• N. Hayashi and P. Naumkin, On the quadratic nonlinear Schrödinger equation in three space dimensions, Int. Math. Res. Notice, 2000 (2000), 115-132. doi: 10.1155/S1073792800000088.
• M. Ikeda and T. Inui, Small data blow-up of $L^2$ or $H^1$-solution for the semilinear Schrödinger equation without gauge invariance, J. Evol. Equ., 15 (2015), 571-581. doi: 10.1007/s00028-015-0273-7.
• Y. Kawahara, Global existence and asymptotic behavior of small solutions to nonlinear Schrödinger equations in $3D$, Differential Integral Equations, 18 (2005), 169-194.
• S. Klainerman and G. Ponce, Global, small amplitude solutions to nonlinear evolution equations, Comm. Pure Appl. Math., 36 (1983), 133-141. doi: 10.1002/cpa.3160360106.
• W. Strauss, Nonlinear scattering theory at low energy, J. Funct. Anal., 41 (1981), 110-133. doi: 10.1016/0022-1236(81)90063-X.
| 2023-03-26 22:58:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7762132287025452, "perplexity": 1277.4522274836597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00312.warc.gz"} |
https://datascience.stackexchange.com/questions/326/python-vs-r-for-machine-learning/339 | # Python vs R for machine learning
I'm just starting to develop a machine learning application for academic purposes. I'm currently using R and training myself in it. However, in a lot of places, I saw people using Python.
What are people using in academia and industry, and what is the recommendation?
• Well, what type of machine learning (image/video? NLP? financial? astronomy?), which classifiers, what size datasets (Mb? Gb? Tb?), what scale, what latency, on what platform (mobile/single-computer/multicore/cluster/cloud)...? What specific libraries will your application use/need, and have you checked what is available in each language? Are you just building a toy application for your personal learning or does it matter if it ever gets productized? Using open-source or proprietary? Will you be working with other people or existing apps, and what do they use/support? Web frontend/GUI? etc – smci Dec 12 '16 at 22:51
• One observation is that Python is more used by machine learning people working with big datasets while R is more used by traditional "statisticians", e.g. those working with psychology experiments with hundreds of data points. Though that difference might be diminishing. – xji Feb 15 '18 at 8:43
• python all the way man! I do 4 times the things my colleagues do in one day. And you can use python for all kind of programming tasks, not only machine learning. – Francesco Pegoraro Sep 24 '18 at 20:57
Some really important differences to consider when you are choosing between R and Python:
• Machine learning has two phases: model building and prediction. Typically, model building is performed as a batch process, while predictions are done in real time. The model building process is compute-intensive, while the prediction happens in a jiffy. Therefore, the performance of an algorithm in Python or R doesn't really affect the turn-around time of the user. Python 1, R 1.
• Production: The real difference between Python and R comes in being production ready. Python, as such, is a full-fledged programming language and many organisations use it in their production systems. R is statistical programming software favoured by many in academia, and due to the rise of data science, the availability of libraries, and it being open source, industry has started using R. Many of these organisations have their production systems in Java, C++, C#, Python, etc., so ideally they would like to have the prediction system in the same language to reduce latency and maintenance issues. Python 2, R 1.
• Libraries: Both languages have enormous and reliable libraries. R has over 5000 libraries catering to many domains, while Python has some incredible packages like Pandas, NumPy, SciPy, Scikit-Learn and Matplotlib. Python 3, R 2.
• Development: Both languages are interpreted. Many say that Python is easy to learn, almost like reading English (to put it on a lighter note), but R requires more initial studying effort. Also, both of them have good IDEs (Spyder etc. for Python and RStudio for R). Python 4, R 2.
• Speed: R software initially had problems with large computations (say, like nxn matrix multiplications). But, this issue is addressed with the introduction of R by Revolution Analytics. They have re-written computation intensive operations in C which is blazingly fast. Python being a high level language is relatively slow. Python 4, R 3.
• Visualizations: In data science, we frequently plot data to showcase patterns to users. Therefore, visualisation becomes an important criterion in choosing software, and R completely kills Python in this regard, thanks to Hadley Wickham's incredible ggplot2 package. R wins hands down. Python 4, R 4.
• Dealing with Big Data: One of the constraints of R is that it stores data in system memory (RAM), so RAM capacity becomes a constraint when you are handling Big Data. Python does well, but I would say that, as both R and Python have HDFS connectors, leveraging Hadoop infrastructure would give a substantial performance improvement. So, Python 5, R 5.
So, both the languages are equally good. Therefore, depending upon your domain and the place you work, you have to smartly choose the right language. The technology world usually prefers using a single language. Business users (marketing analytics, retail analytics) usually go with statistical programming languages like R, since they frequently do quick prototyping and build visualisations (which is faster done in R than Python).
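The build-once/predict-fast split mentioned in the first point holds in either language. A toy Python illustration, with a hand-rolled least-squares fit standing in for a real (much heavier) training run:

```python
# Model building (batch, compute-heavy) vs. prediction (cheap, real time).
def build_model(xs, ys):
    """Fit y = w*x + b by ordinary least squares -- the 'batch' phase."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

def predict(model, x):
    """The 'real time' phase: a single multiply-add."""
    w, b = model
    return w * x + b

model = build_model([1, 2, 3, 4], [2, 4, 6, 8])  # data lies on y = 2x
print(predict(model, 10))  # 20.0
```

The expensive part, `build_model`, runs once as a batch job; `predict` is a single multiply-add that can be served with negligible latency, which is why the language used for training matters less than the one serving predictions.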
• R hardly beats python in visualization. I think it's rather the reverse; not only does python have ggplot (which I don't use myself, since there are more pythonic options, like seaborn), it can even do interactive visualization in the browser with packages like bokeh. – Emre Jun 12 '14 at 15:57
• Also R has the ability to interactive viz with Shiny. – stanekam Jun 12 '14 at 22:04
• Libraries - I do not agree at all with that. R is by far the richest tool set, and more than that it provides the information in a proper way, partly by inheriting S, partly by one of the largest community of reputed experts. – rapaio Jun 13 '14 at 6:11
• "Speed: R software initially had problems with large computations (say, like nxn matrix multiplications). But, this issue is addressed with the introduction of R by Revolution Analytics. They have re-written computation intensive operations in C which is blazingly fast. Python being a high level language is relatively slow." I'm not an experienced R user, but as far as I know pretty much everything with low-level implementations in R also has a similar low-level implementation in numpy/scipy/pandas/scikit-learn/whatever. Python also has numba and cython. This point should be a tie. – Dougal Apr 3 '15 at 22:05
• For you "Dealing with Big Data" comment, I would add that python is one of the 3 languages supported by apache spark, which has blazing fast speeds. Your comment about R having a C back end is true, but so does python the scikitlearn library is very fast as well. I think your post has nice balance, but I contend that speed is at least a tie, and scalability (i.e. handling big data) is certainly in favor of python. – j.a.gartner May 12 '15 at 16:31
There is nothing like "python is better" or "R is much better than x".
The only fact I know is that in the industry a lot of people stick to Python because that is what they learned at university. The Python community is really active and has a few great frameworks for ML, data mining, etc.
But to be honest, if you get a good C programmer he can do the same as people do in Python or R, and if you get a good Java programmer he can also do (nearly) everything in Java.
So just stick with the language you are comfortable with.
• But what about the libraries? There are advanced R packages (think Ranfom Forest or Caret) that would be utterly impractical to reimplement in a general purpose language such us C or Java – Santiago Cepas Jun 13 '14 at 9:35
• mahout i.e. supports random forest for java – Johnny000 Jun 13 '14 at 10:17
• Yeah maybe, but R doesn't bring the performance at all that you need for proccessing big sets of data and most of the time you have really big datasets in industrial use. – Johnny000 Jun 13 '14 at 10:41
• Yes, a good programmer can do the same in C. BUT a bad programmer can do it in Python as fast as an experienced programmer can do it in C. – Pithikos Jan 13 '15 at 13:38
• I don't think that's always true @Pithikos Given the underlying math formulas, I can usually implement them faster myself with VB/T-SQL faster than I can by wading through the unnecessarily arcane syntax for either R or Python libraries. And in the process, make the resulting code far more scalable. I'm glad these libraries exist but there are downsides built into them; in some situations and particular projects it's better to bypass them. – SQLServerSteve May 8 '17 at 20:15
The programming language 'per se' is only a tool. All languages were designed to make some types of constructs easier to build than others. And knowledge and mastery of a programming language is more important and effective than the features of that language compared to others.
As far as I can see there are two dimensions of this question. The first dimension is the ability to explore, build proof of concepts or models at a fast pace, eventually having at hand enough tools to study what is going on (like statistical tests, graphics, measurement tools, etc). This kind of activity is usually preferred by researchers and data scientists (I always wonder what that means, but I use this term for its loose definition). They tend to rely on well-known and verified instruments, which can be used for proofs or arguments.
The second dimension is the ability to extend, change, improve or even create tools, algorithms or models. In order to achieve that you need a proper programming language. Roughly all of them are the same. If you work for a company, than you depend a lot on the company's infrastructure, internal culture and your choices diminish significantly. Also, when you want to implement an algorithm for production use, you have to trust the implementation. And implementing in another language which you do not master will not help you much.
I tend to favor the R ecosystem for the first type of activity. You have a great community, a huge set of tools, and proof that these tools work as expected. You can also consider Python or Octave (to name a few), which are reliable candidates.
For the second task, you have to think first about what you really want. If you want robust, production-ready tools, then C/C++, Java and C# are great candidates. I consider Python a second-class citizen in this category, together with Scala and friends. I do not want to start a flame war; it's my opinion only. But after more than 17 years as a developer, I tend to prefer a strict contract and my knowledge over the freedom to do whatever you might think of (as happens with a lot of dynamic languages).
Personally, I want to learn as much as possible. I decided that I have to choose the hard way, which means to implement everything from scratch myself. I use R as a model and inspiration. It has great treasures in libraries and a lot of experience distilled. However, R as a programming language is a nightmare for me. So I decided to use Java, and use no additional library. That is only because of my experience, and nothing else.
If you have time, the best thing you can do is to spend some time with all these things. In this way you will earn for yourself the best possible answer, fitted to you. Dijkstra once said that the tools influence the way you think, so it is advisable to know your tools before letting them model how you think. You can read more about that in his famous paper called The Humble Programmer.
I would add to what others have said till now. There is no single answer that one language is better than other.
Having said that, R has a better community for data exploration and learning. It has extensive visualization capabilities. Python, on the other hand, has become better at data handling since the introduction of pandas. Learning and development time is much shorter in Python, as compared to R (R being a low-level language).
I think it ultimately boils down to the eco-system you are in and personal preferences. For more details, you can look at this comparison here.
• "R has a better community for [...] learning" - I guess this highly depends on the type of learning. How much is going on with neural networks (arbitrary feed-forward architectures, CNNs, RNNs) in R? – Martin Thoma Jul 19 '15 at 14:41
• R is not really that "low level" IMO. It's also a dynamic language. – xji Feb 15 '18 at 8:45
There isn't a silver bullet language that can be used to solve each and every data related problem. The language choice depends on the context of the problem, size of data and if you are working at a workplace you have to stick to what they use.
Personally, I use R more often than Python due to its visualization libraries and interactive style. But if I need more performance or structured code, I definitely use Python, since it has some of the best libraries, such as scikit-learn, numpy, scipy, etc. I use both R and Python in my projects interchangeably.
So if you are starting on data science work, I suggest you learn both; it's not difficult, since Python also provides a similar interface to R with pandas.
If you have to deal with much larger datasets, you can't escape the ecosystems built with Java (Hadoop, Pig, HBase, etc.).
There is no "better" language. I have tried both of them and I am comfortable with Python, so I work with Python only. Though I am still learning, I haven't encountered any roadblock with Python so far. The good thing about Python is that the community is very good and you can get a lot of help on the Internet easily. Other than that, I would say go with the language you like, not the one people recommend.
In my experience, the answer depends on the project at hand. For pure research, I prefer R for two reasons: 1) broad variety of libraries and 2) much of the data science literature includes R samples.
If the project requires an interactive interface to be used by laypersons, I've found R to be too constrained. Shiny is a great start, but it's not flexible enough yet. In these cases, I'll start to look at porting my R work over to Python or js.
Most of the aforementioned wonderful R libraries are GPL (e.g. ggplot2, data.table). This prevents you from distributing your software in a proprietary form.
Although many usages of those libraries do not imply distribution of the software (e.g. to train models offline), the GPL may by itself lure away companies from using them. At least in my experience.
In the python realm, on the other hand, most libraries have business-friendly distribution licenses, such as BSD or MIT.
In academia, licensing issues normally are non-issues.
Not much to add to the provided comments. The only thing is maybe this infographic comparing R vs Python for data science purposes: http://blog.datacamp.com/r-or-python-for-data-analysis/
One of the real challenges I faced with R is that different packages are compatible with different versions: quite a lot of R packages are not available for the latest version of R, and R quite often gives errors because a library or package was written for an older version.
• I'm not sure this is a particular problem with R, or that it answers the question of how Python and R differ. – Sean Owen Oct 21 '14 at 14:02
I haven't tried R (well, a bit, but not enough to make a good comparison). However, here are some of Pythons strengths:
• Very intuitive syntax: tuple unpacking, element in a_list, for element in sequence, matrix_a * matrix_b (for matrix multiplication), ...
• Many libraries:
• scipy: Scientific computations; many parts of it are only wrappers for pretty fast Fortran code
• theano > Lasagne > nolearn: Libraries for neural networks - they can be trained on GPU (nvidia, CUDA is required) without any adjustment
• sklearn: General learning algorithms
• Good community:
• IPython notebooks
• Misc:
• 0-indexed arrays ... I made that error all the time with R.
• Established package structures
• Good support for testing your code
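A few of the idioms from the list above, as a runnable stdlib-only sketch (the matrix product is written by hand here; with numpy arrays it would be `matrix_a @ matrix_b`, since `*` is element-wise for arrays):

```python
# tuple unpacking
a, b = 1, 2
a, b = b, a
assert (a, b) == (2, 1)

# membership test: element in a_list
assert 3 in [1, 2, 3]

# iterating directly over a sequence
total = 0
for element in [10, 20, 30]:
    total += element
assert total == 60

# 0-indexed arrays
assert [10, 20, 30][0] == 10

# matrix multiplication written by hand (with numpy: matrix_a @ matrix_b)
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```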
I prefer Python over R because Python is a complete programming language so I can do end to end machine learning tasks such as gather data using a HTTP server written in Python, perform advanced ML tasks and then publish the results online. This can all be done in Python. I actually found R to be harder to learn and the payoffs for learning Python are much greater because it can be used for pretty much any programming task.
• You can do all those 3 things very easily in R – Gaius Aug 25 '17 at 13:04
R: R is the open-source counterpart, which has traditionally been used in academia and research. Because of its open-source nature, the latest techniques get released quickly. There is a lot of documentation available over the internet and it is a very cost-effective option. Python: Having originated as an open-source scripting language, Python usage has grown over time. Today, it sports libraries (numpy, scipy and matplotlib) and functions for almost any statistical operation / model building you may want to do. Since the introduction of pandas, it has become very strong in operations on structured data.
Python Code

# Import other necessary libraries like pandas, numpy...
from sklearn import linear_model
# Identify feature and response variable(s); values must be numeric numpy arrays
x_train = input_variables_values_training_datasets
y_train = target_variables_values_training_datasets
x_test = input_variables_values_test_datasets
# Create linear regression object
linear = linear_model.LinearRegression()
# Train the model using the training sets and check the score
linear.fit(x_train, y_train)
linear.score(x_train, y_train)
# Equation coefficient and intercept
print('Coefficient: \n', linear.coef_)
print('Intercept: \n', linear.intercept_)
# Predict output
predicted = linear.predict(x_test)

R Code

# Identify feature and response variable(s); values must be numeric
x_train <- input_variables_values_training_datasets
y_train <- target_variables_values_training_datasets
x_test <- input_variables_values_test_datasets
x <- cbind(x_train, y_train)
# Train the model using the training sets and check the score
linear <- lm(y_train ~ ., data = x)
summary(linear)
# Predict output
predicted <- predict(linear, x_test)
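The snippets above use placeholder variable names and will not run as-is. For comparison, here is a self-contained, library-free sketch of the same fit/predict workflow, using the closed-form least-squares solution for a single feature (the data values are made up for illustration):

```python
# Ordinary least squares for y = slope * x + intercept, no libraries.
x_train = [1.0, 2.0, 3.0, 4.0]
y_train = [3.0, 5.0, 7.0, 9.0]      # exactly y = 2x + 1
x_test = [5.0, 6.0]

n = len(x_train)
mean_x = sum(x_train) / n
mean_y = sum(y_train) / n
# slope = covariance(x, y) / variance(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(x_train, y_train))
         / sum((x - mean_x) ** 2 for x in x_train))
intercept = mean_y - slope * mean_x

predicted = [slope * x + intercept for x in x_test]
print(slope, intercept, predicted)   # 2.0 1.0 [11.0, 13.0]
```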
I do not think Python has a point-and-click GUI that turns it into SPSS or SAS. Playing around with those is genuinely fun.
I got this image in a LinkedIn post. Whenever I am in doubt about using Python or R, I look at it, and it proves to be very useful.
• So what do you choose? – Serhii Polishchuk Nov 17 '18 at 21:35
http://tex.stackexchange.com/questions/98388/how-to-make-table-with-rotated-table-headers-in-latex

# How to make table with rotated table headers in LaTeX
I saw a table created in PowerPoint and wanted to know how to do it in LaTeX. The table is shown below. The aspects of the table that I'm most interested in are the rotated table headers as well as Knowledge Areas and Process labels outside the table.
Here is what I've come up with so far after seeing Rotated column titles in tabular suggested in the comments (thanks!)
\documentclass{article}
\usepackage{adjustbox}
\usepackage{array}
\usepackage{booktabs}
\usepackage{multirow}
\newcolumntype{R}[2]{%
>{\adjustbox{angle=#1,lap=\width-(#2)}\bgroup}%
l%
<{\egroup}%
}
\newcommand*\rot{\multicolumn{1}{R{90}{1em}}}% no optional argument here, please!
\begin{document}
\begin{table} \centering
\begin{tabular}{clcccccccccc}
& & \multicolumn{10}{c}{Knowledge Areas} \\
& & \rot{Integration} & \rot{Scope} & \rot{Time} & \rot{Cost}
& \rot{Quality} & \rot{Human Resource} & \rot{Communication}
& \rot{Risk} & \rot{Procurement} & \rot{Stakeholder Management} \\
\midrule
\multirow{5}{*}{{Processes}}
& Initiating & * & & & & & & * & & & * \\
& Planning & * & * & * & * & * & * & * & * & * & * \\
& Executing & * & & & & * & * & * & & * & * \\
& Monitoring and Control & * & * & * & * & * & & * & * & * & * \\
& Closing & * & & & & & & * & & * & * \\
\bottomrule
\end{tabular}
\caption{Some caption}
\end{table}
\end{document}
With the result looking like:
I'm not so concerned about the row coloring (sorry, should have mentioned that before). There are just a few things I don't know how to do:
1. How can I make Stakeholder Management stack on top of each other?
2. How can I rotate Processes on the left-hand side? The \rot command I used in the table header didn't work, presumably because it is in the \multirow command.
could you show what you've tried so far? perhaps Rotated Column Titles in Tabular will get you started.... – cmhughes Feb 15 '13 at 18:03
Did you search for suitable similar questions? Row coloring and rotating should already be covered. – Martin Scharrer Feb 15 '13 at 18:03
those labels "Knowledge Areas" and "Process" are not really labels; they are just a further row and a further column, respectively. This can be done with multirow and multicolumn. And this table has a flaw that I would not want to copy: stuff should be readable when the head is tilted to the left, like the Process label. All the stuff in the green part is the wrong way round. – eject Feb 15 '13 at 19:20
@eject I tried to make the Process label on the left side rotated 90º like the table headers, but got an error. Do you know how to make it work? – Jeremy Feb 16 '13 at 0:33
## 3 Answers
Using \rlap makes it easier to position text without additional space. And if you want the label "Processes" outside then use \cmidrule{2-12} and \cmidrule[1pt]{2-12} instead.
\documentclass{article}
\usepackage{array,graphicx}
\usepackage{booktabs}
\usepackage{pifont}
\newcommand*\rot{\rotatebox{90}}
\newcommand*\OK{\ding{51}}
\begin{document}
\begin{table} \centering
\begin{tabular}{@{} cl*{10}c @{}}
& & \multicolumn{10}{c}{Knowledge Areas} \\[2ex]
& & \rot{Integration} & \rot{Scope} & \rot{Time} & \rot{Cost}
& \rot{Quality} & \rot{Human Resource} & \rot{Communication}
& \rot{Risk} & \rot{Procurement} & \rot{\shortstack[l]{Stakeholder\\Management}} \\
\cmidrule{2-12}
& Initiating & \OK & & & & & & \OK & & & \OK \\
& Planning & \OK & \OK & \OK & \OK & \OK & \OK & \OK & \OK & \OK & \OK \\
& Executing & \OK & & & & \OK & \OK & \OK & & \OK & \OK \\
& Monitoring and Control & \OK & \OK & \OK & \OK & \OK & & \OK & \OK & \OK & \OK \\
\rot{\rlap{~Processes}}
& Closing & \OK & & & & & & \OK & & \OK & \OK \\
\cmidrule[1pt]{2-12}
\end{tabular}
\caption{Some caption}
\end{table}
\end{document}
and the same colored:
\documentclass{article}
\usepackage{array,graphicx}
\usepackage{booktabs}
\usepackage{pifont}
\usepackage[table]{xcolor}
\newcommand*\rot{\rotatebox{90}}
\newcommand*\OK{\ding{51}}
\begin{document}
\begin{table} \centering
\begin{tabular}{@{} cr*{10}c }
& & \multicolumn{10}{c}{Knowledge Areas} \\[2ex]
\rowcolor{blue!30} \cellcolor{white}
& & \rot{Integration} & \rot{Scope} & \rot{Time} & \rot{Cost}
& \rot{Quality} & \rot{Human Resource~} & \rot{Communication}
& \rot{Risk} & \rot{Procurement} & \rot{\shortstack[l]{Stakeholder\\Management}} \\
\cmidrule{2-12}
\rowcolor{black!15} \cellcolor{white}
& Initiating &\OK & & & & & &\OK & & &\OK \\
& Planning &\OK &\OK &\OK &\OK &\OK &\OK &\OK &\OK &\OK &\OK \\
\rowcolor{black!15} \cellcolor{white}
& Executing &\OK & & & &\OK &\OK &\OK & &\OK &\OK \\
& Monitoring and Control
&\OK &\OK &\OK &\OK &\OK & &\OK &\OK &\OK &\OK \\
\rowcolor{black!15} \cellcolor{white}
\rot{\rlap{~Processes}}
& Closing &\OK & & & & & &\OK & &\OK &\OK \\
\cmidrule[1pt]{2-12}
\end{tabular}
\caption{Some caption}
\end{table}
\end{document}
You didn't use pstricks? – Marc van Dongen Feb 16 '13 at 10:41
Very nicely done. – Jeremy Feb 17 '13 at 3:14
I’d use \cmidrule[\heavyrulewidth]{2-12} for the bottom line to have the same width as \bottomrule. – Qrrbrbirlbel Feb 17 '13 at 19:26
When I tried to read your table, I found it impossible to read the column headings because of the rotation, which is why I recommend a solution without rotation. All it is is a simple reorganisation of the rows and columns.
It isn't perfect. Perhaps aligning the Processes to the right is better.
You can simplify the table as well because all knowledge areas require planning, so why put it in the table? Just mention it in the caption. Removing the column for planning should make the table less wide, which is always a good thing because it makes it easier to scan the table from left to right and back.
\documentclass{article}
\usepackage{booktabs}
\newcommand*\ON[0]{$\surd$}
\begin{document}
\begin{table}
\begin{center}
\begin{tabular}{@{}lccccc@{}}
& \multicolumn{5}{c}{\textbf{Processes}}
\\ \cmidrule{2-6}
& & & & \textbf{Monitoring}
\\ \textbf{Knowledge Areas}
& \textbf{Initiating}
& \textbf{Planning}
& \textbf{Executing}
& \textbf{\&\ Control}
& \textbf{Closing}
\\ \midrule
\textbf{Integration} & \ON & \ON & \ON & \ON & \ON
\\ \textbf{Scope} & & \ON & & \ON &
\\ \textbf{Time} & & \ON & & \ON &
\\ \textbf{Cost} & & \ON & & \ON &
\\ \textbf{Quality} & & \ON & \ON & \ON &
\\ \textbf{Human Resource} & & \ON & \ON & &
\\ \textbf{Communication} & \ON & \ON & \ON & \ON & \ON
\\ \textbf{Risk} & & \ON & & \ON &
\\ \textbf{Procurement} & & \ON & \ON & \ON & \ON
\\ \textbf{Stakeholder
Management} & \ON & \ON & \ON & \ON & \ON
\\ \bottomrule
\end{tabular}
\caption{Some caption}
\end{center}
\end{table}
\end{document}
+1 for not having to turn my head to read anything. Personally, I would have left out the "Processes" label, as it is self-evident. – Yiannis Lazarides Feb 16 '13 at 9:15
I had made a similar table for my use.
\documentclass[oneside, 10pt, a4paper]{article}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage{booktabs}
\newcommand{\mcrot}[4]{\multicolumn{#1}{#2}{\rlap{\rotatebox{#3}{#4}~}}}
\newcommand*{\twoelementtable}[3][l]%
{%
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}[t]{@{}#1@{}}%
#2\tabularnewline
#3%
\end{tabular}%
}
\begin{document}
\begin{table}[h] \label{tab:activityTracking}
\centering
\caption{Tracking daily activities.}
\begin{tabular}{ *2{ll|} *6c | *6c }
\\
\multicolumn{2}{c}{Date} & \multicolumn{1}{c}{Start} & \multicolumn{1}{c}{Stop}
& \mcrot{1}{l}{60}{Activity 1} & \mcrot{1}{l}{60}{Analysis} & \mcrot{1}{l}{60}{\twoelementtable{No. of}{processes}} & \phantom{p}& \mcrot{1}{l}{60}{Result} & \mcrot{1}{l}{60}{Backup}
& \mcrot{1}{l}{60}{Activity 2} & \mcrot{1}{l}{60}{Analysis} & \mcrot{1}{l}{60}{\twoelementtable{No. of}{processes}} & \phantom{p} & \mcrot{1}{l}{60}{Result} & \mcrot{1}{l}{60}{Backup} \\
\midrule \midrule
\multirow{4}{*}{\rotatebox{90}{\textbf{January}}}
& 11 & 1:30~am & 10:45~am
& x & x & \multicolumn{2}{c}{-} & - & x
& - & x & \multicolumn{2}{c}{x} & x & x \\
& 12 & &
& - & - & \multicolumn{2}{c}{x} & - & -
& - & - & \multicolumn{2}{c}{x} & - & - \\
& 13 & &
& - & - & \multicolumn{2}{c}{x} & - & -
& - & - & \multicolumn{2}{c}{x} & - & - \\
& 14 & &
& - & - & \multicolumn{2}{c}{-} & - & -
& - & - & \multicolumn{2}{c}{x} & - & - \\
\bottomrule
\end{tabular}
\end{table}
\end{document}
https://math.stackexchange.com/questions/851791/covariant-derivative-of-a-vector-field-parallel-vector-field

# Covariant Derivative of a vector field - Parallel Vector Field
I'm having trouble understanding the concept of the covariant derivative of a vector field.
The definition from do Carmo's book states that the covariant derivative $(\frac{Dw}{dt})(t), t \in I$ is defined as the orthogonal projection of $\frac{dw}{dt}$ onto the tangent plane.
Does that mean that if $w_0 \in T_pS$ is a vector in the tangent plane at point $p$, then its covariant derivative $Dw/dt$ is always zero? Since $dw_0/dt$ will be parallel to the normal $N$ at point $p$.
Is that correct?
If so, then for a vector field to be parallel, then every vector must be in the tangent plane.
Is that also correct?
Could you explain without using tensors and Riemannian Manifolds? Thank you
• You mean that $Dw/dt$ lie in the tangent plane, but $dw/dt$ does not necessarily lies in the tangent plane, correct? Can I say that if a vector $w_0$ in this vector field $w$ lies in the tangent plane, that is $w_0 \in T_pS$, then its covariant derivative (at this point $p$) is zero? – cryptow Jun 30 '14 at 1:22
• What I mean is, for each point $p \in S$, i have a vector determined by this vector field $w$. At this point p, $Dw/dt$ is the projection of $dw/dt$ in the tangent plane. My question is: if the vector at $p$, determined by my vector field $w$ lies (the vector) in the tangent plane, does that mean the covariant derivative at this point will be zero? Can I even ask that? Or is it totally out of sense? – cryptow Jun 30 '14 at 1:42
• I think I understand now: $dw/dt$ is the "rate" of change of the vector field $w$ along the tangent vector $\alpha'(0)$ at $p$. And $Dw/dt$ is the projection of this rate to the tangent plane. So, $Dw/dt = 0$ means the vector field doesn't change (locally) along side the direction defined by the tangent vector $y$(for a curve $\alpha$ and $\alpha'(0) = y$). It is also proved that the covariant derivative does not depend on this curve, only on the direction $y$. – cryptow Jun 30 '14 at 3:31
Consider that the surface is the plane $OXY.$ Consider the curve $(t,0,0)$ and the vector field $V(t)=t\partial_x.$ You have that its covariant derivative $\frac{DV}{dt}=\partial_x$ is not zero. Note that, even though $N$ is constant, the length of $V$ changes. This is the reason, in this case, for the non-zero covariant derivative.
Now, when we say that a vector field is parallel, we assume it is tangent to the surface. In any case, if you want an example where the orthogonal projection is zero without the field being tangent, think of the above case of the plane and $V=\partial_x+\partial_z.$
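To make the "orthogonal projection" definition concrete, here is a small numerical sketch (my own illustration, not from the thread): along the equator of the unit sphere, the velocity field $w = \alpha'$ has $dw/dt$ purely normal to the sphere, so its tangential projection, i.e. the covariant derivative, vanishes, and $w$ is parallel along the equator.

```python
import math

def alpha(t):
    # equator of the unit sphere
    return (math.cos(t), math.sin(t), 0.0)

def w(t):
    # w(t) = alpha'(t), a tangent vector at alpha(t)
    return (-math.sin(t), math.cos(t), 0.0)

def dw_dt(t, h=1e-6):
    # ordinary derivative of w in R^3, via central differences
    return tuple((a - b) / (2 * h) for a, b in zip(w(t + h), w(t - h)))

def covariant_derivative(t):
    # project dw/dt onto the tangent plane by removing the normal component
    N = alpha(t)  # outward unit normal of the unit sphere at alpha(t)
    v = dw_dt(t)
    dot = sum(vi * ni for vi, ni in zip(v, N))
    return tuple(vi - dot * ni for vi, ni in zip(v, N))

t = 0.7
print(dw_dt(t))                 # ~ (-cos t, -sin t, 0): parallel to N
print(covariant_derivative(t))  # ~ (0, 0, 0)
```

Here $dw/dt = -\alpha(t)$ is anti-parallel to the normal, so the projection is (numerically) zero even though $dw/dt$ itself is not.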
https://astronomy.stackexchange.com/questions/11009/pulsars-how-do-astronomers-measure-minute-changes-in-period-picoseconds-per-y

Pulsars: How do astronomers measure minute changes in period (~picoseconds per year)?
I've been to some talks that mention how stable the period of a millisecond pulsar is over long periods of time. Recently, it was mentioned that astronomers have calculated the change in period over time to be less than 10^-12 seconds per year for several pulsars. No one I've talked to seems to know any details of this calculation. How do we calculate such small differences in period? How much data must be collected and what are the exposure times for imaging such rapid phenomena? A source/paper would be excellent. I apologize that I don't have a citation for the 10^-12s figure, but the lack of citation is mostly my reason for posting this question.
• I'm not sure of the details of the calculation, either, and would be interested to see some details. I know the basic idea: the pulsar is emitting a lot of electromagnetic radiation, and that constitutes an energy loss. The energy has to have come from something. If it doesn't have an accretion disk or the intense magnetic field of a magnetar, then the most likely source is the conversion of angular momentum. As its angular momentum is lost, the rotation speed goes down. I don't know the conversion mechanism or calculations right now, though. – zibadawa timmy Jun 15 '15 at 23:25
Let us suppose that the pulsar is spinning down at a uniform rate. So it has a period $P$ and a rate of change of period $dP/dt$ that is positive and constant (in practice there are also second, third, fourth etc. derivatives to worry about, but this doesn't change the principle of my answer).
Now let's assume you can measure the period very accurately - say you look at the pulsar today and measure its radio signals for a few hours, do a Fourier transform of the signal and get a nice big peak with a period of 0.1 seconds (for example).
With that period, you can "fold" the data to create an average pulse profile. This pulse profile can then be cross-correlated with subsequent measurements of the pulse to determine an offset between the predicted time of "phase zero" in the profile, calculated using the 0.1 s period, and the actual time of phase zero. This is often called an "O-C" curve or a residuals curve.
If you have the correct period and $dP/dt=0$, then the residuals will scatter randomly around zero with no trend as you perform later and later observations (see plot (a) from Lorimer & Kramer 2005, The Handbook of Pulsar Astronomy). If the initial period was in error, then the residuals would immediately begin to depart from zero on a linear trend.
If however, you have the period correct, but $dP/dt$ is positive, then the residuals curve will be in the form of a parabola (see plot (b)).
If you have second, third etc. derivatives in the period, then this will affect the shape of the residuals curve correspondingly.
The residuals curve is modelled to estimate the size of the derivatives of $P$. The reason that $dP/dt$ can be measured so precisely is that pulsars spin fast and have repeatable pulse shapes, so changes in the phase of the pulse quickly become apparent and can be tracked over many years.
Mathematically it works something like this. The phase $\phi(t)$ is given by $$\phi(t) \simeq \phi_0 + 2\pi \frac{\Delta t}{nP} - \frac{2\pi}{2}\frac{(\Delta t)^2}{nP^2} \frac{dP}{dt} + ...,$$ where $\phi_0$ is an arbitrary phase zero, $\Delta t$ is the time between the first and last observation and $n$ is the integer number of full turns the pulsar has made during that time. If the period is approximately correct, then $n = int(\Delta t/P)$.
The "residual curve" would be given by $$\phi_0 + 2\pi\frac{\Delta t}{nP} - \phi(t) \simeq \frac{2\pi}{2}\frac{(\Delta t)^2}{nP^2} \frac{dP}{dt} + ...,$$
For example, if the period of a $P \sim 0.01$ second pulsar changed by a picosecond in a year, then there would be an accumulated residual of almost $10^{-4}$ seconds after 1 year of observation. Depending on how "sharp" the pulse is, then this shift of about 1% in the phase of the pulse might be detectable.
Perhaps needless to say, but there are a host of small effects and corrections to make in order to get this very high precision timing. You need to know exactly how the Earth is moving in its orbit. The proper motion of the pulsar on the sky also has an effect. These and more can be found in Lorimer and Kramer's book, but there is also a summary here.
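The quadratic shape of the residuals curve is easy to reproduce in a toy simulation (all numbers here are illustrative, not from a real pulsar): generate the true pulse phase for a pulsar with a small constant $dP/dt$, subtract the constant-period model, and check that the "O-C" residuals grow quadratically with the observing baseline.

```python
P0 = 0.1            # assumed period at t = 0, in seconds
Pdot = 1e-15        # assumed spin-down rate, in s/s (illustrative)
nu0 = 1.0 / P0
nudot = -Pdot / P0 ** 2          # frequency derivative implied by Pdot

def true_phase(t):
    # phase in cycles: nu0*t + 0.5*nudot*t^2
    return nu0 * t + 0.5 * nudot * t * t

def model_phase(t):
    # what a constant-period model predicts
    return nu0 * t

ts = [day * 86400.0 for day in range(0, 101, 10)]          # 0..100 days
residuals = [model_phase(t) - true_phase(t) for t in ts]   # in cycles

# Quadratic signature: doubling the baseline quadruples the residual.
print(residuals[-1] / residuals[5])   # ts[-1] = 100 d, ts[5] = 50 d -> ~4.0
```

The parabola in the residuals is exactly what is fitted to estimate $dP/dt$.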
My comment notwithstanding, here's a solution to a homework problem that does the calculation. It doesn't specify the exact mechanism that converts angular momentum (aka rotational energy) into electromagnetic radiation. In this case, it is just an assertion of the problem (partially justified with what I said in my comment: the energy must come from somewhere, and if there doesn't seem to be any sources other than angular momentum, then it must be coming from the angular momentum).
Slightly rephrasing that link's content for the sake of future accessibility:
The pulsar is radiating energy (which we observe as radio waves). Since the total energy in the universe must be conserved, this radio energy must come from somewhere. In this case, it is taken out of the rotational kinetic energy of the pulsar: thus, it gradually slows down. We're interested in a relation between the pulsar luminosity and its rotational period. In general, the kinetic energy of a rotating body is given (using $\omega = 2\pi/P$) by $$E=\frac{1}{2} I \omega^2 = 2\pi^2 I P^{-2}.$$ Since luminosity is the time derivative of energy, we are now in a position to relate the quantities we are interested in: $$L= \frac{\partial E}{\partial t} = -4\pi^2 I P^{-3} \frac{\partial P}{\partial t}.$$ Rearranging this in terms of the quantity we want – the rate of change of the period – gives: $$\frac{\partial P}{\partial t} = \frac{-L P^3}{4\pi^2 I}.$$ If we assume that this neutron star is a homogeneous sphere (not really true, but a simple approximation), then its moment of inertia is just: $$I_{\text{sphere}}=\frac{2}{5}M R^2,$$ and so the final rate of change of period we get is: $$\frac{\partial P}{\partial t} = -\frac{5}{8\pi ^2} \frac{L P^3}{MR^2}.$$
So as long as we have measurements of the quantities on the right-hand side, we can just plug them in to get a value for the rate of change in the period. The hardest to measure is usually the moment of inertia (where the mass, $M$, and radius, $R$, terms come from). These are easier to get from eclipsing binary systems, since then there are nice relations between their orbital paths and masses.
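As a sanity check of the final formula, one can plug in rough Crab-like numbers (the values below are order-of-magnitude illustrations of my own, not measurements):

```python
import math

L = 4.5e31        # assumed spin-down luminosity, in watts
P = 0.033         # assumed period, in seconds
M = 2.8e30        # ~1.4 solar masses, in kg
R = 1.0e4         # assumed radius of 10 km, in metres

# magnitude of dP/dt from the formula above
Pdot = (5.0 / (8.0 * math.pi ** 2)) * L * P ** 3 / (M * R ** 2)
print(Pdot)              # ~4e-13 s/s
print(Pdot * 3.156e7)    # ~1e-5 seconds gained per year
```

The result, a few times $10^{-13}$ s/s, is the right order of magnitude for the Crab pulsar's observed spin-down.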
• Going by this and this, the mechanism comes for a magnetic dipole radiation. Which, if I understand correctly, means that the magnetic field accelerates surface (or nearby) electrons, and applies torque to them. Acceleration means they must radiate away energy, torque means angular momentum is transferred. It's hard to get an explanation that really spells it out in full, it seems. – zibadawa timmy Jun 16 '15 at 4:34
• But we don't have the quantities on the RHS. The mass, radius and moment of intertia are essentially unknowns. The pulsar "luminosity" is also rather hard to estimate in the context of this calculation. – ProfRob Jun 16 '15 at 9:12
• @RobJeffries Thanks for pointing that out. I was fairly sure that most of those quantities were going to give a lot of compounding sources of measurement problems, and so there had to be another way around the issue. Glad to see you posted about such a way. – zibadawa timmy Jun 16 '15 at 10:08
https://physics.stackexchange.com/questions/600542/interaction-of-three-particles-in-phi4-theory

# Interaction of three particles in $\phi^4$ theory
I am trying to compute, without using Feynman diagrams, the scattering amplitude in $$\phi^4$$ theory with three incoming and three outgoing particles. (See this question for an outline of $$\phi^4$$ theory.)
Let's restrict our attention to the case in which three $$\phi$$-particles interact to give a single new $$\phi$$-particle, which then reacts to give a new set of three $$\phi$$-particles. (So the Feynman diagram is a tree graph.)
Initial and final states (with details of normalization omitted; they are not related to my question): $$|i\rangle = \sqrt{8\omega_{\vec p_1}\omega_{\ldots}} a^{\dagger}_{\vec p_1}a^{\dagger}_{\vec p_2}a^{\dagger}_{\vec p_3} |0\rangle\\ |f\rangle = \sqrt\ldots a^{\dagger}_{\vec p_1'}a^{\dagger}_{\vec p_2'}a^{\dagger}_{\vec p_3'} |0\rangle\\$$
I am sure the answer is a constant times $$(-i\lambda)^2 \frac{i}{(p_1+p_2+p_3)^2-m^2},$$ where $$p_1,p_2,p_3$$ are the $$4$$-momenta of initial particles. But I am not sure about whether I have got the correct constant in the front.
Here is how I get the constant.
I first use Wick's theorem and consider contractions of the product $$\phi(x)^4\phi(y)^4.$$ Any of the $$\phi(x)$$ can be contracted with any of the $$\phi(y)$$, giving $$4 \times 4 =16$$ possible contractions. (We do not need to consider other contractions, including ones with more than four $$\phi$$'s contracted, since they are not part of the amplitude we are going to find - we are going to annihilate 3 particles and create three new ones.)
Next, any one of the remaining six uncontracted $$\phi$$'s can be used to annihilate or create any of the particles $$\vec p_j,\vec p_j'$$. This gives $$6!$$ permutations.
The second-order term in the Dyson series has the coefficient $$\frac{(-i)^2}{2!}$$. The interaction term in $$\phi^4$$ has coefficient $$\frac{\lambda}{4!}$$. Putting all these numbers together, I end up with the coefficient $$\frac{16 \times 6!}{2\times 4!\times 4!}=10,$$ which seems quite large.
Is the answer $$10(-i\lambda)^2 \frac{i}{(p_1+p_2+p_3)^2-m^2}$$ correct?
Is the number $$10$$ correct?
Apparently, symmetry factors are not something to worry about, because we are calculating from first principles. (And anyway, the Feynman diagram I mentioned has symmetry factor $$1$$.)
• These 6! contractions that you are talking about do not all correspond to the same diagram. Some of them have your 3 initial particles all going to one vertex and the 3 final particles to the other. Then your propagator is in the s-channel as you have written. But other contractions have 1 or 2 initial & final particles at the same vertex. Then your propagator is in the t-channel. You need to identify these separately. Dec 14, 2020 at 12:01
• @kaylimekay Thank you for the hint. I now work out that $6!$ should be replaced by $2\times 3!\times 3!$, where $3!$ comes from permuting the initial and final 3 particles, and $2$ comes from the ordering - we can either first create $3$ new particles and then annihilate the old ones, or do annihilate first before creating. Is that right? Dec 14, 2020 at 12:34
• Yes, and this precisely cancels the factors in your denominator, so the overall coefficient is 1. Dec 14, 2020 at 12:50
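Following up on the counting settled in these comments, the coefficient can be verified with simple multiplicity bookkeeping (a sketch, not part of the original thread; the variable names are mine):

```python
from math import factorial

contract_choices = 4 * 4                    # one phi(x) paired with one phi(y)
external = 2 * factorial(3) * factorial(3)  # 3 initial legs on one vertex, 3 final on
                                            # the other, times 2 for which vertex is which
numerator = contract_choices * external     # 1152 s-channel Wick terms

denominator = factorial(2) * factorial(4) * factorial(4)  # 1/2! (Dyson), 1/4! per vertex
print(numerator / denominator)   # → 1.0
```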
Assuming you take the interaction term to be $$\lambda/4!$$ then, from the Feynman rules, the vertex for the interaction is indeed $$-i\lambda$$. The lowest-order diagram is indeed three particles coming in, one (intermediate) propagator, and three particles coming out. So you have indeed $$(-i\lambda)^2$$ times the propagator.
• Is $10$ the correct coefficient? (That's what I am asking.) Dec 14, 2020 at 11:26
• It doesn't seem to be $10$, but I do want to know where it goes wrong. Could you explain it a little bit? Thanks. Dec 14, 2020 at 11:32
• Think again about the $6!$. Dec 14, 2020 at 11:40 | 2023-03-26 08:30:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7502489686012268, "perplexity": 331.29190615413484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945440.67/warc/CC-MAIN-20230326075911-20230326105911-00201.warc.gz"} |
https://de3de.com/industrial-hemp-tipap/chain-rule-questions-f6e051 | Check your level of preparation with the practice exercises based on chain rule questions. A sound understanding of the chain rule is essential to ensure exam success. The chain rule is a formula for computing the derivative of the composition of two or more functions: if f and g are functions, it expresses the derivative of their composition in terms of the derivatives of f and g. It is a way of differentiating a function of a function, and a means of connecting the rates of change of dependent variables: if y = f(u) and u = g(x), then dy/dx = (dy/du) × (du/dx).

Worked examples:
• y = sin(x²). This can be viewed as y = sin(u) with u = x². Therefore dy/du = cos(u) and du/dx = 2x, so dy/dx = 2x cos(x²).
• y = t³ with t = 1 + x². By the chain rule, dy/dx = (dy/dt) × (dt/dx) = 3t² × 2x = 3(1 + x²)² × 2x = 6x(1 + x²)².
• y = (2x + 4)³. With u = 2x + 4, dy/du = 3u² and du/dx = 2, so dy/dx = 6(2x + 4)².
• y = sin 5x. With u = 5x, dy/dx = 5 cos 5x.

Reverse chain rule (integration): for example, ∫ 4 sin³x cos x dx = sin⁴x + C, and ∫ sin x cos²x dx = −(1/3) cos³x + C.

In aptitude tests for bank, competitive, and entrance exams, "chain rule" also refers to compound-proportion problems: each element has two figures except one element that has one part missing, and the missing part is found by taking the other given part of the same element as base and comparing it separately with all the other elements. Sample questions: (1) In a school there are some chocolates for 240 adults and 400 children; if the chocolates are taken away by 300 children, then how many adults will be provided with the remaining chocolates? (2) If air is blown into a spherical balloon at the rate of 10 cm³/sec, at what rate is its radius increasing?

(In French the rule is called « règle de dérivation en chaîne » or « règle de la chaîne », a name reportedly almost unknown to students there.)

Related discussion questions from forums: how to apply the chain rule to the second partial derivatives of a multivariable function; nested multivariable chain rule; a confusing limit step in the proof of the chain rule. The motivation for the rule:
not all derivatives can be found through the use of the power, product, and quotient rules alone
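The derivative dy/dx = 6x(1 + x²)² worked out above is easy to sanity-check numerically with a central finite difference (a sketch; the evaluation point and step size are arbitrary choices of mine):

```python
def f(x):
    return (1 + x * x) ** 3          # y = t^3 with t = 1 + x^2

def dydx(x):
    return 6 * x * (1 + x * x) ** 2  # chain rule: 3t^2 * 2x

h, x = 1e-6, 1.3
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference, O(h^2) error
print(abs(numeric - dydx(x)) < 1e-4)        # → True
```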
In this section you can learn and practice Aptitude Questions based on "Chain Rule" and improve your skills in order to face the interview, competitive examination and various entrance test (CAT, GATE, GRE, MAT, Bank Exam, Railway Exam etc.) 1. If the chocolates are taken away by 300 children, then how many adults will be provided with the remaining chocolates? Up Next. 1. Chain Rule Examples. Integral of tan x. Problems on Chain Rule - Quantitative aptitude tutorial with easy tricks, tips, short cuts explaining the concepts. Each test has all the basics questions to advanced questions with answer and explanation for your clear understanding, you can download the test result as pdf for further reference. The chain rule states formally that . Question 3 Use the chain rule and the fact that when $y=af(x)$, $\frac{\mathrm{d}y}{\mathrm{d}x}=af'(x)$ to differentiate the following: Page Navigation. | 2021-04-20 19:25:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7450886368751526, "perplexity": 1029.6189871332053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039490226.78/warc/CC-MAIN-20210420183658-20210420213658-00613.warc.gz"} |
https://www.hackmath.net/en/math-problem/1882 | # Box and whisker plot
Construct a box and whisker plot for the given data.
56, 32, 54, 32, 23, 67, 23, 45, 12, 32, 34, 24, 36, 47, 19, 43
Result
min = 12
Q1 = 23.5
Q2 = 33
Q3 = 46
max = 67
#### Solution:
$\{12, 19, 23, 23, 24, 32, 32, 32, 34, 36, 43, 45, 47, 54, 56, 67\}, \quad \min = 12$
$n = 16,\ i_1 = n/4 = 16/4 = 4,\ Q_1 = (23+24)/2 = \dfrac{47}{2} = 23.5$
$i_2 = n/2 = 16/2 = 8,\ Q_2 = (32+34)/2 = 33$
$i_3 = 0.75 \cdot n = 0.75 \cdot 16 = 12,\ Q_3 = (45+47)/2 = 46$
$\max = 67$
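The convention used in this solution (each quartile as the median of the lower or upper half of the sorted data) takes only a few lines to reproduce; note that library defaults differ, e.g. numpy's linear-interpolation percentile gives Q1 = 23.75 for this data. A sketch:

```python
from statistics import median

data = [56, 32, 54, 32, 23, 67, 23, 45, 12, 32, 34, 24, 36, 47, 19, 43]
s = sorted(data)
n = len(s)

lower, upper = s[:n // 2], s[(n + 1) // 2:]   # halves (n = 16 is even here)
five_number = {
    "min": s[0],
    "Q1": median(lower),
    "Q2": median(s),
    "Q3": median(upper),
    "max": s[-1],
}
print(five_number)   # → {'min': 12, 'Q1': 23.5, 'Q2': 33.0, 'Q3': 46.0, 'max': 67}
```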
## Next similar math problems:
1. Standard deviation
Find standard deviation for dataset (grouped data): Age (years) No. Of Persons 0-10 15 10-20 15 20-30 23 30-40 22 40-50 25 50-60 10 60-70 5 70-80 10
2. Normal Distribution
At one college, GPA's are normally distributed with a mean of 3.1 and a standard deviation of 0.4. What percentage of students at the college have a GPA between 2.7 and 3.5?
3. Lottery
Fernando has two lottery tickets, each from a different lottery. The first lottery has 973 000 tickets, of which 687 000 win; the second has 1 425 000 tickets, of which 1 425 000 win. What is the probability that at least one of Fernando's tickets wins?
4. Normal distribution GPA
The average GPA is 2.78 with a standard deviation of 4.5. What GPA separates the bottom 20% of students?
5. One green
In the container are 45 white and 15 green balls. We randomly select 5 balls. What is the probability that at most one of them is green?
6. Today in school
There are 9 girls and 11 boys in the class today. What is the probability that Suzan will go to the board today?
7. Cards
The player gets 8 cards of 32. What is the probability that it gets a) all 4 aces b) at least 1 ace
8. Class - boys and girls
In the class are 60% boys and 40% girls. Long hair has 10% boys and 80% girls. a) What is the probability that a randomly chosen person has long hair? b) The selected person has long hair. What is the probability that it is a girl?
9. Hearts
5 cards are chosen from a standard deck of 52 playing cards (13 hearts) with replacement. What is the probability of choosing 5 hearts in a row?
10. US GDP
Consider the following dataset, which contains the domestic US gross in millions of the top-grossing movie in each of the last 5 years: 300, 452, 513, 550, 780. I. Find the mean of the dataset. II. Find the squared deviation of the second observation from the mean.
11. 75th percentile (quartille Q3)
Find 75th percentile for 30,42,42,46,46,46,50,50,54
12. Median and modus
Radka made 50 throws with a die. The table shows the frequency of each face: Face: 1 2 3 4 5 6; frequency: 8 7 5 11 6 13. Calculate the mode and median of the numbers rolled.
13. Life expectancy
The life expectancy of batteries has a normal distribution with a mean of 350 minutes and standard deviation of 10 minutes. In what range of minutes will 68% of the batteries last? In what range will approximately 99.7% of the batteries last?
14. Energy
In one region, the September energy consumption levels for single-family homes are found to be normally distributed with a mean of 1050 kWh and a standard deviation of 218 kWh. Find the consumption level separating the bottom 45% from the top 55%.
15. Std-deviation
Calculate standard deviation for file: 63,65,68,69,69,72,75,76,77,79,79,80,82,83,84,88,90
16. Average
If the average (arithmetic mean) of three numbers x, y, z is 50, what is the average of the three numbers (3x + 10), (3y + 10), (3z + 10)?
17. Theorem prove
We want to prove the statement: if the natural number n is divisible by six, then n is divisible by three. From what assumption do we start?
https://www.transtutors.com/questions/it-is-december-31-2011-and-30-year-old-camille-henley-is-reviewing-her-retirement-sa-3457008.htm | # It is December 31, 2011, and 30-year-old Camille Henley is reviewing her retirement savings and...
It is December 31, 2011, and 30-year-old Camille Henley is reviewing her retirement savings and planning for her retirement at age 60. She currently has $55,000 saved (which includes the deposit she just made today) and invests $2,000 per year (at the end of the year) in a retirement account that earns about 9% annually. She has decided that she is comfortable living on $40,000 per year (in today's dollars) and believes she can continue to live on that amount as long as it is adjusted annually for inflation. Inflation is expected to average 2.36% per year for the foreseeable future. Based on research into average life expectancy for women of her background, her plan assumes she lives to age 90. She will withdraw the amount needed for each year during retirement at the beginning of the year. So, on December 31 at age 60, she will make her last deposit of $2,000, and the following day (January 1) she will withdraw her first installment for retirement.

1. If Camille continues on her current plan, will she be able to accomplish it?
2. How would the situation change if Camille were to start placing her $2,000 annual savings into her retirement account on January 1st of each year rather than December 31st? Assume that the investment still pays interest at the end of the year.
3. If Camille resumes making her deposits at the end of the year, how much would she have to save each year to accomplish her objective?
4. Assume that Camille continues with her current plan. What interest rate would she have to earn on her investment to make it work?
5. If Camille wishes to leave a $50,000 perpetuity to her alma mater, starting one year from the year she turns 90, how much extra money would she need to have on December 31st of the year she turns 90? Assume that the investment will earn 9%.
6. Rework the previous question for the case where Camille wants the university payment to grow by 5% per year.
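Question 1 invites a quick simulation. Below is a sketch of one reading of the timeline (30 end-of-year deposits at ages 31 through 60, then 30 beginning-of-year withdrawals at ages 60 through 89; these timing assumptions are mine, and the verdict depends on them):

```python
rate, infl = 0.09, 0.0236

# Accumulation: $55,000 today (age 30), then $2,000 at each year-end to age 60.
balance = 55_000.0
for _ in range(30):
    balance = balance * (1 + rate) + 2_000

# Closed-form cross-check: lump sum plus ordinary annuity.
fv = 55_000 * (1 + rate) ** 30 + 2_000 * ((1 + rate) ** 30 - 1) / rate
assert abs(balance - fv) < 1e-6          # ≈ $1.00 million at age 60

# Decumulation: first withdrawal is $40,000 inflated 30 years, then grows 2.36%/yr.
withdrawal = 40_000 * (1 + infl) ** 30
for _ in range(30):
    balance = (balance - withdrawal) * (1 + rate)
    withdrawal *= 1 + infl

print(f"at 60: {fv:,.0f}; at 90: {balance:,.0f}")  # negative at 90 means the plan falls short
```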
| 2022-01-22 14:57:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20227037370204926, "perplexity": 1827.6955134284738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303864.86/warc/CC-MAIN-20220122134127-20220122164127-00184.warc.gz"} |
https://brilliant.org/discussions/thread/determinantnot-really-but-still-problem/ |
# Determinant (not really, but still) Problem
What is the sum of the non-integral solutions of the equation
$$\left| \begin{array}{ccc} x & 3 & 4 \\ 5 & x & 5 \\ 4 & 2 & x \end{array} \right|=0\hspace{2mm}?$$
Note by Krishna Jha
3 years, 11 months ago
Sort by:
Reduce to a $$2 \times 2$$ matrix: $$\left[ \begin{array}{cc} 10-4x & x^2-10 \\ 12-2x & 8-3x \end{array} \right]= 0$$
Then to a $$1 \times 1$$ matrix: $$(8-3x)(10-4x)-(12-2x)(x^2-10) = 0$$
Which then becomes $$x^3 - 41x + 100 = 0$$
Then after trial and error, x = 4 is found to be one of the solutions (the integral one) · 3 years, 11 months ago
Please see the edit now.. And please could you give a solution other than hit and trial?? Like using synthetic division?? · 3 years, 11 months ago
This solution was created using the Carroll-Dodson Condensation Method. Synthetic division in this case has no meaningful purpose. However, I will see what I can do as far as posting a better solution. · 3 years, 11 months ago
Why not just calculate the $$3\times3$$ determinant directly? The C-D method requires the calculation of five $$2\times2$$ determinants, when only three are necessary: $0 \; = \; x(x^2-10) - 3(5x-20) + 4(10-4x) \; = \; x^3 - 41x + 100$ About the only short-cut to looking for the integer root of this cubic is to remember that any integer root $$n$$ has to be a factor of $$100$$. Testing $$\pm1,\pm2,\pm4,\pm5,\ldots$$ we find $$n=4$$ quite quickly. We could trim the testing a little by observing that $$|x^3-41x| \le 8 + 82 = 90 < 100$$ for $$|x| \le 2$$, and hence $$|n| \ge 4$$. Thus we can start testing with $$n=\pm4$$, and hit paydirt first time. We might not have been so lucky, and it isn't really necessary. Testing on a predetermined finite set (the factors of $$100$$) is not really trial and error. · 3 years, 11 months ago | 2017-08-19 14:54:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8785048127174377, "perplexity": 725.6475334597318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105455.37/warc/CC-MAIN-20170819143637-20170819163637-00035.warc.gz"} |
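Beyond testing factors of $$100$$, the factorization can be completed: $$x^3 - 41x + 100 = (x-4)(x^2+4x-25)$$, so the non-integral roots are $$-2 \pm \sqrt{29}$$ and, by Vieta, their sum is $$-4$$. A quick numerical check (a sketch; numpy is assumed to be available):

```python
import numpy as np

roots = np.roots([1, 0, -41, 100])   # coefficients of x^3 - 41x + 100
non_integral = [r.real for r in roots
                if abs(r.real - round(r.real)) > 1e-6]

print(len(non_integral), sum(non_integral))   # two roots, summing to ≈ -4
```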
https://rexdouglass.com/chapters/probability/probability_density_mass_function.html | # Probability Mass/Density Function (PMF/PDF)
Instance of: function
AKA: Discrete density function; density; Probability Function
Distinct from:
English: A function that takes in a value and returns the relative likelihood that a random variable takes on that value. Probability mass functions refer to discrete distributions, e.g. what is the probability that a six-sided die lands on 5? Probability density functions refer to continuous probability distributions and are usually discussed in terms of ranges or cut points, e.g., what is the probability that a draw from a standard normal distribution is above 2?
Formalization:
In the continuous case, the probability that a random variable $$X$$ takes on a value between $$a$$ and $$b$$ is the integral of its density over that range. $Pr[a \le X \le b]= \int_{a}^{b} f_x(x)dx$
Where $$a$$ and $$b$$ are the range of values, and $$f_x$$ is the density of the random variable.
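To make the formalization concrete: the integral above can be evaluated for a normal variable with the error function, and the die example is a literal lookup table (a sketch; the helper names are mine, not from any cited source):

```python
from math import erf, sqrt, inf

def normal_prob(a, b, mu=0.0, sigma=1.0):
    """Pr[a <= X <= b] for X ~ Normal(mu, sigma^2), via the error function."""
    z = lambda x: (x - mu) / (sigma * sqrt(2))
    return 0.5 * (erf(z(b)) - erf(z(a)))

print(normal_prob(2, inf))   # P(X > 2) for a standard normal, ≈ 0.0228

pmf_die = {face: 1 / 6 for face in range(1, 7)}   # PMF of a fair six-sided die
print(pmf_die[5])            # ≈ 0.1667
```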
Cites: Wikipedia ; Wikidata ; Wolfram
import torch | 2023-03-20 22:35:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8233143091201782, "perplexity": 1057.7711860751303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00485.warc.gz"} |
http://www.reference.com/browse/Column+rank |
# Rank (linear algebra)
The column rank of a matrix A is the maximal number of linearly independent columns of A. Likewise, the row rank is the maximal number of linearly independent rows of A.
Since the column rank and the row rank are always equal, they are simply called the rank of A; for the proofs, see, e.g., Murase (1960), Andrea & Wong (1960), Williams & Cater (1968), Mackiw (1995). It is commonly denoted by either rk(A) or rank A.
The rank of an $m \times n$ matrix is at most $\min(m,n)$. A matrix that has a rank as large as possible is said to have full rank; otherwise, the matrix is rank deficient.
## Alternative definitions
The maximal number of linearly independent columns of the m-by-n matrix A with entries in the field F is equal to the dimension of the column space of A (the column space being the subspace of Fm generated by the columns of A). Since the column rank and the row rank are the same, we can also define the rank of A as the dimension of the row space of A.
If one considers the matrix A as a linear map
f : F^n → F^m
with the rule
f(x) = Ax
then the rank of A can also be defined as the dimension of the image of f (see linear map for a discussion of image and kernel). This definition has the advantage that it can be applied to any linear map without need for a specific matrix. The rank can also be defined as n minus the dimension of the kernel of f; the rank-nullity theorem states that this is the same as the dimension of the image of f.
Another equivalent definition of the rank of a matrix is the order of the greatest non-vanishing minor in the matrix.
## Properties
We assume that A is an m-by-n matrix over the field F and describes a linear map f as above.
• only the zero matrix has rank 0
• $\operatorname{rank} A \leq \min(m, n)$
• f is injective if and only if A has rank n (in this case, we say that A has full column rank).
• f is surjective if and only if A has rank m (in this case, we say that A has full row rank).
• In the case of a square matrix A (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank).
• If B is any n-by-k matrix, then
$\operatorname{rank}(AB) \leq \min(\operatorname{rank} A, \operatorname{rank} B)$
As an example of the "<" case, consider the product
$\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$
Both factors have rank 1, but the product has rank 0.
• If B is an n-by-k matrix with rank n, then
$\operatorname{rank}(AB) = \operatorname{rank}(A)$
• If C is an l-by-m matrix with rank m, then
$\operatorname{rank}(CA) = \operatorname{rank}(A)$
• The rank of A is equal to r if and only if there exists an invertible m-by-m matrix X and an invertible n-by-n matrix Y such that
$XAY = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$
where $I_r$ denotes the r-by-r identity matrix.
• Sylvester’s rank inequality: If A and B are any n-by-n matrices, then
$\operatorname{rank}(A) + \operatorname{rank}(B) - n \leq \operatorname{rank}(AB)$
• Subadditivity: $\operatorname{rank}(A + B) \leq \operatorname{rank}(A) + \operatorname{rank}(B)$ when A and B are of the same dimension. As a consequence, a rank-k matrix can be written as the sum of k rank-1 matrices, but not fewer.
• The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix (this is the "rank theorem" or the "rank-nullity theorem").
• The rank of a matrix equals the rank of its corresponding Gram matrix:
$\operatorname{rank}(A^T A) = \operatorname{rank}(A A^T) = \operatorname{rank}(A)$
This can be shown by proving equality of their null spaces. The null space of the Gram matrix consists of the vectors $x$ for which $A^T A x = 0$. If this condition is fulfilled, then also $0 = x^T A^T A x = \|Ax\|^2$, so $Ax = 0$.
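The Gram-matrix rank identity can also be checked numerically; a small NumPy sketch for a random rank-deficient matrix (illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 5-by-4 matrix of rank 2: a product of a 5x2 and a 2x4 random
# Gaussian matrix has rank 2 with probability one.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

r = np.linalg.matrix_rank(A)
assert r == 2
assert np.linalg.matrix_rank(A.T @ A) == r   # 4x4 Gram matrix
assert np.linalg.matrix_rank(A @ A.T) == r   # 5x5 Gram matrix
```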
## Computation
The easiest way to compute the rank of a matrix A is Gaussian elimination. The row echelon form of A produced by the Gauss algorithm has the same rank as A, and the rank can be read off as the number of non-zero rows.
Consider for example the 4-by-4 matrix
$A = \begin{bmatrix} 2 & 4 & 1 & 3 \\ -1 & -2 & 1 & 0 \\ 0 & 0 & 2 & 2 \\ 3 & 6 & 2 & 5 \end{bmatrix}.$
We see that the second column is twice the first column, and that the fourth column equals the sum of the first and the third. The first and the third columns are linearly independent, so the rank of A is two. This can be confirmed with the Gauss algorithm. It produces the following row echelon form of A:
$\begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
which has two non-zero rows.
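The same elimination can be scripted; below is a minimal rank routine (an illustrative sketch, using exact fractions to sidestep floating-point issues on integer input):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0  # index of the next pivot row
    for c in range(len(M[0])):
        # Find a pivot in column c at or below row r.
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Eliminate column c from every other row.
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[2, 4, 1, 3], [-1, -2, 1, 0], [0, 0, 2, 2], [3, 6, 2, 5]]
print(rank(A))  # 2
```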
When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less expensive choices, such as QR decomposition with pivoting, which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
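NumPy's `matrix_rank` implements this SVD-plus-threshold approach, and the paragraph's point about the zero-criterion is easy to see: the chosen tolerance changes the answer. An illustrative sketch with a matrix of known singular values:

```python
import numpy as np

# Construct a matrix with known singular values: 1, 1e-5, 1e-12, 1e-13.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = U @ np.diag([1.0, 1e-5, 1e-12, 1e-13]) @ V.T

# With the default threshold (near machine precision relative to the
# largest singular value), every constructed value counts as nonzero.
print(np.linalg.matrix_rank(A))            # 4
# A tolerance reflecting the application's noise floor changes the answer.
print(np.linalg.matrix_rank(A, tol=1e-8))  # 2
```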
## Applications
One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. The system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank.
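The solvability test above can be sketched directly in terms of the two ranks (illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])   # coefficient matrix, rank 1
b_bad = np.array([1.0, 3.0])             # not in the column space of A
b_ok = np.array([1.0, 2.0])              # b_ok = A @ [1, 0]

def n_solutions(A, b):
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix
    if rAb > rA:
        return "inconsistent"
    return "unique" if rA == A.shape[1] else "infinitely many"

print(n_solutions(A, b_bad))  # inconsistent
print(n_solutions(A, b_ok))   # infinitely many (one free parameter)
```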
In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.
## Generalization
There are different generalisations of the concept of rank to matrices over arbitrary rings. In those generalisations, column rank, row rank, dimension of column space and dimension of row space of a matrix may be different from the others or may not exist.
There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.
Matrix rank should not be confused with tensor rank. Matrices can be defined as tensors with tensor rank 2. | 2015-05-05 08:22:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8681091070175171, "perplexity": 165.53288236894295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430455283053.76/warc/CC-MAIN-20150501044123-00041-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.askiitians.com/forums/Modern-Physics/two-neutral-particles-are-kept-1-m-apart-suppose_102374.htm | Two neutral particles are kept 1 m apart. Suppose by some mechanism some charge is transferred from one particle to the other and the electric potential energy lost is completely converted into a photon. Calculate the longest and the next smaller wavelength of the photon possible.
3 years ago
Navjyot Kalra
654 Points
Sol. r = 1 m
Energy lost = kq^2/r = kq^2/1
Now, kq^2/1 = hc/λ, or λ = hc/(kq^2)
For maximum λ, q should be minimum; the least transferable charge is q = e = 1.6 * 10^-19 C, so
λ = hc/(ke^2) = (6.63 * 10^-34 * 3 * 10^8)/(9 * 10^9 * (1.6 * 10^-19)^2) ≈ 0.863 * 10^3 m = 863 m
For the next smaller wavelength, q = 2e:
λ = (6.63 * 10^-34 * 3 * 10^8)/(9 * 10^9 * (2 * 1.6 * 10^-19)^2) = 863/4 ≈ 215.8 m
3 years ago
Dhawal Patil
24 Points
As charge is quantized, the least amount of charge that can be transferred is e. According to the question, the energy lost is completely transferred in the form of the photon. (The electric potential energy is negative because the force between the opposite charges is attractive.)
The electric potential energy is $-\frac{1}{4\pi\epsilon_0}\frac{q^2}{r}$ and the energy of the photon is $h\nu$, so
$-\frac{1}{4\pi\epsilon_0}\frac{q^2}{r} = h\nu$.
Substituting $\nu = \frac{c}{\lambda}$ and discarding the minus sign, we get
$\frac{1}{4\pi\epsilon_0}\frac{q^2}{r} = \frac{hc}{\lambda}$, which implies $\lambda = \frac{hc}{\frac{1}{4\pi\epsilon_0}\frac{q^2}{r}}$.
With $q = n e$, the n-th largest wavelength is
$\lambda_n = \frac{(6.63\times10^{-34}\,\mathrm{J\,s})(3.0\times10^{8}\,\mathrm{m\,s^{-1}})(1\,\mathrm{m})}{(9\times10^{9}\,\mathrm{N\,m^2\,C^{-2}})(n \cdot 1.6\times10^{-19}\,\mathrm{C})^2}$,
so $\lambda_{\text{largest}} \approx 863\,\mathrm{m}$ for $n = 1$, and the next smaller wavelength, for $n = 2$, is $\lambda_{\text{largest}}/4 \approx 216\,\mathrm{m}$.
one year ago
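The arithmetic in both answers can be reproduced directly; an illustrative Python check using the same constants as above:

```python
h = 6.63e-34      # Planck constant, J s (value used in the answers)
c = 3.0e8         # speed of light, m/s
k = 9.0e9         # Coulomb constant, N m^2 / C^2
e = 1.6e-19       # elementary charge, C
r = 1.0           # separation, m

def wavelength(n):
    """Photon wavelength when n elementary charges are transferred."""
    return h * c * r / (k * (n * e) ** 2)

print(wavelength(1))  # ~863 m (longest: smallest transferable charge, q = e)
print(wavelength(2))  # ~216 m (next smaller: q = 2e, i.e. one quarter)
```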
• View Details | 2018-03-18 15:37:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4580240845680237, "perplexity": 4342.164326747572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645824.5/warc/CC-MAIN-20180318145821-20180318165821-00254.warc.gz"} |
https://socratic.org/questions/how-do-you-solve-the-system-2-x-4-y-6-3x-2-y-3-13 | How do you solve the system?: 2(x-4) + y=6, 3x-2(y-3)=13
Nov 2, 2015
$x = 5 , y = 4$
Explanation:
Nov 2, 2015
First expand the system and then use simultaneous equations to get $x = 5$ and $y = 4$
Explanation:
Expanding and simplifying the system
$2 \left(x - 4\right) + y = 2 x - 8 + y = 6$
Therefore,
$2 x + y = 14$ $\left(1\right)$
$3 x - 2 \left(y - 3\right) = 3 x - 2 y + 6 = 13$
Therefore,
$3 x - 2 y = 7$ $\left(2\right)$
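With both equations now in standard form, the elimination can be cross-checked numerically; an illustrative NumPy sketch:

```python
import numpy as np

# 2x + y  = 14   (1)
# 3x - 2y = 7    (2)
M = np.array([[2.0, 1.0],
              [3.0, -2.0]])
b = np.array([14.0, 7.0])

x, y = np.linalg.solve(M, b)
print(x, y)  # 5.0 4.0
```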
Simultaneous Equations
(2 $\times$ $\left(1\right)$) $+$ $\left(2\right)$
$4 x + 2 y = 28$ $\left(1\right)$
$3 x - 2 y = 7$ $\left(2\right)$
Adding the two equations cancels the $y$ terms:
$7 x = 35$
$x = 5$
Sub $x$ back into $\left(1\right)$:
$2 \left(5\right) + y = 14$
$y = 4$ | 2019-10-23 18:40:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 26, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42320549488067627, "perplexity": 3151.9528474632525}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987835748.66/warc/CC-MAIN-20191023173708-20191023201208-00363.warc.gz"} |
https://aitopics.org/mlt?cdid=conferences%3A130FBD2D&dimension=pagetext | ### Robust Detection of Adversarial Attacks by Modeling the Intrinsic Properties of Deep Neural Networks
It has been shown that deep neural network (DNN) based classifiers are vulnerable to human-imperceptive adversarial perturbations which can cause DNN classifiers to output wrong predictions with high confidence. We propose an unsupervised learning approach to detect adversarial inputs without any knowledge of attackers. Our approach tries to capture the intrinsic properties of a DNN classifier and uses them to detect adversarial inputs. The intrinsic properties used in this study are the output distributions of the hidden neurons in a DNN classifier presented with natural images. Our approach can be easily applied to any DNN classifiers or combined with other defense strategies to improve robustness. Experimental results show that our approach demonstrates state-of-the-art robustness in defending black-box and gray-box attacks.
### Lower bounds on the robustness to adversarial perturbations
The input-output mappings learned by state-of-the-art neural networks are significantly discontinuous.It is possible to cause a neural network used for image recognition to misclassify its input by applying very specific, hardly perceptible perturbations to the input, called adversarial perturbations. Many hypotheses have been proposed to explain the existence of these peculiar samples as well as several methods to mitigate them, but a proven explanation remains elusive. In this work, we take steps towards a formal characterization of adversarial perturbations by deriving lower bounds on the magnitudes of perturbations necessary to change the classification of neural networks. The proposed bounds can be computed efficiently, requiring time at most linear in the number of parameters and hyperparameters of the model for any given sample. This makes them suitable for use in model selection, when one wishes to find out which of several proposed classifiers is most robust to adversarial perturbations. They may also be used as a basis for developing techniques to increase the robustness of classifiers, since they enjoy the theoretical guarantee that no adversarial perturbation could possibly be any smaller than the quantities provided by the bounds. We experimentally verify the bounds on the MNIST and CIFAR-10 data sets and find no violations. Additionally, the experimental results suggest that very small adversarial perturbations may occur with nonzero probability on natural samples.
### Enhancing ML Robustness Using Physical-World Constraints
Recent advances in Machine Learning (ML) have demonstrated that neural networks can exceed human performance in many tasks. While generalizing well over natural inputs, neural networks are vulnerable to adversarial inputs: inputs that are "similar" to the original input but misclassified by the model. Existing defenses focus on Lp-norm bounded adversaries that perturb ML inputs in the digital space. In the real world, however, attackers can generate adversarial perturbations that have a large Lp-norm in the digital space. Additionally, these defenses also come at a cost to accuracy, making their applicability questionable in the real world. To defend models against such a powerful adversary, we leverage one constraint on its power: the perturbation should not change the human's perception of the physical information; the physical world places some constraints on the space of possible attacks. Two questions follow: how to extract and model these constraints? and how to design a classification paradigm that leverages these constraints to improve the robustness-accuracy trade-off? We observe that an ML model is typically a part of a larger system with access to different input modalities. Utilizing these modalities, we introduce invariants that limit the attacker's action space. We design a hierarchical classification paradigm that enforces these invariants at inference time. As a case study, we implement and evaluate our proposal in the context of the real-world application of road sign classification because of its applicability to autonomous driving. With access to different input modalities, such as LiDAR, camera, and location, we show how to extract invariants and develop a hierarchical classifier. Our results on the KITTI and GTSRB datasets show that we can improve the robustness against physical attacks at minimal harm to accuracy.
### Reject Illegal Inputs with Generative Classifier Derived from Any Discriminative Classifier
Generative classifiers have been shown promising to detect illegal inputs including adversarial examples and out-of-distribution samples. Supervised Deep Infomax (SDIM) is a scalable end-to-end framework to learn generative classifiers. In this paper, we propose a modification of SDIM termed SDIM-logit. Instead of training a generative classifier from scratch, SDIM-logit first takes as input the logits produced by any given discriminative classifier and generates logit representations; then a generative classifier is derived by imposing statistical constraints on the logit representations. SDIM-logit can inherit the performance of the discriminative classifier without loss. SDIM-logit incurs a negligible number of additional parameters, and can be efficiently trained with the base classifiers fixed. We perform classification with rejection, where test samples whose class conditionals are smaller than pre-chosen thresholds are rejected without predictions. Experiments on illegal inputs, including adversarial examples, samples with common corruptions, and out-of-distribution (OOD) samples show that, allowed to reject a portion of test samples, SDIM-logit significantly improves the performance on the remaining test sets. 
| 2020-01-20 04:35:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47456255555152893, "perplexity": 891.1940273688259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250597230.18/warc/CC-MAIN-20200120023523-20200120051523-00231.warc.gz"} |
http://www.physicsforums.com/showpost.php?p=1063490&postcount=3 | Thread: Friedmann equation View Single Post
You may be interested in reading Relativity Demystified by David McMahon. On page 161 the following problem is worked out: Consider the Robertson-Walker metric and suppose we take the Einstein equation with nonzero constant, find the Friedman equations. | 2013-05-20 16:34:17 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8971858024597168, "perplexity": 647.2399669501724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699113041/warc/CC-MAIN-20130516101153-00048-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://eprints.ma.man.ac.uk/811/ | You are here: MIMS > EPrints
MIMS EPrints
## 2007.91: Optimal Scaling of Random Walk Metropolis algorithms with Discontinuous target densities
2007.91: P Neal, G Roberts and J Yuen (2007) Optimal Scaling of Random Walk Metropolis algorithms with Discontinuous target densities.
Full text available as:
PDF - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader (368 Kb)
## Abstract
We consider the optimal scaling problem for high-dimensional Random walk Metropolis (RWM) algorithms where the target distribution has a discontinuous probability density function. All previous analysis has focused upon continuous target densities. The main result is a weak convergence result as the dimensionality $d$ of the target densities converges to $\infty$. In particular, when the proposal variance is scaled by $d^{-2}$, the sequence of stochastic processes formed by the first component of each Markov chain converges to an appropriate Langevin diffusion process. Therefore optimising the efficiency of the RWM algorithm is equivalent to maximising the speed of the limiting diffusion. This leads to an asymptotic optimal acceptance rate of $e^{-2} (=0.1353)$ under quite general conditions. The results have major practical implications for the implementation of RWM algorithms by highlighting the detrimental effect of choosing RWM algorithms over Metropolis-within-Gibbs algorithms.
Item Type: MIMS Preprint. Additional Information: Submitted to Annals of Applied Probability. Keywords: Random walk Metropolis, Markov chain Monte Carlo, optimal scaling. Subjects: MSC 2000 > 60 Probability theory and stochastic processes; MSC 2000 > 65 Numerical analysis. ID: 2007.91. Deposited by: Dr Peter Neal, 29 May 2007. | 2015-04-01 01:12:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8544895648956299, "perplexity": 1281.866607073057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131302428.75/warc/CC-MAIN-20150323172142-00014-ip-10-168-14-71.ec2.internal.warc.gz"}
https://brilliant.org/problems/limits-vs-summation-part-2/ | # Limits vs Summation
Calculus Level 3
$\large \displaystyle\lim_{n\to\infty}\left(\dfrac{n^2b+1}{2n^2a-1}\right)=\ 1$
Suppose for real numbers $$a,b$$, we have the limit above.
What is the value of $$\displaystyle\sum_{n=1}^{\infty}\left(\dfrac{ab}{a^2+b^2}\right)^n \ ?$$
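A numeric sanity check, not part of the original problem (it assumes $a \neq 0$ and, fair warning, effectively reveals the answer): the limit condition forces $b = 2a$, so the summand's base is $2a^2/(5a^2) = 2/5$ and the geometric series converges.

```python
a = 3.0                 # any nonzero value; the limit condition forces b = 2a
b = 2.0 * a
n = 10**6
assert abs((n**2 * b + 1) / (2 * n**2 * a - 1) - 1.0) < 1e-6  # limit holds

r = (a * b) / (a**2 + b**2)   # common ratio of the series: 2/5
total = r / (1.0 - r)         # sum of r**n for n >= 1
print(r, total)               # 0.4 and 2/3
```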
× | 2018-04-20 07:05:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448933601379395, "perplexity": 2660.045053002815}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937161.15/warc/CC-MAIN-20180420061851-20180420081851-00345.warc.gz"} |
http://clay6.com/qa/15021/a-ball-is-dropped-from-a-height-h-on-a-floor-of-coefficient-of-restitution- | # A ball is dropped from a height 'h' on a floor of coefficient of restitution 'e'. The total distance covered by the ball just before second hit is
$\begin{array}{ll} (1)\;h(1-2e^2) & \quad (2)\;h(1+2e^2) \\ (3)\;h(1+e^2) & \quad (4)\;he^2 \end{array}$
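For reference, the distance decomposes as the first drop $h$ plus a rise and fall through the rebound height: the rebound speed is $e$ times the impact speed, so the rebound height is $e^2 h$. A quick numeric check (illustrative):

```python
h, e = 2.0, 0.5                   # arbitrary illustrative values
rebound_height = e**2 * h         # v' = e*v  =>  h' = e^2 * h
total = h + 2 * rebound_height    # down h, then up e^2*h and down e^2*h
assert abs(total - h * (1 + 2 * e**2)) < 1e-12
print(total)  # 3.0 for h = 2, e = 0.5
```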
$(2)\;h(1+2e^2)$ | 2017-08-21 00:59:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7774179577827454, "perplexity": 305.8550996189968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107065.72/warc/CC-MAIN-20170821003037-20170821023037-00417.warc.gz"} |
http://www.ni.com/documentation/en/labview-comms/latest/m-ref/single/ | # single
Version:
Converts input elements to single-precision numbers.
## Syntax
c = single(a)
## a
Numeric scalar or numeric array of any dimension.
## c
Elements of a as single-precision numbers. c is a scalar or an array of the same size as a. If a is a complex number, this function converts the real and imaginary parts of the number separately and then returns a complex number that consists of those parts.
## Special Cases
If an element in a is greater than the maximum positive number, the corresponding element in c is Inf. If an element in a is less than the minimum positive number and greater than the minimum negative number, the corresponding element in c is 0. If an element in a is less than the minimum negative number, the corresponding element in c is -Inf.
A = int8(12.3)
C = single(A)
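The special cases above follow IEEE-754 single-precision conversion, so they can be illustrated outside LabVIEW as well; a sketch in Python/NumPy (the node itself runs in LabVIEW MathScript, not Python):

```python
import numpy as np

print(np.float32(1e39))    # inf:  above the float32 maximum (~3.4e38)
print(np.float32(-1e39))   # -inf: below the float32 minimum (~-3.4e38)
print(np.float32(1e-50))   # 0.0:  magnitude below the smallest subnormal
print(np.float32(12))      # 12.0: in-range values convert exactly
```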
Where This Node Can Run:
Desktop OS: Windows
FPGA: Not supported | 2018-01-18 20:13:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7270734310150146, "perplexity": 758.5600107646184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887600.12/warc/CC-MAIN-20180118190921-20180118210921-00766.warc.gz"} |
http://math.stackexchange.com/questions/32311/finding-x-in-ax-bmod-b-c-when-values-a-b-and-c-are-known | # Finding x in $a^{x} \bmod b = c$ when values a,b, and c are known?
If values $a$, $b$, and $c$ are known, is there an efficient way to find $x$ in the equation: $a^{x} \bmod b = c$?
E.g. finding $x$ in $128^{x}\bmod 209 = 39$.
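For moduli this small, a brute-force scan over exponents settles the question; for large moduli one would use baby-step giant-step or Pollard's rho, and no efficient classical algorithm is known in general (this discrete-logarithm hardness underlies Diffie-Hellman). A sketch:

```python
def dlog(a, b, c):
    """Smallest x >= 0 with a**x % b == c, or None if no such x exists."""
    v = 1 % b
    for x in range(b):      # at most b distinct residues, so b steps suffice
        if v == c:
            return x
        v = (v * a) % b
    return None

print(dlog(2, 11, 9))      # 6, since 2**6 = 64 = 9 (mod 11)
print(dlog(128, 209, 39))  # None
```

Incidentally, the worked instance appears to have no solution: 128 has multiplicative order 90 modulo 209, and 39 is not among the residues it generates (the congruences modulo 11 and 19 are incompatible).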
- | 2016-06-30 01:21:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9415419697761536, "perplexity": 275.63933058681835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00143-ip-10-164-35-72.ec2.internal.warc.gz"} |