Dataset columns: url (string, lengths 6–1.61k), fetch_time (int64, 1,368,856,904B–1,726,893,854B), content_mime_type (string, 3 classes), warc_filename (string, lengths 108–138), warc_record_offset (int32, 9.6k–1.74B), warc_record_length (int32, 664–793k), text (string, lengths 45–1.04M), token_count (int32, 22–711k), char_count (int32, 45–1.04M), metadata (string, lengths 439–443), score (float64, 2.52–5.09), int_score (int64, 3–5), crawl (string, 93 classes), snapshot_type (string, 2 classes), language (string, 1 class), language_score (float64, 0.06–1).

| url | fetch_time | content_mime_type | warc_filename | warc_record_offset | warc_record_length | text | token_count | char_count | metadata | score | int_score | crawl | snapshot_type | language | language_score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://oeis.org/A210335
| 1,701,764,842,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-50/segments/1700679100550.40/warc/CC-MAIN-20231205073336-20231205103336-00416.warc.gz
| 500,925,356
| 4,521
|
A210335 T(n,k) = 1/4 the number of (n+1)X(k+1) 0..3 arrays with every 2X2 subblock having two or three distinct clockwise edge differences
39, 406, 406, 4200, 11877, 4200, 43519, 344659, 344659, 43519, 450903, 10050240, 27838382, 10050240, 450903, 4672093, 293095459, 2266459861, 2266459861, 293095459, 4672093, 48410727, 8549740834, 184424586007, 517186072378 (list; table; graph; refs; listen; history; text; internal format)
OFFSET
1,1
COMMENTS
Table starts:
```
        39           406              4200                  43519
       406         11877            344659               10050240
      4200        344659          27838382             2266459861
     43519      10050240        2266459861           517186072378
    450903     293095459      184424586007        117888703860657
   4672093    8549740834    15012328786175      26888341896724303
  48410727  249408131339  1221987668386411    6132398417786621509
 501617911 7275728429567 99470467438933662 1398664622985458705816
```
LINKS
R. H. Hardin, Table of n, a(n) for n = 1..144
EXAMPLE
Some solutions for n=4 k=3:
```
..1..0..0..0....0..1..1..1....0..1..0..1....3..2..2..1....2..2..2..2
..0..0..1..2....1..2..2..3....3..2..3..2....1..2..3..0....3..2..3..3
..1..0..1..3....0..3..3..2....3..2..2..1....1..3..2..1....1..1..2..1
..1..0..1..1....0..3..0..1....1..1..2..0....1..1..1..0....1..2..1..0
..0..1..0..0....2..1..2..1....1..0..0..2....3..2..1..0....2..3..1..0
```
CROSSREFS
Sequence in context: A238102 A266172 A229601 * A210328 A231993 A357958
Adjacent sequences: A210332 A210333 A210334 * A210336 A210337 A210338
KEYWORD
nonn,tabl
AUTHOR
R. H. Hardin, Mar 20 2012
STATUS
approved
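For a "tabl" sequence like this one, the flat DATA list enumerates the square array by antidiagonals. A minimal sketch that rebuilds the table from the flat list (assuming each antidiagonal is read starting from row 1; since this array is symmetric, the direction within an antidiagonal does not matter):

```python
# Reconstruct a square-array ("tabl") OEIS sequence from its antidiagonal
# reading: antidiagonal d contains T(1,d), T(2,d-1), ..., T(d,1).
def unflatten_antidiagonals(seq):
    table = {}
    it = iter(seq)
    d = 1
    try:
        while True:
            for n in range(1, d + 1):          # walk down antidiagonal d
                table[(n, d + 1 - n)] = next(it)
            d += 1
    except StopIteration:                      # ran out of terms
        pass
    return table

terms = [39, 406, 406, 4200, 11877, 4200, 43519, 344659, 344659, 43519,
         450903, 10050240, 27838382, 10050240, 450903]
T = unflatten_antidiagonals(terms)
print(T[(2, 2)])  # 11877, matching row 2, column 2 of the table above
```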
The OEIS Community | Maintained by The OEIS Foundation Inc.
Last modified December 5 03:25 EST 2023. Contains 367567 sequences. (Running on oeis4.)
| 835
| 2,410
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.875
| 3
|
CC-MAIN-2023-50
|
latest
|
en
| 0.614638
|
http://mathhelpforum.com/advanced-statistics/index189.html
| 1,505,915,002,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2017-39/segments/1505818687281.63/warc/CC-MAIN-20170920123428-20170920143428-00419.warc.gz
| 220,940,724
| 14,719
|
Advanced Statistics and Probability Help Forum
1. ### Hypothesis Testing
• Replies: 3
• Views: 3,646
Oct 24th 2012, 01:25 PM
1. ### The Delta Method
• Replies: 0
• Views: 474
Oct 28th 2009, 10:05 AM
2. ### one last joint distribution
• Replies: 5
• Views: 516
Oct 28th 2009, 08:42 AM
3. ### Confusing riddle problem
• Replies: 3
• Views: 688
Oct 28th 2009, 03:55 AM
4. ### Hypothesis testing: sign test
• Replies: 0
• Views: 589
Oct 28th 2009, 02:36 AM
5. ### Joint cumulative distribution function
• Replies: 9
• Views: 915
Oct 27th 2009, 10:26 PM
6. ### Probability of 2 jointly uniform random variables
• Replies: 2
• Views: 537
Oct 27th 2009, 07:44 PM
7. ### Beta pdf
• Replies: 1
• Views: 498
Oct 27th 2009, 06:35 PM
8. ### SSE simple linear regression
• Replies: 2
• Views: 920
Oct 27th 2009, 05:44 PM
9. ### Random variables
• Replies: 2
• Views: 402
Oct 27th 2009, 12:47 PM
10. ### Probability Density Functions
• Replies: 5
• Views: 744
Oct 27th 2009, 06:45 AM
11. ### Joint PMF Problem
• Replies: 5
• Views: 1,063
Oct 27th 2009, 06:20 AM
12. ### Perfect information
• Replies: 2
• Views: 741
Oct 27th 2009, 03:34 AM
13. ### Mathematical Expectation
• Replies: 2
• Views: 466
Oct 26th 2009, 08:59 PM
14. ### Continuous RV
• Replies: 1
• Views: 1,316
Oct 26th 2009, 05:45 PM
15. ### Dummy Variable LS Coefficients
• Replies: 0
• Views: 603
Oct 26th 2009, 03:03 PM
16. ### Is my professor wrong again?
• Replies: 1
• Views: 560
Oct 26th 2009, 12:30 PM
17. ### Socks in a drawer conundrum
• Replies: 5
• Views: 657
Oct 26th 2009, 11:24 AM
18. ### Uniform Distributions.. plz help
• Replies: 5
• Views: 596
Oct 26th 2009, 03:58 AM
19. ### Method of moments
• Replies: 7
• Views: 761
Oct 26th 2009, 03:45 AM
20. ### Poisson and Queues
• Replies: 0
• Views: 442
Oct 26th 2009, 02:38 AM
21. ### Questions I'm struggling with Part 1
• Replies: 3
• Views: 663
Oct 26th 2009, 02:19 AM
22. ### Problem on Random Variables
• Replies: 1
• Views: 481
Oct 25th 2009, 11:46 PM
23. ### Questions I'm Struggling With Part 4
• Replies: 1
• Views: 636
Oct 25th 2009, 11:00 PM
24. ### Questions I'm Struggling With Part 2
• Replies: 1
• Views: 381
Oct 25th 2009, 10:55 PM
25. ### PDF of discrete RV sum?
• Replies: 3
• Views: 689
Oct 25th 2009, 10:47 PM
26. ### Confidence interval and MLE
• Replies: 0
• Views: 896
Oct 25th 2009, 04:22 PM
27. ### how to prove this
• Replies: 1
• Views: 407
Oct 25th 2009, 02:39 PM
28. ### Markov Processes
• Replies: 1
• Views: 500
Oct 25th 2009, 11:53 AM
| 1,092
| 2,831
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.859375
| 3
|
CC-MAIN-2017-39
|
longest
|
en
| 0.60623
|
http://budshaw.ca/addenda/compactNotes.html
| 1,718,262,584,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-26/segments/1718198861342.74/warc/CC-MAIN-20240613060639-20240613090639-00877.warc.gz
| 4,358,910
| 2,382
|
### Compactk
The sum of each k x k block (including wrap-around) is equal to k²/n of the magic constant.
### Magic Square
A representation of a magic square of order n, where M = n-1, L = n-2.
```X0,0 X0,1 X0,2 X0,3 .. X0,M
X1,0 X1,1 X1,2 X1,3 .. X1,M
X2,0 X2,1 X2,2 X2,3 .. X2,M
X3,0 X3,1 X3,2 X3,3 .. X3,M
..
XL,0 XL,1 XL,2 XL,3 .. XL,M
XM,0 XM,1 XM,2 XM,3 .. XM,M
```
### Compact Magic Square
Consider the above magic square as compact. From the 2x2 subsquare sums we have:
``` X0,0 + X1,0 = X0,2 + X1,2
X0,0 + X1,0 = X0,4 + X1,4
..
X0,0 + X3,0 = X0,2 + X3,2
..
```
In general, for r, c = 0 .. n-1 and i, j = 0,1,2,.. :
``` Xr,c + Xr+1+2i,c = Xr,c+2j + Xr+1+2i,c+2j (1)
```
### A Compact Magic Square is Pandiagonal
Summing alternating numbers from the first and second \diagonals beginning in column 1, we have:
``` (X0,0 + X1,0) + (X2,2 + X3,2) + .. + (XL,L + XM,L)
+ (X1,1 + X2,1) + (X3,3 + X4,3) + .. + (XM,M + X0,M)
```
which by (1) equals:
``` (X0,0 + X1,0) + (X2,0 + X3,0) + .. + (XL,0 + XM,0)
+ (X1,1 + X2,1) + (X3,1 + X4,1) + .. + (XM,1 + X0,1)
```
equals the sum of column 1 plus the sum of column 2.
Therefore, if the sum of the main \diagonal is Σ, the second also sums to Σ. Next summing the second and third \diagonals, we have that the third sums to Σ. Continuing in this manner, we see that the sum of every \diagonal is Σ.
Similarly, starting from the main /diagonal we can show that all the /diagonals sum to Σ. Thus, the square is pandiagonal.
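The two properties proved above can be checked numerically. The order-4 square below is a standard example that is both compact and pandiagonal; it is an illustration supplied here, not taken from this page:

```python
# Check the compact property (every 2x2 block, with wrap-around, has the
# same sum) and the pandiagonal property (every broken diagonal sums to
# the magic constant) on a known order-4 magic square.
n = 4
square = [[1, 8, 10, 15],
          [12, 13, 3, 6],
          [7, 2, 16, 9],
          [14, 11, 5, 4]]
magic = sum(square[0])  # 34

def is_compact(sq):
    # For k = 2 and n = 4, each block should sum to k²/n of the magic
    # constant, i.e. (4/4) * 34 = 34.
    return all(sq[r][c] + sq[r][(c + 1) % n]
               + sq[(r + 1) % n][c] + sq[(r + 1) % n][(c + 1) % n] == magic
               for r in range(n) for c in range(n))

def is_pandiagonal(sq):
    back = all(sum(sq[r][(r + c) % n] for r in range(n)) == magic
               for c in range(n))   # \diagonals
    fwd = all(sum(sq[r][(c - r) % n] for r in range(n)) == magic
              for c in range(n))    # /diagonals
    return back and fwd

print(is_compact(square), is_pandiagonal(square))  # True True
```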
### Each Quarter of a Franklin Magic Square is Pandiagonal
Consider the top left corner of the square. Let G = n/2 -1, F = n/2 -2, etc.:
```X0,0 X0,1 X0,2 X0,3 .. X0,G ..
X1,0 X1,1 X1,2 X1,3 .. X1,G ..
X2,0 X2,1 X2,2 X2,3 .. X2,G ..
X3,0 X3,1 X3,2 X3,3 .. X3,G ..
..
XF,0 XF,1 XF,2 XF,3 .. XF,G ..
XG,0 XG,1 XG,2 XG,3 .. XG,G ..
..
```
From the bent diagonal sums and main diagonal sums of the full square, we have that the main \diagonal of the quarter square has sum Σ/2. From this, proceeding as we did above for the full square, we can show that the sum of every \diagonal is Σ/2. This is sufficient to prove that the quarter square is \pandiagonal.
However, we can also show that the quarter square is fully pandiagonal. For the /diagonals, we can first show that the sum of the main /diagonal is Σ/2. The sum of the main \diagonal plus the sum of the main /diagonal is:
``` (X0,0 + XG,0) + (X2,2 + XE,2) + .. + (XF,F + X1,F)
+ (X1,1 + XF,1) + (X3,3 + XD,3) + .. + (XG,G + X0,G)
```
which by (1) equals:
``` (X0,0 + XG,0) + (X2,0 + XE,0) + .. + (XF,0 + X1,0)
+ (X1,1 + XF,1) + (X3,1 + XD,1) + .. + (XG,1 + X0,1)
```
equals the sum of column 1 plus the sum of column 2 of the quarter. Therefore, the sum of the main /diagonal is Σ/2 and we can proceed as before to show that the sum of every /diagonal is Σ/2. Thus, the quarter square is pandiagonal with magic sum Σ/2.
### A Compact or Complete Magic Square is Doubly Even
A compact magic square is pandiagonal. See above proof. There are no singly even order pandiagonal squares.
There are no odd order n complete squares because n/2 is fractional. There are no odd order compact squares because from (1) we have:
``` X0,0 + X0,0 = X0,2 + X0,2
or
X0,0 = X0,2
```
which is impossible, since the entries of a magic square are all distinct.
| 1,289
| 3,277
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.96875
| 4
|
CC-MAIN-2024-26
|
latest
|
en
| 0.704413
|
https://numberworld.info/3318059
| 1,670,470,406,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-49/segments/1669446711232.54/warc/CC-MAIN-20221208014204-20221208044204-00273.warc.gz
| 454,060,301
| 3,862
|
# Number 3318059
### Properties of number 3318059
Cross Sum:
29
Factorization:
47 * 227 * 311
Divisors:
1, 47, 227, 311, 10669, 14617, 70597, 3318059
Count of divisors:
8
Sum of divisors:
3414528
Prime number?
No
Fibonacci number?
No
Bell Number?
No
Catalan Number?
No
Base 2 (Binary):
1100101010000100101011
Base 3 (Ternary):
20020120112002
Base 4 (Quaternary):
30222010223
Base 5 (Quintal):
1322134214
Base 8 (Octal):
14520453
Base 16 (Hexadecimal):
32a12b
Base 32:
3589b
sin(3318059)
0.054507558366491
cos(3318059)
-0.99851335798823
tan(3318059)
-0.054588712239475
ln(3318059)
15.014890531371
lg(3318059)
6.5208841041557
sqrt(3318059)
1821.5540068853
Square(3318059)
11009515527481
### Number Look Up
Look Up
3318059 (three million, three hundred eighteen thousand, fifty-nine) is a composite number. The cross sum of 3318059 is 29. Factorising 3318059 gives 47 * 227 * 311. The number 3318059 has 8 divisors (1, 47, 227, 311, 10669, 14617, 70597, 3318059) with a sum of 3414528. 3318059 is not a prime number, not a Fibonacci number, not a Bell number, and not a Catalan number. The conversion of 3318059 to base 2 (binary) is 1100101010000100101011; to base 3 (ternary), 20020120112002; to base 4 (quaternary), 30222010223; to base 5 (quintal), 1322134214; to base 8 (octal), 14520453; to base 16 (hexadecimal), 32a12b; and to base 32, 3589b. The sine of 3318059 is 0.054507558366491, the cosine is -0.99851335798823, and the tangent is -0.054588712239475. The square root of 3318059 is 1821.5540068853.
Squaring 3318059 gives 11009515527481. The natural logarithm of 3318059 is 15.014890531371 and the decimal logarithm is 6.5208841041557.
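These properties are easy to recompute; a sketch using only the Python standard library:

```python
# Recompute the properties of 3318059 listed above.
import math

N = 3318059

def divisors(m):
    """All positive divisors of m, found by trial division up to sqrt(m)."""
    ds = set()
    for d in range(1, math.isqrt(m) + 1):
        if m % d == 0:
            ds.update((d, m // d))
    return sorted(ds)

def to_base(m, b, digits="0123456789abcdefghijklmnopqrstuvwxyz"):
    """Positional representation of m in base b (2 <= b <= 36)."""
    out = ""
    while m:
        m, r = divmod(m, b)
        out = digits[r] + out
    return out or "0"

divs = divisors(N)
print(len(divs), sum(divs))             # 8 3414528
print(sum(int(c) for c in str(N)))      # cross sum: 29
print(to_base(N, 16), to_base(N, 32))   # 32a12b 3589b
```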
| 668
| 1,856
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.171875
| 3
|
CC-MAIN-2022-49
|
latest
|
en
| 0.697507
|
http://abstract.ups.edu/aata/section-permutation-definitions.html
| 1,642,864,222,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-05/segments/1642320303864.86/warc/CC-MAIN-20220122134127-20220122164127-00576.warc.gz
| 791,860
| 13,166
|
## Section5.1Definitions and Notation
In general, the permutations of a set $$X$$ form a group $$S_X\text{.}$$ If $$X$$ is a finite set, we can assume $$X=\{ 1, 2, \ldots, n\}\text{.}$$ In this case we write $$S_n$$ instead of $$S_X\text{.}$$ The following theorem says that $$S_n$$ is a group. We call this group the symmetric group on $$n$$ letters.
###### Theorem5.1.
The symmetric group on $$n$$ letters, $$S_n\text{,}$$ is a group with $$n!$$ elements, where the binary operation is the composition of maps.
The identity of $$S_n$$ is just the identity map that sends $$1$$ to $$1\text{,}$$ $$2$$ to $$2\text{,}$$ $$\ldots\text{,}$$ $$n$$ to $$n\text{.}$$ If $$f$$ is a permutation of $$\{ 1, 2, \ldots, n \}\text{,}$$ then $$f^{-1}$$ exists, since $$f$$ is one-to-one and onto; hence, every permutation has an inverse. Composition of maps is associative, which makes the group operation associative. We leave the proof that $$|S_n|= n!$$ as an exercise.
A subgroup of $$S_n$$ is called a permutation group.
###### Example5.2.
Consider the subgroup $$G$$ of $$S_5$$ consisting of the identity permutation $$\identity$$ and the permutations
\begin{align*} \sigma & = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 2 & 3 & 5 & 4 \end{pmatrix}\\ \tau & = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 3 & 2 & 1 & 4 & 5 \end{pmatrix}\\ \mu & = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 3 & 2 & 1 & 5 & 4 \end{pmatrix}\text{.} \end{align*}
The following table tells us how to multiply elements in the permutation group $$G\text{.}$$
\begin{equation*} \begin{array}{c|cccc} \circ & \identity & \sigma & \tau & \mu \\ \hline \identity & \identity & \sigma & \tau & \mu \\ \sigma & \sigma & \identity & \mu & \tau \\ \tau & \tau & \mu & \identity & \sigma \\ \mu & \mu & \tau & \sigma & \identity \end{array} \end{equation*}
###### Remark5.3.
Though it is natural to multiply elements in a group from left to right, functions are composed from right to left. Let $$\sigma$$ and $$\tau$$ be permutations on a set $$X\text{.}$$ To compose $$\sigma$$ and $$\tau$$ as functions, we calculate $$(\sigma \circ \tau)(x) = \sigma( \tau(x))\text{.}$$ That is, we do $$\tau$$ first, then $$\sigma\text{.}$$ There are several ways to approach this inconsistency. We will adopt the convention of multiplying permutations right to left. To compute $$\sigma \tau\text{,}$$ do $$\tau$$ first and then $$\sigma\text{.}$$ That is, by $$\sigma \tau (x)$$ we mean $$\sigma( \tau( x))\text{.}$$ (Another way of solving this problem would be to write functions on the right; that is, instead of writing $$\sigma(x)\text{,}$$ we could write $$(x)\sigma\text{.}$$ We could also multiply permutations left to right to agree with the usual way of multiplying elements in a group. Certainly all of these methods have been used.)
###### Example5.4.
Permutation multiplication is not usually commutative. Let
\begin{align*} \sigma & = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 4 & 1 & 2 & 3 \end{pmatrix}\\ \tau & = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \end{pmatrix}\text{.} \end{align*}
Then
\begin{equation*} \sigma \tau = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 1 & 4 & 3 & 2 \end{pmatrix}\text{,} \end{equation*}
but
\begin{equation*} \tau \sigma = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 3 & 2 & 1 & 4 \end{pmatrix}\text{.} \end{equation*}
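Example 5.4 can be replayed in code. As a sketch, each permutation is represented by the map its two-row notation describes: dict keys are the top row, values the bottom row.

```python
# Example 5.4: permutations as dicts, composed right to left,
# so compose(sigma, tau)(x) = sigma(tau(x)).
sigma = {1: 4, 2: 1, 3: 2, 4: 3}
tau   = {1: 2, 2: 1, 3: 4, 4: 3}

def compose(f, g):
    """Return the permutation f∘g (apply g first, then f)."""
    return {x: f[g[x]] for x in g}

print(compose(sigma, tau))  # {1: 1, 2: 4, 3: 3, 4: 2}
print(compose(tau, sigma))  # {1: 3, 2: 2, 3: 1, 4: 4}
```

The two results differ, confirming that permutation multiplication is not commutative.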
### SubsectionCycle Notation
The notation that we have used to represent permutations up to this point is cumbersome, to say the least. To work effectively with permutation groups, we need a more streamlined method of writing down and manipulating permutations.
A permutation $$\sigma \in S_X$$ is a cycle of length $$k$$ if there exist elements $$a_1, a_2, \ldots, a_k \in X$$ such that
\begin{align*} \sigma( a_1 ) & = a_2\\ \sigma( a_2 ) & = a_3\\ & \vdots\\ \sigma( a_k ) & = a_1 \end{align*}
and $$\sigma( x) = x$$ for all other elements $$x \in X\text{.}$$ We will write $$(a_1, a_2, \ldots, a_k )$$ to denote the cycle $$\sigma\text{.}$$ Cycles are the building blocks of all permutations.
###### Example5.5.
The permutation
\begin{equation*} \sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7\\ 6 & 3 & 5 & 1 & 4 & 2 & 7 \end{pmatrix} = (1\, 6\, 2\, 3\, 5\, 4 ) \end{equation*}
is a cycle of length $$6\text{,}$$ whereas
\begin{equation*} \tau = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 1 & 4 & 2 & 3 & 5 & 6 \end{pmatrix} = (2\, 4\, 3) \end{equation*}
is a cycle of length $$3\text{.}$$
Not every permutation is a cycle. Consider the permutation
\begin{equation*} \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 4 & 1 & 3 & 6 & 5 \end{pmatrix} = (1\, 2\, 4\, 3)(5\, 6)\text{.} \end{equation*}
This permutation actually contains a cycle of length 2 and a cycle of length $$4\text{.}$$
###### Example5.6.
It is very easy to compute products of cycles. Suppose that
\begin{equation*} \sigma = (1\, 3\, 5\, 2 ) \quad \text{and} \quad \tau = (2\, 5\, 6)\text{.} \end{equation*}
If we think of $$\sigma$$ as
\begin{equation*} 1 \mapsto 3, \qquad 3 \mapsto 5, \qquad 5 \mapsto 2, \qquad 2 \mapsto 1\text{,} \end{equation*}
and $$\tau$$ as
\begin{equation*} 2 \mapsto 5, \qquad 5 \mapsto 6, \qquad 6 \mapsto 2\text{,} \end{equation*}
then for $$\sigma \tau$$ remembering that we apply $$\tau$$ first and then $$\sigma\text{,}$$ it must be the case that
\begin{equation*} 1 \mapsto 3, \qquad 3 \mapsto 5, \qquad 5 \mapsto 6, \qquad 6 \mapsto 2 \mapsto 1\text{,} \end{equation*}
or $$\sigma \tau = (1 \, 3 \, 5 \, 6 )\text{.}$$ If $$\mu = (1 \, 6 \, 3 \, 4)\text{,}$$ then $$\sigma \mu = (1\, 6\, 5\, 2)(3\, 4)\text{.}$$
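The same computation can be sketched in code by expanding each cycle into a map on $$\{1, \ldots, 6\}$$ and composing right to left:

```python
# Example 5.6: expand cycles into mappings and compute sigma tau.
UNIVERSE = range(1, 7)

def cycle_to_map(cycle):
    """Map sending each cycle element to the next, fixing everything else."""
    m = {x: x for x in UNIVERSE}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        m[a] = b
    return m

sigma = cycle_to_map([1, 3, 5, 2])
tau = cycle_to_map([2, 5, 6])
sigma_tau = {x: sigma[tau[x]] for x in UNIVERSE}  # tau first, then sigma
print([x for x in UNIVERSE if sigma_tau[x] != x])  # [1, 3, 5, 6]
```

The moved points 1, 3, 5, 6 form the single cycle $$(1 \, 3 \, 5 \, 6)$$ found above.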
Two cycles in $$S_X\text{,}$$ $$\sigma = (a_1, a_2, \ldots, a_k )$$ and $$\tau = (b_1, b_2, \ldots, b_l )\text{,}$$ are disjoint if $$a_i \neq b_j$$ for all $$i$$ and $$j\text{.}$$
###### Example5.7.
The cycles $$(1\, 3\, 5)$$ and $$(2\, 7 )$$ are disjoint; however, the cycles $$(1\, 3\, 5)$$ and $$(3\, 4\, 7 )$$ are not. Calculating their products, we find that
\begin{align*} (1\, 3\, 5)(2\, 7 ) & = (1\, 3\, 5)(2\, 7 )\\ (1\, 3\, 5)(3\, 4\, 7 ) & = (1\, 3\, 4\, 7\, 5)\text{.} \end{align*}
The product of two cycles that are not disjoint may reduce to something less complicated; the product of disjoint cycles cannot be simplified.
###### Proposition5.8.
Let $$\sigma$$ and $$\tau$$ be two disjoint cycles in $$S_X\text{.}$$ Then $$\sigma \tau = \tau \sigma\text{.}$$
Let $$\sigma = (a_1, a_2, \ldots, a_k )$$ and $$\tau = (b_1, b_2, \ldots, b_l )\text{.}$$ We must show that $$\sigma \tau(x) = \tau \sigma(x)$$ for all $$x \in X\text{.}$$ If $$x$$ is neither in $$\{ a_1, a_2, \ldots, a_k \}$$ nor $$\{b_1, b_2, \ldots, b_l \}\text{,}$$ then both $$\sigma$$ and $$\tau$$ fix $$x\text{.}$$ That is, $$\sigma(x)=x$$ and $$\tau(x)=x\text{.}$$ Hence,
\begin{equation*} \sigma \tau(x) = \sigma( \tau(x)) = \sigma(x) = x = \tau(x) = \tau( \sigma(x)) = \tau \sigma(x)\text{.} \end{equation*}
Do not forget that we are multiplying permutations right to left, which is the opposite of the order in which we usually multiply group elements. Now suppose that $$x \in \{ a_1, a_2, \ldots, a_k \}\text{.}$$ Then $$\sigma( a_i ) = a_{(i \bmod k) + 1}\text{;}$$ that is,
\begin{align*} a_1 & \mapsto a_2\\ a_2 & \mapsto a_3\\ & \vdots\\ a_{k-1} & \mapsto a_k\\ a_k & \mapsto a_1\text{.} \end{align*}
However, $$\tau(a_i) = a_i$$ since $$\sigma$$ and $$\tau$$ are disjoint. Therefore,
\begin{align*} \sigma \tau(a_i) & = \sigma( \tau(a_i))\\ & = \sigma(a_i)\\ & = a_{(i \bmod k)+1}\\ & = \tau( a_{(i \bmod k)+1} )\\ & = \tau( \sigma(a_i) )\\ & = \tau \sigma(a_i)\text{.} \end{align*}
Similarly, if $$x \in \{b_1, b_2, \ldots, b_l \}\text{,}$$ then $$\sigma$$ and $$\tau$$ also commute.
###### Theorem5.9.
Every permutation in $$S_n$$ can be written as the product of disjoint cycles.
We can assume that $$X = \{ 1, 2, \ldots, n \}\text{.}$$ If $$\sigma \in S_n$$ and we define $$X_1$$ to be $$\{ \sigma(1), \sigma^2(1), \ldots \}\text{,}$$ then the set $$X_1$$ is finite since $$X$$ is finite. Now let $$i$$ be the first integer in $$X$$ that is not in $$X_1$$ and define $$X_2$$ by $$\{ \sigma(i), \sigma^2(i), \ldots \}\text{.}$$ Again, $$X_2$$ is a finite set. Continuing in this manner, we can define finite disjoint sets $$X_3, X_4, \ldots\text{.}$$ Since $$X$$ is a finite set, we are guaranteed that this process will end and there will be only a finite number of these sets, say $$r\text{.}$$ If $$\sigma_i$$ is the cycle defined by
\begin{equation*} \sigma_i( x ) = \begin{cases} \sigma( x ) & x \in X_i \\ x & x \notin X_i \end{cases}\text{,} \end{equation*}
then $$\sigma = \sigma_1 \sigma_2 \cdots \sigma_r\text{.}$$ Since the sets $$X_1, X_2, \ldots, X_r$$ are disjoint, the cycles $$\sigma_1, \sigma_2, \ldots, \sigma_r$$ must also be disjoint.
###### Example5.10.
Let
\begin{align*} \sigma & = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 6 & 4 & 3 & 1 & 5 & 2 \end{pmatrix}\\ \tau & = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 3 & 2 & 1 & 5 & 6 & 4 \end{pmatrix}\text{.} \end{align*}
Using cycle notation, we can write
\begin{align*} \sigma & = (1 \, 6 \, 2 \, 4)\\ \tau & = (1 \, 3)(4 \, 5 \,6)\\ \sigma \tau & = (1 \, 3\, 6) ( 2\, 4\, 5)\\ \tau \sigma & = (1 \, 4\, 3 )(2 \, 5 \, 6)\text{.} \end{align*}
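The decomposition into disjoint cycles is easy to automate; a sketch that reproduces Example 5.10:

```python
# Decompose a permutation (given as a dict) into disjoint cycles,
# following the X_1, X_2, ... construction above.
def to_cycles(perm):
    seen, cycles = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:          # follow the orbit of `start`
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        if len(cycle) > 1:            # omit fixed points
            cycles.append(tuple(cycle))
    return cycles

sigma = {1: 6, 2: 4, 3: 3, 4: 1, 5: 5, 6: 2}
tau   = {1: 3, 2: 2, 3: 1, 4: 5, 5: 6, 6: 4}
sigma_tau = {x: sigma[tau[x]] for x in sigma}   # tau first, then sigma
print(to_cycles(sigma))      # [(1, 6, 2, 4)]
print(to_cycles(tau))        # [(1, 3), (4, 5, 6)]
print(to_cycles(sigma_tau))  # [(1, 3, 6), (2, 4, 5)]
```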
###### Remark5.11.
From this point forward we will find it convenient to use cycle notation to represent permutations. When using cycle notation, we often denote the identity permutation by $$(1)\text{.}$$
### SubsectionTranspositions
The simplest permutation is a cycle of length $$2\text{.}$$ Such cycles are called transpositions. Since
\begin{equation*} (a_1, a_2, \ldots, a_n ) = (a_1, a_n ) (a_1, a_{n-1} ) \cdots ( a_1, a_3 ) (a_1, a_2 )\text{,} \end{equation*}
any cycle can be written as the product of transpositions, leading to the following proposition.
###### Proposition5.12.
Any permutation of a finite set containing at least two elements can be written as the product of transpositions.
###### Example5.13.
Consider the permutation
\begin{equation*} ( 1 \, 6 ) (2 \, 5\, 3) = (1 \, 6 )( 2 \, 3 )( 2 \, 5 ) = (1 \, 6 )( 4 \, 5 )(2 \, 3 )( 4 \, 5 )(2 \, 5 )\text{.} \end{equation*}
As we can see, there is no unique way to represent a permutation as the product of transpositions. For instance, we can write the identity permutation as $$(1 \, 2 )(1 \, 2 )\text{,}$$ as $$(1 \, 3 )(2 \, 4 )(1 \, 3 )( 2 \, 4 )\text{,}$$ and in many other ways. However, as it turns out, no permutation can be written as the product of both an even number of transpositions and an odd number of transpositions. For instance, we could represent the permutation $$(1 \, 6)$$ by
\begin{equation*} (2 \, 3 )(1 \, 6)( 2 \, 3) \end{equation*}
or by
\begin{equation*} (3 \, 5) (1 \, 6) (1 \, 3) (1 \, 6) (1 \, 3) (3 \, 5) (5 \, 6)\text{,} \end{equation*}
but $$(1 \, 6)$$ will always be the product of an odd number of transpositions.
###### Lemma5.14.
If the identity is written as the product of $$r$$ transpositions, $$\identity = \tau_1 \tau_2 \cdots \tau_r\text{,}$$ then $$r$$ is an even number.
We will employ induction on $$r\text{.}$$ A transposition cannot be the identity; hence, $$r \gt 1\text{.}$$ If $$r=2\text{,}$$ then we are done. Suppose that $$r \gt 2\text{.}$$ In this case the product of the last two transpositions, $$\tau_{r-1} \tau_r\text{,}$$ must be one of the following cases:
\begin{align*} (a, b)(a, b) & = \identity\\ (b, c)(a, b) & = (a, c)(b, c)\\ (c, d)(a, b) & = (a, b)(c, d)\\ (a, c)(a, b) & = (a, b)(b, c)\text{,} \end{align*}
where $$a\text{,}$$ $$b\text{,}$$ $$c\text{,}$$ and $$d$$ are distinct.
The first equation simply says that a transposition is its own inverse. If this case occurs, delete $$\tau_{r-1} \tau_r$$ from the product to obtain
\begin{equation*} \identity = \tau_1 \tau_2 \cdots \tau_{r - 3} \tau_{r - 2}\text{.} \end{equation*}
By induction $$r - 2$$ is even; hence, $$r$$ must be even.
In each of the other three cases, we can replace $$\tau_{r - 1} \tau_r$$ with the right-hand side of the corresponding equation to obtain a new product of $$r$$ transpositions for the identity. In this new product the last occurrence of $$a$$ will be in the next-to-the-last transposition. We can continue this process with $$\tau_{r - 2} \tau_{r - 1}$$ to obtain either a product of $$r - 2$$ transpositions or a new product of $$r$$ transpositions where the last occurrence of $$a$$ is in $$\tau_{r - 2}\text{.}$$ If the identity is the product of $$r - 2$$ transpositions, then again we are done, by our induction hypothesis; otherwise, we will repeat the procedure with $$\tau_{r - 3} \tau_{r - 2}\text{.}$$
At some point either we will have two adjacent, identical transpositions canceling each other out or $$a$$ will be shuffled so that it will appear only in the first transposition. However, the latter case cannot occur, because the identity would not fix $$a$$ in this instance. Therefore, the identity permutation must be the product of $$r-2$$ transpositions and, again by our induction hypothesis, we are done.
###### Theorem5.15.
If a permutation $$\sigma$$ can be expressed as the product of an even number of transpositions, then any other product of transpositions equaling $$\sigma$$ must also contain an even number of transpositions; similarly, if $$\sigma$$ can be expressed as the product of an odd number of transpositions, any other such product must also contain an odd number of transpositions.
Suppose that
\begin{equation*} \sigma = \sigma_1 \sigma_2 \cdots \sigma_m = \tau_1 \tau_2 \cdots \tau_n\text{,} \end{equation*}
where $$m$$ is even. We must show that $$n$$ is also an even number. The inverse of $$\sigma$$ is $$\sigma_m \cdots \sigma_1\text{.}$$ Since
\begin{equation*} \identity = \sigma \sigma_m \cdots \sigma_1 = \tau_1 \cdots \tau_n \sigma_m \cdots \sigma_1\text{,} \end{equation*}
$$n$$ must be even by Lemma 5.14. The proof for the case in which $$\sigma$$ can be expressed as an odd number of transpositions is left as an exercise.
In light of Theorem 5.15, we define a permutation to be even if it can be expressed as an even number of transpositions and odd if it can be expressed as an odd number of transpositions.
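Since a cycle of length $$k$$ factors into $$k - 1$$ transpositions, the parity of a permutation given in disjoint cycle form can be computed directly (a sketch):

```python
# Parity of a permutation from its disjoint cycle form: each cycle
# (a1, ..., ak) contributes k - 1 transpositions.
def parity(cycles):
    """cycles: list of tuples in disjoint-cycle form; returns 'even' or 'odd'."""
    transpositions = sum(len(c) - 1 for c in cycles)
    return "even" if transpositions % 2 == 0 else "odd"

print(parity([(1, 6)]))                # odd: a single transposition
print(parity([(1, 2, 4, 3), (5, 6)]))  # even: 3 + 1 transpositions
```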
### SubsectionThe Alternating Groups
One of the most important subgroups of $$S_n$$ is the set of all even permutations, $$A_n\text{.}$$ The group $$A_n$$ is called the alternating group on $$n$$ letters.
###### Theorem5.16.
The set $$A_n$$ is a subgroup of $$S_n\text{.}$$
Since the product of two even permutations must also be an even permutation, $$A_n$$ is closed. The identity is an even permutation and therefore is in $$A_n\text{.}$$ If $$\sigma$$ is an even permutation, then
\begin{equation*} \sigma = \sigma_1 \sigma_2 \cdots \sigma_r\text{,} \end{equation*}
where $$\sigma_i$$ is a transposition and $$r$$ is even. Since the inverse of any transposition is itself,
\begin{equation*} \sigma^{-1} = \sigma_r \sigma_{r-1} \cdots \sigma_1 \end{equation*}
is also in $$A_n\text{.}$$
###### Proposition5.17.
The number of even permutations in $$S_n\text{,}$$ $$n \geq 2\text{,}$$ is equal to the number of odd permutations; hence, the order of $$A_n$$ is $$n!/2\text{.}$$
Let $$A_n$$ be the set of even permutations in $$S_n$$ and $$B_n$$ be the set of odd permutations. If we can show that there is a bijection between these sets, they must contain the same number of elements. Fix a transposition $$\sigma$$ in $$S_n\text{.}$$ Since $$n \geq 2\text{,}$$ such a $$\sigma$$ exists. Define
\begin{equation*} \lambda_{\sigma} : A_n \rightarrow B_n \end{equation*}
by
\begin{equation*} \lambda_{\sigma} ( \tau ) = \sigma \tau \text{.} \end{equation*}
Suppose that $$\lambda_{\sigma} ( \tau ) = \lambda_{\sigma} ( \mu )\text{.}$$ Then $$\sigma \tau = \sigma \mu$$ and so
\begin{equation*} \tau = \sigma^{-1} \sigma \tau = \sigma^{-1} \sigma \mu = \mu\text{.} \end{equation*}
Therefore, $$\lambda_{\sigma}$$ is one-to-one. We will leave the proof that $$\lambda_{\sigma}$$ is surjective to the reader.
###### Example5.18.
The group $$A_4$$ is the subgroup of $$S_4$$ consisting of even permutations. There are twelve elements in $$A_4\text{:}$$
\begin{align*} & (1) && (1 \, 2)(3 \, 4) && (1 \, 3)(2 \, 4) && (1 \, 4)(2 \, 3)\\ & (1 \, 2 \, 3) && (1 \, 3 \, 2) && (1 \, 2 \, 4) && (1 \, 4 \, 2)\\ & (1 \, 3 \, 4) && (1 \, 4 \, 3) && (2 \, 3 \, 4) && (2 \, 4 \, 3)\text{.} \end{align*}
One of the end-of-chapter exercises will be to write down all the subgroups of $$A_4\text{.}$$ You will find that there is no subgroup of order 6. Does this surprise you?
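As a quick check, we can enumerate $$A_4$$ by filtering the even permutations of $$S_4\text{,}$$ using the standard fact that a permutation in one-line notation is even exactly when its inversion count is even (a sketch):

```python
# Enumerate A_4 as the even permutations of {1, 2, 3, 4}.
from itertools import permutations

def is_even(p):
    """Parity via inversion count: p is even iff it has evenly many inversions."""
    inversions = sum(p[i] > p[j]
                     for i in range(len(p)) for j in range(i + 1, len(p)))
    return inversions % 2 == 0

A4 = [p for p in permutations((1, 2, 3, 4)) if is_even(p)]
print(len(A4))  # 12, i.e. 4!/2
```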
### SubsectionHistorical Note
Lagrange first thought of permutations as functions from a set to itself, but it was Cauchy who developed the basic theorems and notation for permutations. He was the first to use cycle notation. Augustin-Louis Cauchy (1789–1857) was born in Paris at the height of the French Revolution. His family soon left Paris for the village of Arcueil to escape the Reign of Terror. One of the family's neighbors there was Pierre-Simon Laplace (1749–1827), who encouraged him to seek a career in mathematics. Cauchy began his career as a mathematician by solving a problem in geometry given to him by Lagrange. Cauchy wrote over 800 papers on such diverse topics as differential equations, finite groups, applied mathematics, and complex analysis. He was one of the mathematicians responsible for making calculus rigorous. Perhaps more theorems and concepts in mathematics have the name Cauchy attached to them than that of any other mathematician.
| 5,811
| 16,066
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.65625
| 5
|
CC-MAIN-2022-05
|
latest
|
en
| 0.716494
|
https://normgoldblatt.com/how-to-solve-sum-of-cubes-92
| 1,675,686,058,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00554.warc.gz
| 443,324,346
| 17,111
|
# How to solve sum of cubes
When solving algebraic equations of various sorts, this factorization formula comes in handy. It is also simple to remember and can be applied in a couple of steps.
## Sums and Differences of Cubes, & Perfect Squares
The formula for the sum of the cubes of the first n natural numbers is S = [n²(n + 1)²]/4, where n is the count of natural numbers that we take. For example, the sum of the cubes of the first 7 natural numbers is S = [7² × 8²]/4 = 784.
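A short sketch confirming the closed form against a direct summation:

```python
# Check S = n^2 (n + 1)^2 / 4 against a direct sum of cubes.
def sum_of_cubes(n):
    # n*(n+1) is always even, so the division by 4 is exact.
    return n * n * (n + 1) * (n + 1) // 4

for n in range(1, 20):
    assert sum_of_cubes(n) == sum(k ** 3 for k in range(1, n + 1))
print(sum_of_cubes(7))  # 784
```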
## Sum and Difference of Cubes
2^(3^3 + 5^3) = 2^152 = (2^19)^(2^3), since 3^3 + 5^3 = 27 + 125 = 152 = 19 × 8. Note that rewriting an exponent this way is not a sum-of-cubes factorization; to evaluate a sum of cubes numerically, simply cube each term and add the results.
## Factoring the Sum of Cubes
The sum of two cubes factors as a^3 + b^3 = (a + b)(a^2 - ab + b^2): the sum of the cube roots times the sum of their squares minus their product. To factor a difference of cubes, you just have to change two little signs: a^3 - b^3 = (a - b)(a^2 + ab + b^2).
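Both cube identities can be spot-checked over a range of integers (a sketch):

```python
# Numerically spot-check the factorizations
#   a^3 + b^3 = (a + b)(a^2 - a*b + b^2)
#   a^3 - b^3 = (a - b)(a^2 + a*b + b^2)
for a in range(-5, 6):
    for b in range(-5, 6):
        assert a**3 + b**3 == (a + b) * (a*a - a*b + b*b)
        assert a**3 - b**3 == (a - b) * (a*a + a*b + b*b)
print("identities hold")
```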
## A lot of happy people
It helps a lot for students who need help. The picture part still needs to be more flexible with the angles and rotation of the camera, but if the picture is taken at the correct angle it's flawless. I do multiple people's homework; usually I can do my own homework without a calculator, but since I now have more, I need a fast way to get it done.
Michael Coleman
I wish I had found this app back in my senior high school days. Keep up the great work you guys are doing. Very interesting and supportive. Thank you.
Robert French
Awesome and easy to use, as it provides basic solutions to math problems just by taking a picture of the problem. It works really well! It gives the correct answer every time and even shows you how to do it.
Don Morrissey
| 503
| 2,099
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.875
| 4
|
CC-MAIN-2023-06
|
latest
|
en
| 0.944445
|
https://onelib.org/engineering-statistics-online-course
| 1,638,816,302,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-49/segments/1637964363309.86/warc/CC-MAIN-20211206163944-20211206193944-00207.warc.gz
| 492,139,476
| 9,138
|
OneLIB.org
# Engineering Statistics Online Course
## Free Online Course: Fundamentals of Engineering Statistical ...
Freemium www.classcentral.com
This course provides fundamental concepts in probability and statistical inference, with application to engineering contexts.
## Probability and Statistics Online Courses | Coursera
Freemium www.coursera.org
Build goal-oriented learning strategies with comprehensive skills data. Learn more. Arizona State University. Gain software engineering skills in ...
## Online Statistics Courses | Harvard University
Popular online-learning.harvard.edu
Browse the latest online statistics courses from Harvard University, including "Principles, Statistical and Computational Tools for Reproducible Data Science" ...
## Learn Statistics with Online Courses and Lessons | edX
New www.edx.org
Learn statistics with free online courses and classes to build your skills and advance your career. Gain an understanding of standard deviation, probability ...
## Engineering Statistics • Summer 2018 • ELO Online Courses • Iowa ...
Popular courses.elo.iastate.edu
Course Description. Statistics for engineering problem solving. Principles of engineering data collection; descriptive statistics; elementary probability ...
## Statistical Methods Course | Engineering Courses | Purdue Online ...
Popular engineering.purdue.edu
Learning Objective: Provide the students with the fundamentals of probability, statistical methods, and data analysis. Description: Descriptive statistics; elementary ...
## Probability and Statistics in Engineering | Civil and Environmental ...
Hot ocw.mit.edu
Course Description. This class covers quantitative analysis of uncertainty and risk for engineering applications. Fundamentals of probability, random processes, ...
## Top Statistics Courses Online - Updated [June 2021] | Udemy
Hot www.udemy.com
Learn how to use statistics to interpret complex data sets from a top-rated data science instructor. Whether you're interested in data analysis, business analytics, ...
## Probability & Statistics in Civil Engineering | UMassOnline
Freemium www.umassonline.net
This course will introduce the field of probability and statistics, and demonstrate Online. Level: Undergraduate. Subject: Civil and Environmental Engineering.
## Statistics and Probability | Khan Academy
Free www.khanacademy.org
Our mission is to provide a free, world-class education to anyone, anywhere. Khan Academy is a 501(c)(3) nonprofit organization. Donate or volunteer today!
## Online Statistics Review Course - Chris Mack, Gentleman Scientist
Freemium www.lithoguru.com
This is an online review for a standard introductory undergraduate probability and statistics course. In many disciplines, such engineering, undergraduates are ...
## Fundamentals of Engineering Statistical Analysis - YouTube
Best www.youtube.com
Fundamentals of Engineering Statistical Analysis” is a free online course on Janux that is open ...
## Statistical Methods in Engineering and the ... - Stanford Online
New online.stanford.edu
This course is an introduction to statistics with an emphasis on modern engineering applications. Students explore concepts of probability theory, discrete and ...
## 55+ Best Free Statistics Courses in 2021 - Guru99
Free www.guru99.com
Statistics, Number Crunching, and Data Sciences are the skill to 2, Why Numbers Matter - Online Course, 4.9/5, Free, Learn how to 49, Fundamentals of Engineering Statistical Analysis, 4.9/5, Free, This course provides ...
## Introductory Statistics for Physical Sciences and Engineering ...
Hot uwm.edu
Introductory Statistics for Physical Sciences and Engineering Students. Course Details. Department & Course Number. IND ENG 367, LEC 201. Class Number.
| 730
| 3,785
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.890625
| 3
|
CC-MAIN-2021-49
|
latest
|
en
| 0.747566
|
https://www.allinterview.com/showanswers/178890/one-b-o-t-unit.html
| 1,571,633,508,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00134.warc.gz
| 790,675,846
| 8,010
|
One B.O.T Unit ??
Answers were Sorted based on User's Feedback
One B.O.T Unit ??..
1 kWh = 1 BOT Unit
Is This Answer Correct ? 199 Yes 11 No
One B.O.T Unit ??..
1000 W × 3600 s = 3.6×10^6 J = 1 BOT unit = 1 kWh
Is This Answer Correct ? 70 Yes 2 No
One B.O.T Unit ??..
1 kwh
Is This Answer Correct ? 29 Yes 5 No
One B.O.T Unit ??..
1 B.O.T. unit = 746 WH
Is This Answer Correct ? 58 Yes 42 No
One B.O.T Unit ??..
1000 Watt hour
Is This Answer Correct ? 11 Yes 1 No
One B.O.T Unit ??..
1 kwh
Is This Answer Correct ? 7 Yes 0 No
One B.O.T Unit ??..
1 kWh
Is This Answer Correct ? 8 Yes 1 No
One B.O.T Unit ??..
1BOT = 1kwh
Is This Answer Correct ? 0 Yes 0 No
One B.O.T Unit ??..
BOT stands for Board of Trade: one kilowatt-hour is called a Board of Trade unit.
1 BOT = 1 kWh
= 1000 W × 3600 s
= 3.6×10^6 J
Is This Answer Correct ? 0 Yes 0 No
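The arithmetic in the last answer can be made explicit with a small conversion helper in Python (an illustrative sketch, not from the forum):

```python
def bot_units_to_joules(bot):
    """1 B.O.T. unit = 1 kWh = 1000 W x 3600 s = 3.6e6 J."""
    return bot * 1000 * 3600

def joules_to_bot_units(joules):
    """Inverse conversion: joules back to B.O.T. units (kWh)."""
    return joules / 3.6e6

# bot_units_to_joules(1) -> 3600000
```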
More Electrical Engineering Interview Questions
i need to give section engineer (electrical engineer) for RRB. Please suggest some book
why star connected transformers used in tneb?
WHAT IS ELECTRICITY
In a separately excited machine, we excite the field with a DC source; why not with an AC source?
how the speed decreases of a generator in power station if load increases
diff between synchronous gen and alternators
why we are using the 2 slope in differential relay?but in busbar differential we are using only one slope. what is the reason.
What is the diffrece between CAPACITOR BANK & CAPACITOR OF MOTOR?
if i get a drop in my 1st year and the i clear my 2nd year and 3rd year all clear and i dont have any back logs so can i be able to sit in campus selection??
Sometimes, to change the DOR of a motor, interchanging R & B has no effect and R, Y, B draw the same current, but interchanging R & Y does change the DOR. My doubt is: what is the problem in the motor, and how does this happen?
Differentiate between Low and High impedance Protection
Dear Sir how much cable size required from 0.75 kw to 50 kw for electric motor or pump
Categories
• Civil Engineering (5062)
• Mechanical Engineering (4327)
• Electrical Engineering (16561)
• Electronics Communications (3564)
• Chemical Engineering (959)
• Aeronautical Engineering (214)
• Bio Engineering (96)
• Metallurgy (361)
• Industrial Engineering (261)
• Instrumentation (2996)
• Automobile Engineering (332)
• Mechatronics Engineering (90)
• Marine Engineering (107)
• Power Plant Engineering (170)
• Textile Engineering (198)
• Production Engineering (0)
• Satellite Systems Engineering (90)
• Engineering AllOther (1376)
| 692
| 2,502
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.828125
| 3
|
CC-MAIN-2019-43
|
longest
|
en
| 0.858204
|
https://123dok.net/document/q054or3v-factorisation-un-peu-de-tout.html
| 1,695,949,839,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-40/segments/1695233510462.75/warc/CC-MAIN-20230928230810-20230929020810-00392.warc.gz
| 81,545,185
| 43,215
|
# FACTORIZATION (A bit of everything...)
Solution to Exercise 1
Factor each of the following literal expressions:
A = −(−8x − 7)^2 + 100
A = −(−8x − 7)^2 + 10^2
A = (10 − 8x − 7) × (10 + 8x + 7)
A = (−8x + 3) × (8x + 17)
B = 36x^2 − 36x + 9
B = (6x)^2 − 2 × 6x × 3 + 3^2
B = (6x − 3)^2
C = (7x + 8) × (3x + 1) + (3x + 1) × (3x + 3)
C = (3x + 1) × (7x + 8 + 3x + 3)
C = (3x + 1) × (10x + 11)
D = x^2 − 49 = x^2 − 7^2
D = (x + 7) × (x − 7)
E = (−10x + 10)^2 + (9x + 2) × (−10x + 10)
E = (−10x + 10) × (−10x + 10 + 9x + 2)
E = (−10x + 10) × (−x + 12)
F = −(5x − 4) + (5x − 4) × (x − 2)
F = (5x − 4) × (−1 + x − 2)
F = (5x − 4) × (x − 3)
Solution to Exercise 2
Factor each of the following literal expressions: A= 25x2−20x+ 4
A= (5x)2−2×5x×2 + 22 A= (5x−2)2
B = (5x−1)×(6x−3) + (9x−5)×(5x−1) B = (5x−1)×(6x−3 + 9x−5)
B = (5x−1)×(6x+ 9x−3−5) B= (5x−1)×(15x−8) C=−16x2+ 64
C= 82−(4x)2
C= (4x+ 8)×(−4x+ 8) D= 64x2−(8x−10)2 D= (8x)2−(8x−10)2
D= (8x+ 8x−10)×(8x−(8x−10)) D= (16x−10)×(8x−8x+ 10)
D= (16x−10)×10
E= (4x+ 2)×(x−9)−(x−9) E= (4x+ 2)×(x−9)−(x−9)×1 E= (x−9)×(4x+ 2−1)
E= (x−9)×(4x+ 1)
F = (4x−7)×(2x+ 9) + (2x+ 9)2
F = (4x−7)×(2x+ 9) + (2x+ 9)×(2x+ 9) F = (2x+ 9)×(4x−7 + 2x+ 9)
F = (2x+ 9)×(4x+ 2x−7 + 9) F = (2x+ 9)×(6x+ 2)
Solution to Exercise 3
Factor each of the following literal expressions: A= (−x+ 5)×(7x+ 7) + (−3x−6)×(7x+ 7) A= (7x+ 7)×(−x+ 5−3x−6)
A= (7x+ 7)×(−x−3x+ 5−6)
A= (7x+ 7)×(−4x−1) B = 16x2−80x+ 100
B = (4x)2−2×4x×10 + 102 B= (4x−10)2
C= (x−6)2−49 C= (x−6)2−72
C= (x−6 + 7)×(x−6−7) C= (x+ 1)×(x−13) D=−49x2+ 9
D= 32−(7x)2
D= (7x+ 3)×(−7x+ 3)
E= (5x+ 10)×(7x+ 7)−(5x+ 10) E= (5x+ 10)×(7x+ 7)−(5x+ 10)×1 E= (5x+ 10)×(7x+ 7−1)
E= (5x+ 10)×(7x+ 6)
F = (7x+ 1)×(3x+ 6) + (7x+ 1)2
F = (7x+ 1)×(3x+ 6) + (7x+ 1)×(7x+ 1) F = (7x+ 1)×(3x+ 6 + 7x+ 1)
F = (7x+ 1)×(3x+ 7x+ 6 + 1) F = (7x+ 1)×(10x+ 7)
Solution to Exercise 4
Factor each of the following literal expressions: A= 4x2−25
A= (2x)2−52
A= (2x+ 5)×(2x−5)
B = (3x+ 9)×(7x−10)−(9x+ 4)×(3x+ 9) B = (3x+ 9)×(7x−10−(9x+ 4))
B = (3x+ 9)×(7x−10−9x−4) B = (3x+ 9)×(7x−9x−10−4)
B= (3x+ 9)×(−2x−14) C= 16x2−24x+ 9
C= (4x)2−2×4x×3 + 32 C= (4x−3)2
D= 64−(2x+ 8)2 D= 82−(2x+ 8)2
D= (8 + 2x+ 8)×(8−(2x+ 8)) D= (2x+ 8 + 8)×(8−2x−8) D= (2x+ 8 + 8)×(−2x+ 8−8)
D= (2x+ 16)×(−2x)
E= 6x−6 + (6x−6)×(7x−10) E= (6x−6)×1 + (6x−6)×(7x−10) E= (6x−6)×(1 + 7x−10)
E= (6x−6)×(7x+ 1−10) E= (6x−6)×(7x−9)
F = (10x−5)2+ (10x−5)×(−2x+ 2)
F = (10x−5)×(10x−5)+(10x−5)×(−2x+ 2) F = (10x−5)×(10x−5−2x+ 2)
F = (10x−5)×(10x−2x−5 + 2) F = (10x−5)×(8x−3)
Solution to Exercise 5
Factor each of the following literal expressions: A=−49 + (−8x+ 1)2
A=−72+ (−8x+ 1)2
A= (−8x+ 1 + 7)×(−8x+ 1−7) A= (−8x+ 8)×(−8x−6)
B = (−6x−2) × (−6x−6) + (10x+ 9) × (−6x−2)
B = (−6x−2)×(−6x−6 + 10x+ 9) B = (−6x−2)×(−6x+ 10x−6 + 9)
B = (−6x−2)×(4x+ 3) C= 81x2+ 90x+ 25
C= (9x)2+ 2×9x×5 + 52 C= (9x+ 5)2
D=−36x2+ 81 D= 92−(6x)2
D= (6x+ 9)×(−6x+ 9)
E=−(6x−4)×(10x+ 4) + 10x+ 4 E=−(6x−4)×(10x+ 4) + (10x+ 4)×1 E= (10x+ 4)×(−(6x−4) + 1)
E= (10x+ 4)×(−6x+ 4 + 1) E= (10x+ 4)×(−6x+ 5)
F = (−2x−3)2+ (7x+ 9)×(−2x−3)
F = (−2x−3)×(−2x−3)+(7x+ 9)×(−2x−3) F = (−2x−3)×(−2x−3 + 7x+ 9)
F = (−2x−3)×(−2x+ 7x−3 + 9) F = (−2x−3)×(5x+ 6)
Solution to Exercise 6
Factor each of the following literal expressions: A=−(−2x+ 10)2+ 64x2
A=−(−2x+ 10)2+ (8x)2
A= (8x−2x+ 10)×(8x−(−2x+ 10)) A= (6x+ 10)×(8x+ 2x−10)
A= (6x+ 10)×(10x−10)
B = −(−2x−7) × (6x+ 6) + (−2x−7) × (−8x+ 7)
B = (−2x−7)×(−(6x+ 6)−8x+ 7) B = (−2x−7)×(−6x−6−8x+ 7) B = (−2x−7)×(−6x−8x−6 + 7)
B= (−2x−7)×(−14x+ 1) C=−4x2+ 25
C= 52−(2x)2
C= (2x+ 5)×(−2x+ 5)
D= 64x2+ 48x+ 9
D= (8x)2+ 2×8x×3 + 32 D= (8x+ 3)2
E= (10x+ 3)2+ (2x−10)×(10x+ 3)
E= (10x+ 3)×(10x+ 3) + (2x−10)×(10x+ 3) E= (10x+ 3)×(10x+ 3 + 2x−10)
E= (10x+ 3)×(10x+ 2x+ 3−10) E= (10x+ 3)×(12x−7)
F = (5x+ 3)×(6x+ 3) + 6x+ 3 F = (5x+ 3)×(6x+ 3) + (6x+ 3)×1 F = (6x+ 3)×(5x+ 3 + 1)
F = (6x+ 3)×(5x+ 4)
Solution to Exercise 7
Factor each of the following literal expressions: A= 49x2−25
A= (7x)2−52
A= (7x+ 5)×(7x−5)
B = (−4x+ 2)×(5x−5) + (5x−5)×(9x+ 7) B = (5x−5)×(−4x+ 2 + 9x+ 7)
B = (5x−5)×(−4x+ 9x+ 2 + 7) B= (5x−5)×(5x+ 9)
C= 9x2+ 42x+ 49
C= (3x)2+ 2×3x×7 + 72 C= (3x+ 7)2
D= (3x+ 4)2−36x2 D= (3x+ 4)2−(6x)2
D= (3x+ 4 + 6x)×(3x+ 4−6x) D= (3x+ 6x+ 4)×(3x−6x+ 4)
D= (9x+ 4)×(−3x+ 4)
E= (−2x+ 8)2+ (−2x+ 8)×(−10x+ 9) E = (−2x+ 8) × (−2x+ 8) + (−2x+ 8) × (−10x+ 9)
E= (−2x+ 8)×(−2x+ 8−10x+ 9) E= (−2x+ 8)×(−2x−10x+ 8 + 9)
E= (−2x+ 8)×(−12x+ 17) F =−(x+ 6)×(5x−1) + 5x−1
F =−(x+ 6)×(5x−1) + (5x−1)×1 F = (5x−1)×(−(x+ 6) + 1)
F = (5x−1)×(−x−6 + 1)
F = (5x−1)×(−x−5)
Solution to Exercise 8
Factor each of the following literal expressions: A= (−6x+ 5)×(10x−4) + (−6x+ 5)×(3x+ 4) A= (−6x+ 5)×(10x−4 + 3x+ 4)
A= (−6x+ 5)×(10x+ 3x−4 + 4) A= (−6x+ 5)×13x
B =−9x2+ 25 B = 52−(3x)2
B= (3x+ 5)×(−3x+ 5) C=−16x2+ (6x+ 6)2 C=−(4x)2+ (6x+ 6)2
C= (6x+ 6 + 4x)×(6x+ 6−4x) C= (6x+ 4x+ 6)×(6x−4x+ 6)
C= (10x+ 6)×(2x+ 6) D= 25x2+ 100x+ 100
D= (5x)2+ 2×5x×10 + 102 D= (5x+ 10)2
E= 2x+ 7 + (2x+ 7)×(6x−10) E= (2x+ 7)×1 + (2x+ 7)×(6x−10) E= (2x+ 7)×(1 + 6x−10)
E= (2x+ 7)×(6x+ 1−10) E= (2x+ 7)×(6x−9)
F = (−x−7)×(−2x+ 7)−(−2x+ 7)2
F = (−x−7)×(−2x+ 7)−(−2x+ 7)×(−2x+ 7) F = (−2x+ 7)×(−x−7−(−2x+ 7))
F = (−2x+ 7)×(−x−7 + 2x−7) F = (−2x+ 7)×(−x+ 2x−7−7)
F = (−2x+ 7)×(x−14)
Solution to Exercise 9
Factor each of the following literal expressions: A= 25x2−9
A= (5x)2−32
A= (5x+ 3)×(5x−3) B =−(4x−4)2+ 100x2 B =−(4x−4)2+ (10x)2
B = (10x+ 4x−4)×(10x−(4x−4)) B = (14x−4)×(10x−4x+ 4)
B= (14x−4)×(6x+ 4)
C= (2x+ 5)×(x−10) + (x−10)×(−4x−6) C= (x−10)×(2x+ 5−4x−6)
C= (x−10)×(2x−4x+ 5−6) C= (x−10)×(−2x−1) D= 9x2−36x+ 36
D= (3x)2−2×3x×6 + 62 D= (3x−6)2
E= (10x+ 7)×(5x+ 1) + (5x+ 1)2
E= (10x+ 7)×(5x+ 1) + (5x+ 1)×(5x+ 1) E= (5x+ 1)×(10x+ 7 + 5x+ 1)
E= (5x+ 1)×(10x+ 5x+ 7 + 1) E= (5x+ 1)×(15x+ 8)
F = 4x+ 8−(4x+ 8)×(x−7) F = (4x+ 8)×1−(4x+ 8)×(x−7) F = (4x+ 8)×(1−(x−7))
F = (4x+ 8)×(1−x+ 7) F = (4x+ 8)×(−x+ 1 + 7)
F = (4x+ 8)×(−x+ 8)
Solution to Exercise 10
Factor each of the following literal expressions:
A= 36x2−25 A= (6x)2−52
A= (6x+ 5)×(6x−5) B = 100x2+ 160x+ 64 B = (10x)2+ 2×10x×8 + 82
B= (10x+ 8)2
C= (8x+ 7)×(2x−6) + (6x+ 6)×(8x+ 7) C= (8x+ 7)×(2x−6 + 6x+ 6)
C= (8x+ 7)×(2x+ 6x−6 + 6) C= (8x+ 7)×8x
D=−(−9x+ 7)2+ 49x2 D=−(−9x+ 7)2+ (7x)2
D= (7x−9x+ 7)×(7x−(−9x+ 7))
D= (−2x+ 7)×(7x+ 9x−7) D= (−2x+ 7)×(16x−7) E= (3x+ 9)×(8x−4) + 8x−4 E= (3x+ 9)×(8x−4) + (8x−4)×1 E= (8x−4)×(3x+ 9 + 1)
E= (8x−4)×(3x+ 10)
F =−(−8x+ 4)2+ (−8x+ 4)×(6x+ 4)
F = −(−8x+ 4) × (−8x+ 4) + (−8x+ 4) × (6x+ 4)
F = (−8x+ 4)×(−(−8x+ 4) + 6x+ 4) F = (−8x+ 4)×(8x−4 + 6x+ 4) F = (−8x+ 4)×(8x+ 6x−4 + 4)
F = (−8x+ 4)×14x
Solution to Exercise 11
Factor each of the following literal expressions: A= (−9x−4)×(9x−9)−(8x−2)×(−9x−4) A= (−9x−4)×(9x−9−(8x−2))
A= (−9x−4)×(9x−9−8x+ 2) A= (−9x−4)×(9x−8x−9 + 2)
A= (−9x−4)×(x−7) B = 100x2+ 80x+ 16
B = (10x)2+ 2×10x×4 + 42 B= (10x+ 4)2
C= 49x2−4 C= (7x)2−22
C= (7x+ 2)×(7x−2) D= (8x−10)2−49 D= (8x−10)2−72
D= (8x−10 + 7)×(8x−10−7) D= (8x−3)×(8x−17)
E= 4x+ 8 + (10x+ 2)×(4x+ 8) E= (4x+ 8)×1 + (10x+ 2)×(4x+ 8) E= (4x+ 8)×(1 + 10x+ 2)
E= (4x+ 8)×(10x+ 1 + 2) E= (4x+ 8)×(10x+ 3)
F = (−6x−3)×(−2x+ 5) + (−6x−3)2
F = (−6x−3) × (−2x+ 5) + (−6x−3) × (−6x−3)
F = (−6x−3)×(−2x+ 5−6x−3) F = (−6x−3)×(−2x−6x+ 5−3)
F = (−6x−3)×(−8x+ 2)
Solution to Exercise 12
Factor each of the following literal expressions: A=−9 + (10x+ 1)2
A=−32+ (10x+ 1)2
A= (10x+ 1 + 3)×(10x+ 1−3) A= (10x+ 4)×(10x−2)
B = (−2x+ 8)×(−7x−5)+(−2x+ 8)×(7x+ 5)
B = (−2x+ 8)×(−7x−5 + 7x+ 5) B = (−2x+ 8)×(−7x+ 7x−5 + 5)
B = (−2x+ 8)×0 C= 4x2+ 4x+ 1
C= (2x)2+ 2×2x×1 + 12
C= (2x+ 1)2 D= 4x2−64 D= (2x)2−82
D= (2x+ 8)×(2x−8)
E= 10x+ 9 + (4x+ 7)×(10x+ 9) E= (10x+ 9)×1 + (4x+ 7)×(10x+ 9) E= (10x+ 9)×(1 + 4x+ 7)
E= (10x+ 9)×(4x+ 1 + 7) E= (10x+ 9)×(4x+ 8)
F = (−x+ 7)2−(−x+ 7)×(8x+ 10)
F = (−x+ 7)×(−x+ 7)−(−x+ 7)×(8x+ 10) F = (−x+ 7)×(−x+ 7−(8x+ 10))
F = (−x+ 7)×(−x+ 7−8x−10) F = (−x+ 7)×(−x−8x+ 7−10)
F = (−x+ 7)×(−9x−3)
Solution to Exercise 13
Factor each of the following literal expressions: A= (9x+ 9)×(x+ 3) + (10x+ 8)×(x+ 3)
A= (x+ 3)×(9x+ 9 + 10x+ 8) A= (x+ 3)×(9x+ 10x+ 9 + 8)
A= (x+ 3)×(19x+ 17) B = 4x2−16x+ 16
B = (2x)2−2×2x×4 + 42 B= (2x−4)2
C=x2−81 C=x2−92
C= (x+ 9)×(x−9) D=−(4x−8)2+ 9 D=−(4x−8)2+ 32
D= (3 + 4x−8)×(3−(4x−8)) D= (4x+ 3−8)×(3−4x+ 8)
D= (4x+ 3−8)×(−4x+ 3 + 8) D= (4x−5)×(−4x+ 11)
E=−(5x+ 1)×(8x+ 2) + (5x+ 1)2
E=−(5x+ 1)×(8x+ 2) + (5x+ 1)×(5x+ 1) E= (5x+ 1)×(−(8x+ 2) + 5x+ 1)
E= (5x+ 1)×(−8x−2 + 5x+ 1) E= (5x+ 1)×(−8x+ 5x−2 + 1)
E= (5x+ 1)×(−3x−1) F = 6x−8 + (6x−8)×(4x+ 5) F = (6x−8)×1 + (6x−8)×(4x+ 5) F = (6x−8)×(1 + 4x+ 5)
F = (6x−8)×(4x+ 1 + 5) F = (6x−8)×(4x+ 6)
Solution to Exercise 14
Factor each of the following literal expressions: A=−(−5x−4)2+ 16
A=−(−5x−4)2+ 42
A= (4−5x−4)×(4−(−5x−4)) A= (−5x+ 4−4)×(4 + 5x+ 4) A= (−5x+ 4−4)×(5x+ 4 + 4)
A=−5x×(5x+ 8)
B = (−8x−3) × (10x−4) + (−8x−3) × (−10x−2)
B = (−8x−3)×(10x−4−10x−2) B = (−8x−3)×(10x−10x−4−2)
B = (−8x−3)×(−6) C=−9x2+ 49
C= 72−(3x)2
C= (3x+ 7)×(−3x+ 7) D= 9x2−48x+ 64
D= (3x)2−2×3x×8 + 82 D= (3x−8)2
E= (3x+ 5)×(4x+ 2) + (3x+ 5)2
E= (3x+ 5)×(4x+ 2) + (3x+ 5)×(3x+ 5) E= (3x+ 5)×(4x+ 2 + 3x+ 5)
E= (3x+ 5)×(4x+ 3x+ 2 + 5) E= (3x+ 5)×(7x+ 7)
F =−(4x+ 8)×(3x+ 3) + 4x+ 8 F =−(4x+ 8)×(3x+ 3) + (4x+ 8)×1 F = (4x+ 8)×(−(3x+ 3) + 1)
F = (4x+ 8)×(−3x−3 + 1) F = (4x+ 8)×(−3x−2)
Solution to Exercise 15
Factor each of the following literal expressions: A= 4x2+ 32x+ 64
A= (2x)2+ 2×2x×8 + 82 A= (2x+ 8)2
B = (−4x+ 3)×(4x+ 5) + (4x+ 4)×(−4x+ 3) B = (−4x+ 3)×(4x+ 5 + 4x+ 4)
B = (−4x+ 3)×(4x+ 4x+ 5 + 4) B= (−4x+ 3)×(8x+ 9)
C= 25x2−25 C= (5x)2−52
C= (5x+ 5)×(5x−5) D= (−6x+ 5)2−4x2 D= (−6x+ 5)2−(2x)2
D= (−6x+ 5 + 2x)×(−6x+ 5−2x)
D= (−6x+ 2x+ 5)×(−6x−2x+ 5) D= (−4x+ 5)×(−8x+ 5)
E= (−7x−6)2−(−2x+ 3)×(−7x−6)
E = (−7x−6) × (−7x−6) − (−2x+ 3) × (−7x−6)
E= (−7x−6)×(−7x−6−(−2x+ 3)) E= (−7x−6)×(−7x−6 + 2x−3) E= (−7x−6)×(−7x+ 2x−6−3)
E= (−7x−6)×(−5x−9) F = (4x−6)×(4x+ 9) + 4x−6 F = (4x−6)×(4x+ 9) + (4x−6)×1 F = (4x−6)×(4x+ 9 + 1)
F = (4x−6)×(4x+ 10)
Solution to Exercise 16
Factor each of the following literal expressions: A= 4x2+ 20x+ 25
A= (2x)2+ 2×2x×5 + 52 A= (2x+ 5)2
B =−x2+ 9 B = 32x2
B= (x+ 3)×(−x+ 3) C= (8x+ 7)2−4
C= (8x+ 7)2−22
C= (8x+ 7 + 2)×(8x+ 7−2) C= (8x+ 9)×(8x+ 5)
D= (3x+ 4)×(10x+ 2) + (−8x+ 5)×(3x+ 4) D= (3x+ 4)×(10x+ 2−8x+ 5)
D= (3x+ 4)×(10x−8x+ 2 + 5) D= (3x+ 4)×(2x+ 7)
E= (2x+ 3)×(7x+ 6) + (7x+ 6)2
E= (2x+ 3)×(7x+ 6) + (7x+ 6)×(7x+ 6) E= (7x+ 6)×(2x+ 3 + 7x+ 6)
E= (7x+ 6)×(2x+ 7x+ 3 + 6) E= (7x+ 6)×(9x+ 9)
F =−(4x+ 2) + (8x+ 6)×(4x+ 2) F =−(4x+ 2)×1 + (8x+ 6)×(4x+ 2) F = (4x+ 2)×(−1 + 8x+ 6)
F = (4x+ 2)×(8x−1 + 6) F = (4x+ 2)×(8x+ 5)
Solution to Exercise 17
Factor each of the following literal expressions: A= (2x+ 10)×(−4x+ 6) + (2x+ 10)×(9x−1) A= (2x+ 10)×(−4x+ 6 + 9x−1)
A= (2x+ 10)×(−4x+ 9x+ 6−1) A= (2x+ 10)×(5x+ 5)
B = 49x2−4 B = (7x)2−22
B= (7x+ 2)×(7x−2) C=−36 + (−4x+ 2)2 C=−62+ (−4x+ 2)2
C= (−4x+ 2 + 6)×(−4x+ 2−6) C= (−4x+ 8)×(−4x−4) D=x2+ 2x+ 1
D=x2+ 2×x×1 + 12
D= (x+ 1)2
E= 10x−3−(8x+ 6)×(10x−3) E= (10x−3)×1−(8x+ 6)×(10x−3) E= (10x−3)×(1−(8x+ 6))
E= (10x−3)×(1−8x−6) E= (10x−3)×(−8x+ 1−6)
E= (10x−3)×(−8x−5) F = (3x+ 6)2+ (3x+ 6)×(8x+ 7)
F = (3x+ 6)×(3x+ 6) + (3x+ 6)×(8x+ 7) F = (3x+ 6)×(3x+ 6 + 8x+ 7)
F = (3x+ 6)×(3x+ 8x+ 6 + 7) F = (3x+ 6)×(11x+ 13)
Solution to Exercise 18
Factor each of the following literal expressions: A=x2−64
A=x2−82
A= (x+ 8)×(x−8) B = 9x2−24x+ 16
B = (3x)2−2×3x×4 + 42 B= (3x−4)2
C=−(−5x−8)2+ 9 C=−(−5x−8)2+ 32
C= (3−5x−8)×(3−(−5x−8)) C= (−5x+ 3−8)×(3 + 5x+ 8) C= (−5x+ 3−8)×(5x+ 3 + 8)
C= (−5x−5)×(5x+ 11)
D= (10x+ 5)×(2x+ 8) + (2x+ 9)×(2x+ 8) D= (2x+ 8)×(10x+ 5 + 2x+ 9)
D= (2x+ 8)×(10x+ 2x+ 5 + 9) D= (2x+ 8)×(12x+ 14) E= 7x+ 9 + (7x+ 9)×(8x+ 7) E= (7x+ 9)×1 + (7x+ 9)×(8x+ 7) E= (7x+ 9)×(1 + 8x+ 7)
E= (7x+ 9)×(8x+ 1 + 7) E= (7x+ 9)×(8x+ 8)
F =−(10x+ 4)×(−7x+ 6) + (−7x+ 6)2 F = −(10x+ 4) × (−7x+ 6) + (−7x+ 6) × (−7x+ 6)
F = (−7x+ 6)×(−(10x+ 4)−7x+ 6) F = (−7x+ 6)×(−10x−4−7x+ 6) F = (−7x+ 6)×(−10x−7x−4 + 6)
F = (−7x+ 6)×(−17x+ 2)
Solution to Exercise 19
Factor each of the following literal expressions: A= 81−(5x−4)2
A= 92−(5x−4)2
A= (9 + 5x−4)×(9−(5x−4)) A= (5x+ 9−4)×(9−5x+ 4)
A= (5x+ 9−4)×(−5x+ 9 + 4) A= (5x+ 5)×(−5x+ 13) B = 100x2+ 80x+ 16
B = (10x)2+ 2×10x×4 + 42 B= (10x+ 4)2
C= (7x+ 2)×(2x+ 4) + (2x+ 4)×(8x+ 2) C= (2x+ 4)×(7x+ 2 + 8x+ 2)
C= (2x+ 4)×(7x+ 8x+ 2 + 2) C= (2x+ 4)×(15x+ 4) D=−100x2+ 81
D= 92−(10x)2
D= (10x+ 9)×(−10x+ 9)
E= (10x+ 5)×(2x+ 3) + (10x+ 5)2
E= (10x+ 5)×(2x+ 3) + (10x+ 5)×(10x+ 5) E= (10x+ 5)×(2x+ 3 + 10x+ 5)
E= (10x+ 5)×(2x+ 10x+ 3 + 5) E= (10x+ 5)×(12x+ 8)
F =−(x+ 2) + (x+ 2)×(8x+ 8) F =−(x+ 2)×1 + (x+ 2)×(8x+ 8) F = (x+ 2)×(−1 + 8x+ 8)
F = (x+ 2)×(8x−1 + 8) F = (x+ 2)×(8x+ 7)
Solution to Exercise 20
Factor each of the following literal expressions: A = (−5x+ 3) × (2x+ 1) + (−5x+ 3) × (−10x−3)
A= (−5x+ 3)×(2x+ 1−10x−3) A= (−5x+ 3)×(2x−10x+ 1−3)
A= (−5x+ 3)×(−8x−2) B =−16x2+ 49
B = 72−(4x)2
B= (4x+ 7)×(−4x+ 7) C=−4x2+ (3x+ 9)2 C=−(2x)2+ (3x+ 9)2
C= (3x+ 9 + 2x)×(3x+ 9−2x) C= (3x+ 2x+ 9)×(3x−2x+ 9)
C= (5x+ 9)×(x+ 9)
D= 100x2−200x+ 100
D= (10x)2−2×10x×10 + 102 D= (10x−10)2
E= (−9x+ 7)×(4x+ 10) + (4x+ 10)2
E= (−9x+ 7)×(4x+ 10)+(4x+ 10)×(4x+ 10) E= (4x+ 10)×(−9x+ 7 + 4x+ 10)
E= (4x+ 10)×(−9x+ 4x+ 7 + 10) E= (4x+ 10)×(−5x+ 17)
F = (8x+ 3)×(10x+ 4)−(10x+ 4) F = (8x+ 3)×(10x+ 4)−(10x+ 4)×1 F = (10x+ 4)×(8x+ 3−1)
F = (10x+ 4)×(8x+ 2)
Solution to Exercise 21
Factor each of the following literal expressions: A = (−7x−9) × (−10x−7) − (−7x−9) × (2x+ 9)
A= (−7x−9)×(−10x−7−(2x+ 9)) A= (−7x−9)×(−10x−7−2x−9) A= (−7x−9)×(−10x−2x−7−9)
A= (−7x−9)×(−12x−16) B =−81x2+ 36
B = 62−(9x)2
B = (9x+ 6)×(−9x+ 6) C= (−6x+ 4)2−36x2 C= (−6x+ 4)2−(6x)2
C= (−6x+ 4 + 6x)×(−6x+ 4−6x) C= (−6x+ 6x+ 4)×(−6x−6x+ 4)
C= 4×(−12x+ 4) D= 36x2−12x+ 1
D= (6x)2−2×6x×1 + 12 D= (6x−1)2
E= (3x−10)×(3x+ 4) + 3x+ 4 E= (3x−10)×(3x+ 4) + (3x+ 4)×1
E= (3x+ 4)×(3x−10 + 1) E= (3x+ 4)×(3x−9)
F = (−8x−2)2+ (3x+ 3)×(−8x−2)
F = (−8x−2)×(−8x−2)+(3x+ 3)×(−8x−2) F = (−8x−2)×(−8x−2 + 3x+ 3)
F = (−8x−2)×(−8x+ 3x−2 + 3) F = (−8x−2)×(−5x+ 1)
Solution to Exercise 22
Factor each of the following literal expressions: A= (−7x−3)×(3x+ 5) + (3x+ 5)×(−4x−6) A= (3x+ 5)×(−7x−3−4x−6)
A= (3x+ 5)×(−7x−4x−3−6) A= (3x+ 5)×(−11x−9) B =−(−x+ 9)2+ 16
B =−(−x+ 9)2+ 42
B = (4−x+ 9)×(4−(−x+ 9)) B = (−x+ 4 + 9)×(4 +x−9) B = (−x+ 4 + 9)×(x+ 4−9)
B= (−x+ 13)×(x−5) C= 81x2−9
C= (9x)2−32
C= (9x+ 3)×(9x−3) D= 36x2−12x+ 1
D= (6x)2−2×6x×1 + 12 D= (6x−1)2
E=−(5x+ 9)×(3x−4) + (3x−4)2
E=−(5x+ 9)×(3x−4) + (3x−4)×(3x−4) E= (3x−4)×(−(5x+ 9) + 3x−4)
E= (3x−4)×(−5x−9 + 3x−4) E= (3x−4)×(−5x+ 3x−9−4)
E= (3x−4)×(−2x−13) F = 9x+ 9 + (9x+ 9)×(2x+ 7) F = (9x+ 9)×1 + (9x+ 9)×(2x+ 7) F = (9x+ 9)×(1 + 2x+ 7)
F = (9x+ 9)×(2x+ 1 + 7) F = (9x+ 9)×(2x+ 8)
Solution to Exercise 23
Factor each of the following literal expressions: A=−(3x−10)2+ 4
A=−(3x−10)2+ 22
A= (2 + 3x−10)×(2−(3x−10)) A= (3x+ 2−10)×(2−3x+ 10) A= (3x+ 2−10)×(−3x+ 2 + 10)
A= (3x−8)×(−3x+ 12) B =−9x2+ 100
B = 102−(3x)2
B= (3x+ 10)×(−3x+ 10)
C= (5x+ 8)×(x+ 5)−(x+ 5)×(4x+ 2)
C= (x+ 5)×(5x+ 8−(4x+ 2)) C= (x+ 5)×(5x+ 8−4x−2) C= (x+ 5)×(5x−4x+ 8−2)
C= (x+ 5)×(x+ 6) D= 81x2+ 36x+ 4
D= (9x)2+ 2×9x×2 + 22 D= (9x+ 2)2
E= 5x−9 + (8x+ 5)×(5x−9) E= (5x−9)×1 + (8x+ 5)×(5x−9) E= (5x−9)×(1 + 8x+ 5)
E= (5x−9)×(8x+ 1 + 5) E= (5x−9)×(8x+ 6)
F = (−7x−10)×(4x−6) + (−7x−10)2
F = (−7x−10) × (4x−6) + (−7x−10) ×
(−7x−10)
F = (−7x−10)×(4x−6−7x−10) F = (−7x−10)×(4x−7x−6−10)
F = (−7x−10)×(−3x−16)
Solution to Exercise 24
Factor each of the following literal expressions: A=−49x2+ 49
A= 72−(7x)2
A= (7x+ 7)×(−7x+ 7) B = 4x2+ 28x+ 49
B = (2x)2+ 2×2x×7 + 72 B= (2x+ 7)2
C = (8x−10) × (−10x+ 9) + (8x−10) × (−5x−10)
C= (8x−10)×(−10x+ 9−5x−10) C= (8x−10)×(−10x−5x+ 9−10)
C= (8x−10)×(−15x−1) D=−(−2x+ 8)2+ 49
D=−(−2x+ 8)2+ 72
D= (7−2x+ 8)×(7−(−2x+ 8))
D= (−2x+ 7 + 8)×(7 + 2x−8) D= (−2x+ 7 + 8)×(2x+ 7−8)
D= (−2x+ 15)×(2x−1)
E= (−6x−4)2−(−9x+ 8)×(−6x−4)
E = (−6x−4) × (−6x−4) − (−9x+ 8) × (−6x−4)
E= (−6x−4)×(−6x−4−(−9x+ 8)) E= (−6x−4)×(−6x−4 + 9x−8) E= (−6x−4)×(−6x+ 9x−4−8)
E= (−6x−4)×(3x−12)
F = 4x+ 10 + (10x+ 4)×(4x+ 10) F = (4x+ 10)×1 + (10x+ 4)×(4x+ 10) F = (4x+ 10)×(1 + 10x+ 4)
F = (4x+ 10)×(10x+ 1 + 4) F = (4x+ 10)×(10x+ 5)
Solution to Exercise 25
Factor each of the following literal expressions: A= (x−6)×(−3x+ 7) + (x−6)×(4x+ 8)
A= (x−6)×(−3x+ 7 + 4x+ 8) A= (x−6)×(−3x+ 4x+ 7 + 8)
A= (x−6)×(x+ 15) B = 64x2−160x+ 100 B = (8x)2−2×8x×10 + 102
B= (8x−10)2 C= (−6x+ 2)2x2
C= (−6x+ 2 +x)×(−6x+ 2−x) C= (−6x+x+ 2)×(−6xx+ 2)
C= (−5x+ 2)×(−7x+ 2) D=−100x2+ 9
D= 32−(10x)2
D= (10x+ 3)×(−10x+ 3) E=−(7x+ 4) + (7x+ 4)×(7x+ 7) E=−(7x+ 4)×1 + (7x+ 4)×(7x+ 7) E= (7x+ 4)×(−1 + 7x+ 7)
E= (7x+ 4)×(7x−1 + 7) E= (7x+ 4)×(7x+ 6)
F = (8x−5)2+ (8x−5)×(9x−10)
F = (8x−5)×(8x−5) + (8x−5)×(9x−10) F = (8x−5)×(8x−5 + 9x−10)
F = (8x−5)×(8x+ 9x−5−10) F = (8x−5)×(17x−15)
Solution to Exercise 26
Factor each of the following literal expressions: A= 49x2+ 56x+ 16
A= (7x)2+ 2×7x×4 + 42 A= (7x+ 4)2
B = 64x2−81 B = (8x)2−92
B= (8x+ 9)×(8x−9)
C= (x−5)×(−3x+ 1)−(8x−8)×(−3x+ 1) C= (−3x+ 1)×(x−5−(8x−8))
C= (−3x+ 1)×(x−5−8x+ 8) C= (−3x+ 1)×(x−8x−5 + 8)
C= (−3x+ 1)×(−7x+ 3) D=−(3x−2)2+ 1
D=−(3x−2)2+ 12
D= (1 + 3x−2)×(1−(3x−2)) D= (3x+ 1−2)×(1−3x+ 2) D= (3x+ 1−2)×(−3x+ 1 + 2)
D= (3x−1)×(−3x+ 3)
E= (8x+ 6)2+ (8x+ 6)×(−x−4)
E= (8x+ 6)×(8x+ 6) + (8x+ 6)×(−x−4) E= (8x+ 6)×(8x+ 6−x−4)
E= (8x+ 6)×(8xx+ 6−4) E= (8x+ 6)×(7x+ 2)
F = (4x−6)×(7x+ 8) + 7x+ 8 F = (4x−6)×(7x+ 8) + (7x+ 8)×1 F = (7x+ 8)×(4x−6 + 1)
F = (7x+ 8)×(4x−5)
Solution to Exercise 27
Factor each of the following literal expressions: A= 49x2−14x+ 1
A= (7x)2−2×7x×1 + 12 A= (7x−1)2
B = 9x2−64 B = (3x)2−82
B= (3x+ 8)×(3x−8) C=−(4x−3)2+ 25x2 C=−(4x−3)2+ (5x)2
C= (5x+ 4x−3)×(5x−(4x−3)) C= (9x−3)×(5x−4x+ 3)
C= (9x−3)×(x+ 3)
D= (2x+ 3)×(3x+ 2) + (3x+ 2)×(2x−7) D= (3x+ 2)×(2x+ 3 + 2x−7)
D= (3x+ 2)×(2x+ 2x+ 3−7) D= (3x+ 2)×(4x−4)
E= (−9x+ 5)2−(−9x+ 5)×(10x+ 4)
E = (−9x+ 5) × (−9x+ 5) − (−9x+ 5) × (10x+ 4)
E= (−9x+ 5)×(−9x+ 5−(10x+ 4)) E= (−9x+ 5)×(−9x+ 5−10x−4) E= (−9x+ 5)×(−9x−10x+ 5−4)
E= (−9x+ 5)×(−19x+ 1) F = (7x+ 10)×(9x+ 6) + 9x+ 6 F = (7x+ 10)×(9x+ 6) + (9x+ 6)×1 F = (9x+ 6)×(7x+ 10 + 1)
F = (9x+ 6)×(7x+ 11)
Solution to Exercise 28
Factor each of the following literal expressions: A = −(−5x+ 8) × (10x+ 4) + (−5x+ 8) × (6x+ 8)
A= (−5x+ 8)×(−(10x+ 4) + 6x+ 8)
A= (−5x+ 8)×(−10x−4 + 6x+ 8) A= (−5x+ 8)×(−10x+ 6x−4 + 8)
A= (−5x+ 8)×(−4x+ 4) B = 4x2−20x+ 25
B = (2x)2−2×2x×5 + 52 B= (2x−5)2
C= 16x2−81 C= (4x)2−92
C= (4x+ 9)×(4x−9) D=−(2x−2)2+ 36 D=−(2x−2)2+ 62
D= (6 + 2x−2)×(6−(2x−2)) D= (2x+ 6−2)×(6−2x+ 2)
D= (2x+ 6−2)×(−2x+ 6 + 2) D= (2x+ 4)×(−2x+ 8)
E= (6x+ 6)×(5x−10) + 5x−10 E= (6x+ 6)×(5x−10) + (5x−10)×1 E= (5x−10)×(6x+ 6 + 1)
E= (5x−10)×(6x+ 7)
F = (x−2)2+ (−3x+ 4)×(x−2)
F = (x−2)×(x−2) + (−3x+ 4)×(x−2) F = (x−2)×(x−2−3x+ 4)
F = (x−2)×(x−3x−2 + 4) F = (x−2)×(−2x+ 2)
Solution to Exercise 29
Factor each of the following literal expressions: A= (5x+ 10)×(−7x+ 3)−(5x+ 10)×(−x+ 8) A= (5x+ 10)×(−7x+ 3−(−x+ 8))
A= (5x+ 10)×(−7x+ 3 +x−8) A= (5x+ 10)×(−7x+x+ 3−8)
A= (5x+ 10)×(−6x−5) B = 9x2−25
B = (3x)2−52
B= (3x+ 5)×(3x−5) C=−(−6x−7)2+ 36x2 C=−(−6x−7)2+ (6x)2
C= (6x−6x−7)×(6x−(−6x−7)) C=−7×(6x+ 6x+ 7)
C=−7×(12x+ 7)
D= 64x2+ 80x+ 25
D= (8x)2+ 2×8x×5 + 52 D= (8x+ 5)2
E= (x−4)×(−6x−10) + (x−4)2
E= (x−4)×(−6x−10) + (x−4)×(x−4) E= (x−4)×(−6x−10 +x−4)
E= (x−4)×(−6x+x−10−4) E= (x−4)×(−5x−14) F = 5x−9 + (5x−9)×(6x+ 7) F = (5x−9)×1 + (5x−9)×(6x+ 7) F = (5x−9)×(1 + 6x+ 7)
F = (5x−9)×(6x+ 1 + 7) F = (5x−9)×(6x+ 8)
Solution to Exercise 30
Factor each of the following literal expressions: A= 100x2−64
A= (10x)2−82
A= (10x+ 8)×(10x−8)
B = (−8x+ 8)×(−x−8)−(−8x+ 8)×(5x−9) B = (−8x+ 8)×(−x−8−(5x−9))
B = (−8x+ 8)×(−x−8−5x+ 9) B = (−8x+ 8)×(−x−5x−8 + 9)
B = (−8x+ 8)×(−6x+ 1) C= 81x2−54x+ 9
C= (9x)2−2×9x×3 + 32 C= (9x−3)2
D=−16 + (9x+ 10)2
D=−42+ (9x+ 10)2
D= (9x+ 10 + 4)×(9x+ 10−4) D= (9x+ 14)×(9x+ 6) E= 5x+ 5 + (5x+ 5)×(5x−7) E= (5x+ 5)×1 + (5x+ 5)×(5x−7) E= (5x+ 5)×(1 + 5x−7)
E= (5x+ 5)×(5x+ 1−7)
E= (5x+ 5)×(5x−6)
F = (−6x+ 10)2+ (−6x+ 10)×(−10x−7) F = (−6x+ 10) × (−6x+ 10) + (−6x+ 10) × (−10x−7)
F = (−6x+ 10)×(−6x+ 10−10x−7) F = (−6x+ 10)×(−6x−10x+ 10−7)
F = (−6x+ 10)×(−16x+ 3)
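Each factorization above can be spot-checked numerically by comparing the expanded and factored forms at sample values of x; a Python sketch using the result of Exercise 1 (illustrative only, not part of the original worksheet):

```python
def expanded(x):
    # A = -(-8x - 7)^2 + 100, as given in Exercise 1
    return -(-8*x - 7)**2 + 100

def factored(x):
    # A = (-8x + 3)(8x + 17), the factored result
    return (-8*x + 3) * (8*x + 17)

# Both forms agree at every sample point.
for x in [-4, -1, 0, 2, 7]:
    assert expanded(x) == factored(x)
```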
Références
Documents relatifs
Bien souvent, dans nos vies cliniques très occupées, les médecins de famille se servent du temps comme d’une béquille pour justifier des lacunes dans nos
Vous pouvez utiliser sans d´ emonstration les r´ esultats du cours figurant dans l’appendice en indiquant le num´ ero du r´ esultat que vous utilisez.. Premi`
[r]
- Une adresse IP (Internet Protocol) est une adresse permettant non seulement d’identifier une machine ou tout autre équipement de manière unique dans un réseau mais aussi
[r]
[r]
[r]
Avec une population de 1 718 patients atteints d’un myélome multiple non traité auparavant, l’essai randomisé de phase III avec double placebo comparant, en première ligne
Par imparité, elle est dérivable strictement croissante dans R avec les limites −∞ et +∞.. C'est donc
Exercice 1 D´esignons par {B(t)} t∈[0,1] un mouvement brownien standard et par Z une variable al´eatoire gaussienne centr´ee
Bases hilbertienne : Soit H un espace de Hilbert sur R 1 muni du produit scalaire h·, ·i et de la norme associ´ ee k
[r]
[r]
La question 2 est de principe similaire aux questions 3 et 4 de l’exercice 4.16 : il s’agit de combiner l’´ egalit´ e d´ emontr´ ee via Taylor-Lagrange et les hypoth` eses de
Par utilisation de la relation de Chasles, la propriété est vraie lorsque f est en escalier sur [a,
La factorisation permet de résoudre de nombreux problèmes comme la résolution des équations, des inéquations, les études de signes, etc. 1) La mise en évidence : Si une mise
Universit´ e de Lille, M2 Math´ ematiques - Parcours Math´ ematiques Appliqu´ ees, Premier Exercice de l’Oral de Rattrapage ”Int´ egrale d’Itˆ o, formule d’Itˆ o
c) L’artiste veut maintenant recouvrir sa construction avec de la couleur or. Quelle est l’aire en m 2 de la surface qu’il va ainsi recouvrir ?. d) Une fois la construction
c) L’artiste veut maintenant recouvrir sa construction avec de la couleur or. Quelle est l’aire en m 2 de la surface qu’il va ainsi recouvrir ?. d) Une fois la construction
ressentir des sentiments nouveaux, se sentir bien, épanoui et même si aimer veut dire aussi avoir mal, l’amour fait partie du quotidien pour certains, il faut donc vivre
et avec les aventures de deux enfants dans l'album Mes premières découvertes sur l'eau , nous avons aussi parlé de l'eau gelée, de l'évaporation de l'eau, de l'eau dans
Montrer, en utilisant la notation , que les suites suivantes tendent vers 0.. (b) En déduire que la somme d'une suite convergente et d'une suite divergente
The complete single scattering calculation (Born approxima- tion) outperforms asymptotic approximations in the case of small wavelength anomalies (compared to the wavelength of
| 16,600
| 23,630
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.921875
| 4
|
CC-MAIN-2023-40
|
latest
|
en
| 0.175359
|
https://www.wyzant.com/resources/blogs/438231/converting_your_old_sat_score_to_the_new_sat_score
| 1,576,093,826,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-51/segments/1575540532624.3/warc/CC-MAIN-20191211184309-20191211212309-00076.warc.gz
| 771,947,999
| 12,201
|
# Converting Your OLD SAT Score to the NEW SAT Score
As of March 5, 2016, the new SAT is being offered. This means that those who took the old 2400-point SAT may want to know what their new 1600-point SAT score would have been, and vice versa. This information is critical for when you research and apply to scholarships and colleges that use a different version of the SAT than the one you took. Here, we give a more accurate formula and way of switching between old SAT scores and new SAT scores.
Many conversion tables available online use a single multiplier to scale between the new 1600 SAT and the old 2400 SAT. This just means you multiply by 3/2 to go from new SAT to old SAT and divide to go in the other direction. This is a fine method for a rough estimate, but the new SAT and old SAT weight math-type skills and verbal-type skills differently. Therefore, a more accurate conversion will convert the section scores separately, which we present below. We'll explain the reasons below, and why you would want to use conversions in the first place.
Converting from Old SAT to New SAT
Do you have your old SAT score, including the scores for each section: Writing, Mathematics, and Critical Reading?
If so, you can use these three formulas to get what your score would be on the new SAT:
1. New Math Section Score = Old Math Section Score
2. New Verbal Section Score = (Old Critical Reading Section Score + Old Writing Score) / 2
3. New Total Score = New Math Section Score + New Verbal Section Score
In other words, the Math section scores are the same between the new test and old test. The new Verbal section is the average of the older Reading and Writing scores.
Example: Suppose a student has a total old SAT score of 1900. The breakdown is 510 in Writing, 730 in Mathematics, and 660 in Critical Reading. Using this formula he would calculate:
New Math Section Score = 730
New Verbal Section Score = (510 + 660) / 2 = 585
New Total Score = 730 + 585 = 1315
Converting from New SAT to Old SAT
Do you have a new SAT score, but want to convert to the old SAT, perhaps for a scholarship or college that still uses the old standard? In this case, you'll have your total new SAT score, your new Math Section score, and your new Verbal Section score (officially called "Evidence-Based Reading and Writing.")
Use these three formulas:
1. Old Math Section Score = New Math Section Score
2. Old Critical Reading Section Score = Old Writing Section Score = New Verbal Section Score
3. Old Total Score = New Math Section Score + 2 * New Verbal Section Score
In other words, the Math Section scores are the same between the old test and new test. The old Critical Reading score and old Writing score are estimated to be the same, equal to the new Verbal score.
Example: Suppose another student has a total new SAT score of 1400. Her breakdown is 610 in Verbal and 790 in Math.
Old Math Section Score = 790
Old Critical Reading Section Score = 610
Old Writing Section Score = 610
Old Total Score = 790 + 610 + 610 = 2010
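Both conversions above can be sketched in Python (the function names here are ours for illustration, not an official College Board formula):

```python
def old_to_new(old_math, old_reading, old_writing):
    """Convert old 2400-scale section scores to new 1600-scale scores."""
    new_math = old_math                              # Math carries over unchanged
    new_verbal = (old_reading + old_writing) / 2     # Verbal = average of CR and W
    return new_math, new_verbal, new_math + new_verbal

def new_to_old(new_math, new_verbal):
    """Convert new 1600-scale section scores to old 2400-scale scores."""
    old_math = new_math                              # Math carries over unchanged
    old_reading = old_writing = new_verbal           # CR and W estimated as equal
    return old_math, old_reading, old_writing, old_math + 2 * new_verbal

# First worked example: old 1900 (W 510, M 730, CR 660) -> new 1315
print(old_to_new(730, 660, 510))   # (730, 585.0, 1315.0)
# Second worked example: new 1400 (V 610, M 790) -> old 2010
print(new_to_old(790, 610))        # (790, 610, 610, 2010)
```

Running it reproduces both worked examples from the text.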
If you want to know why this formula is more accurate, when to use this formula, and what some mathematical properties of this formula are, read on!
Why This Formula Is More Accurate
You might have seen some conversion tables online that take the total score and multiply it by a fixed number. These conversion tables are fine as a rough estimate, but they don't account for the fact that the new SAT and the old SAT weight verbal-type skills and math-type skills differently.
The new SAT weights math skills as 50% of the total score, while the old SAT only weighted the math skills about 33%. Our formula accounts for this difference in weighting. It does this by converting each section score separately instead of the total score all at once. This makes the most difference for students who have substantially different math and verbal skills.
When You'll Want to Convert Between Scores
On one hand, the new SAT and the old SAT are different tests. No single test captures all the information from other tests. Comparing your score on the two tests is, in some ways, like comparing your marathon speed with your 100-meter-sprint speed. While the two speeds are probably correlated, the tests are different, and no one test fully summarizes the other.
On the other hand, scores from the two tests are indisputably related. They both aim to test similar concepts, they have similar functions as college admissions tests, and they both keep some of the same multiple-choice features. If you do well on one test, you'll tend to do well on the other. Therefore, it absolutely makes sense to talk about converting between one score and another.
The concept we use in the conversion above is called theoretical equivalence. That is, if you were to perform as well on one test as the other, what would your total score and section scores be? This gives us a formula where the math section remains the same, and the verbal sections map onto each other.
You can use this conversion if you're administering scholarships or admissions and want the same standards across the board. If you're intuitively used to thinking in terms of Old SAT scores, this conversion lets you understand New SAT scores better.
However, you should be aware of one caveat if you are using conversion tables to predict test scores: you'll experience regression to the mean. If you did better than average on the old SAT (above 1500), you'll score just a tad lower on the new SAT than your conversion-chart score. Likewise, if you did worse than average (below 1500) on the old SAT, you'll do just a tad better on the new SAT. The reason is that the new test doesn't test exactly the same things as the old test, and on the newly tested material you are statistically more likely to perform close to average. Thus, you should expect your score to shrink toward the average.
Michael W.
UCLA Instructor / SAT ACT Expert
| 1,319
| 6,013
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.875
| 4
|
CC-MAIN-2019-51
|
latest
|
en
| 0.94669
|
https://encyclopediaofmath.org/index.php?title=Lindenbaum_method&diff=29659&oldid=29658
| 1,627,943,374,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-31/segments/1627046154385.24/warc/CC-MAIN-20210802203434-20210802233434-00423.warc.gz
| 251,743,652
| 13,939
|
# Lindenbaum method (propositional language)
Lindenbaum method is named after the Polish logician Adolf Lindenbaum, who disappeared prematurely and without a clear trace in the turmoil of the Second World War, at the age of about 37. (Cf. [15].) The method is based on the symbolic nature of the formalized languages of deductive systems and opens a gate for applications of algebra to logic and, thereby, to Abstract algebraic logic.
## Lindenbaum's theorem
A formal propositional language, say $\mathcal{L}$, is understood as a nonempty set $Vr_\mathcal{L}$ of symbols $p_0, p_1,... p_{\gamma}...$ called propositional variables and a finite set $\Pi$ of symbols $F_0, F_1,..., F_n$ called logical connectives. By $\overline{\overline{Vr_\mathcal{L}}}$ we denote the cardinality of $Vr_\mathcal{L}$. For each connective $F_i$, there is a natural number $\#(F_i)$ called the arity of the connective $F_i$. The notion of a statement (or a formula) is defined as follows:
$(f_1)$ Each variable $p\in\mathcal{V}$ is a formula; $(f_2)$ If $F_i$ is a connective of the arity 0, then $F_i$ is a formula; $(f_3)$ If $A_1, A_2,..., A_n$, $n\geq 1$, are formulas, and $F_n$ is a connective of arity $n$, then the symbolic expression $F_{n}A_{1}A_{2}... A_n$ is a formula; $(f_4)$ A formula can be constructed only according to the rules $(f_1)-(f_3)$.
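As an illustration of rules $(f_1)$-$(f_4)$, a short Python sketch can check whether a string of symbols is a well-formed formula in this prefix (Polish) notation. The arity map and the connective names C (arity 2) and N (arity 1) are our own toy example, not part of the article:

```python
def is_formula(tokens, arity):
    """Rules (f1)-(f4): a token sequence is well formed iff it parses as
    exactly one prefix-notation formula.  `arity` maps connective symbols
    to their arities; any other token is a propositional variable."""
    def parse(i):                       # parse one formula starting at index i
        if i >= len(tokens):
            return None                 # ran out of symbols
        head, i = tokens[i], i + 1
        for _ in range(arity.get(head, 0)):   # variables take 0 arguments
            i = parse(i)
            if i is None:
                return None
        return i
    return parse(0) == len(tokens)

# Hypothetical binary connective C and unary N (Łukasiewicz-style notation):
ar = {"C": 2, "N": 1}
print(is_formula(list("CpNq"), ar))   # True:  C p (N q)
print(is_formula(list("CpN"), ar))    # False: N is missing its argument
```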
The set of formulas will be denoted by $Fr_\mathcal{L}$ and $P(Fr_\mathcal{L})$ denotes the power set of $Fr_\mathcal{L}$. Given a set $X \subseteq Fr_\mathcal{L}$, we denote by $Vr(X)$ the set of propositional variables that occur in the formulas of $X$. Two formulas are counted equal if they are represented by two copies of the same string of symbols. (This is the key observation on which Theorem 1 is grounded.) Another key observation (due to Lindenbaum) is that $Fr_\mathcal{L}$ along with the connectives $\Pi$ can be regarded as an algebra of the similarity type associated with $\mathcal{L}$, which exemplifies an $\mathcal{L}$-algebra. We denote this algebra by $\mathfrak{F}_\mathcal{L}$. The importance of $\mathfrak{F}_\mathcal{L}$ can already be seen from the following observation.
Theorem 1. Algebra $\mathfrak{F}_\mathcal{L}$ is a free algebra of rank $\overline{\overline{\mathcal{V}}}$ with free generators $\mathcal{V}$ in the class $($variety$)$ of all $\mathcal{L}$-algebras. In other words, $\mathfrak{F}_\mathcal{L}$ is an absolutely free algebra of this class.
A useful feature of the set $Fr_\mathcal{L}$ is that it is closed under (simultaneous) substitution. More than that, any substitution $\sigma$ is an endomorphism
$\sigma: \mathfrak{F}_\mathcal{L}\longrightarrow \mathfrak{F}_\mathcal{L}$.
A monotone deductive system (or a deductive system or simply a system) is a relation between subsets and elements of $Fr_\mathcal{L}$. Each such system $\vdash_\mathcal{S}$ is subject to the following conditions: For all $X,Y \subseteq Fr_\mathcal{L}$,
$(s_1)$ if $A \in X$, then $X \ \vdash_\mathcal{S} \ A$; $(s_2)$ if $X \ \vdash_\mathcal{S} \ B$ for all $B \in Y$, and $Y \ \vdash_\mathcal{S} \ A$, then $X \ \vdash_\mathcal{S} \ A$; $(s_3)$ if $X \ \vdash_\mathcal{S} \ A$, then for every substitution $\sigma$, $\sigma[X] \ \vdash_\mathcal{S} \ \sigma(A)$.
If $A$ is a formula and $\sigma$ is a substitution, $\sigma(A)$ is called a substitution instance of $A$. Thus, by $\sigma[X]$ above, one means the instances of the formulas of $X$ with respect to $\sigma$.
Given two sets $Y$ and $X$, we write
$\quad \quad \quad Y \sqsubseteq X$
if $Y$ is a finite (may be empty) subset of $X$.
A deductive system is said to be finitary if, in addition, it satisfies the following:
$(s_4)$ if $X \ \vdash_\mathcal{S} \ A$, then there is $Y \sqsubseteq X$ such that $Y \ \vdash_\mathcal{S} \ A$.
We note that the monotonicity property
$\quad \quad \quad \quad$ if $X \subseteq Y$ and $X \ \vdash_\mathcal{S} \ A$, then $Y \ \vdash_\mathcal{S} \ A$
is not postulated, because it follows from $(s_1)$ and $(s_2)$.
Each deductive system $\vdash_\mathcal{S}$ induces the (monotone structural) consequence operator $Cn_{\mathcal{S}}$ defined on the power set of $Fr_\mathcal{L}$ as follows: For every $X \subseteq Fr_\mathcal{L}$,
$\quad \quad \quad \quad A \in Cn_\mathcal{S}(X) \Longleftrightarrow X \ \vdash_\mathcal{S} \ A, \quad \quad \quad \quad \quad \quad \quad \quad (1)$
so that the following conditions are fulfilled: For all $X,Y \subseteq Fr_\mathcal{L}$ and any substitution $\sigma$,
$(c_1)$ $X \subseteq Cn_\mathcal{S}(X);$ (Reflexivity) $(c_2)$ $Cn_\mathcal{S}(Cn_\mathcal{S}(X)) = Cn_\mathcal{S}(X);$ (Idempotency) $(c_3)$ if $X \subseteq Y$, then $Cn_\mathcal{S}(X) \subseteq Cn_\mathcal{S}(Y);$ (Monotonicity) $(c_4)$ $\sigma[Cn_\mathcal{S}(X)] \subseteq Cn_\mathcal{S}(\sigma[X]).$ (Structurality)
If $\vdash_\mathcal{S}$ is finitary, then
$(c_5)$ $Cn_\mathcal{S}(X) = \bigcup\lbrace Cn_\mathcal{S}(Y) \ | \ Y \sqsubseteq X \rbrace$
in which case $Cn_{\mathcal{S}}$ is called finitary.
Conversely, if an operator $Cn:\mathcal{P}(Fr_\mathcal{L})\rightarrow \mathcal{P}(Fr_\mathcal{L})$ satisfies the conditions $(c_1)-(c_4)$ (with $Cn$ instead of $Cn_\mathcal{S}$), then the equivalence
$\quad \quad \quad \quad X \ \vdash_\mathcal{S} \ A \Longleftrightarrow A \in {Cn}(X)$
defines a deductive system, $\mathcal{S}$. Thus (1) allows one to use the deductive system and consequence operator (in a fixed formal language) interchangeably or even in one and the same context. For instance, we call $T_\mathcal{S} = Cn_\mathcal{S}(\emptyset)$ the set of theorems of the system $\vdash_\mathcal{S}$ (i.e. $\mathcal{S}$-theorems), and given a subset $X \subseteq Fr_\mathcal{L}$, $Cn_\mathcal{S}(X)$ is called the $\mathcal{S}$-theory generated by $X$. A subset $X \subseteq Fr_\mathcal{L}$, as well as the theory $Cn_\mathcal{S}(X)$, is called inconsistent if $Cn_\mathcal{S}(X) = Fr_\mathcal{L}$; otherwise both are consistent. Thus, given a system $\vdash_\mathcal{S}$, $T_\mathcal{S}$ is one of the system's theories; that is to say, if $X \subseteq T_\mathcal{S}$ and $X \vdash_\mathcal{S} A$, then $A \in T_\mathcal{S}$. This simple observation sheds light on the central idea of the Lindenbaum method, which will be explained soon. For now, let us fix the ordered pair $\left<\mathfrak{F}_\mathcal{L},T_\mathcal{S}\right>$ and call it a Lindenbaum matrix. (The full definition will be given later.) We note that an operator $Cn$ satisfying $(c_1)-(c_3)$ can be obtained from a "closure system" over $Fr_\mathcal{L}$; that is, for any subset $\mathcal{A}\subseteq \mathcal{P}(Fr_\mathcal{L})$ which is closed under arbitrary intersection, we define:
$\quad \quad \quad \quad Cn_\mathcal{A}(X)=\cap \lbrace Y \ | \ X \subseteq Y \mbox{ and } Y \in \cal{A} \rbrace.$
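The closure-system construction of $Cn_\mathcal{A}$ can be illustrated with a small Python sketch; the three-element universe and the family of closed sets below are a toy example of ours, not from the article:

```python
def make_cn(universe, closed_family):
    """Build Cn from a closure system: Cn(X) is the intersection of all
    members of the family that contain X (the family is assumed closed
    under arbitrary intersection and to contain the universe)."""
    def cn(X):
        out = set(universe)
        for Y in closed_family:
            if X <= Y:
                out &= Y
        return frozenset(out)
    return cn

# Toy closure system over three "formulas":
U = frozenset({"p", "q", "r"})
family = [U, frozenset({"p", "q"}), frozenset({"p"}), frozenset()]
cn = make_cn(U, family)

X = frozenset({"q"})
assert X <= cn(X)                     # (c1) reflexivity
assert cn(cn(X)) == cn(X)             # (c2) idempotency
assert cn(frozenset()) <= cn(X)       # (c3) monotonicity
print(sorted(cn(X)))                  # ['p', 'q']
```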
Another way of defining deductive systems is through the use of logical matrices. Given a language $\mathcal{L}$, a logical $\mathcal{L}$-matrix (or simply a matrix) is a pair $\mathcal{M} = \left<\mathfrak{A},\mathcal{F}\right>$, where $\mathfrak{A}$ is an $\mathcal{L}$-algebra and $\mathcal{F}\subseteq|\mathfrak{A}|$, where the latter is the universe of $\mathfrak{A}$. The set $\mathcal{F}$ is called the filter of the matrix and its elements are called designated. Given a matrix $\mathcal{M} = \left<\mathfrak{A},\mathcal{F}\right>$, the cardinality of $|\mathfrak{A}|$ is also the cardinality of $\mathcal{M}$.
Given a matrix $\mathcal{M}=\left<\mathfrak{A},\mathcal{F}\right>$, any homomorphism of $\mathfrak{F}_\mathcal{L}$ into $\mathfrak{A}$ is called a valuation (or an assignment). Each such homomorphism can be obtained simply by assigning elements of $|\mathfrak{A}|$ to the variables of $Vr_\mathcal{L}$, since, by virtue of Theorem 1, any $v: Vr_\mathcal{L} \longrightarrow |\mathfrak{A}|$ can be extended uniquely to a homomorphism $\hat{v}: \mathfrak{F}_\mathcal{L} \longrightarrow \mathfrak{A}$. Usually, it is $v$ that is meant by a valuation (or an assignment) of variables in a matrix.
Now let $\sigma$ be a substitution and $v$ be any assignment in an algebra $\mathfrak{A}$. Then, defining
$\quad \quad \quad \quad v_{\sigma}=v\circ\sigma, \quad \quad \quad \quad \quad \quad \quad \quad (2)$
we observe that $v_{\sigma}$ is also an assignment in $\mathfrak{A}$.
With each matrix $\mathcal{M}=\left<\mathfrak{A},\mathcal{F}\right>$, we associate a relation $\models_\mathcal{M}$ between subsets of $Fr_\mathcal{L}$ and formulas of $Fr_\mathcal{L}$. Namely we define
$\quad \quad \quad \quad X \ \models_\mathcal{M} \ A \Longleftrightarrow \text{ for every assignment } v, \text{ if } v[X]\subseteq \mathcal{F}, \text{ then } v(A)\in \mathcal{F}$.
Then, we observe that the following properties hold:
$(m_1)$ if $A \in X$, then $X \ \models_\mathcal{M} \ A$ $(m_2)$ if $X\models_\mathcal{M} B$ for all $B\in Y$, and $Y \ \models_\mathcal{M} \ A$, then $X \ \models_\mathcal{M} \ A.$
Also, with help of the definition (2), we derive the following:
$(m_3)$ if $X \ \models_\mathcal{M} \ A$, then for every substitution $\sigma$, $\sigma[X] \ \models_\mathcal{M} \ \sigma(A)$.
Comparing the condition $(m_1)-(m_3)$ with $(s_1)-(s_3)$, we conclude that every matrix defines a structural deductive system and hence, in view of (1), a structural consequence operator.
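As an illustration (our own toy example, not from the article), the relation $\models_\mathcal{M}$ can be computed by brute force for a finite matrix, here the two-element Boolean matrix with designated filter $\{1\}$:

```python
from itertools import product

# Two-element Boolean matrix: algebra {0, 1}, designated filter {1}.
# Formulas are nested tuples in prefix form; bare strings are variables.
OPS = {"imp": lambda a, b: max(1 - a, b), "neg": lambda a: 1 - a}

def value(formula, v):
    if isinstance(formula, str):                  # propositional variable
        return v[formula]
    op, *args = formula
    return OPS[op](*(value(a, v) for a in args))

def entails(premises, conclusion, variables):
    """X |=_M A: every valuation that designates all of X designates A."""
    for vals in product((0, 1), repeat=len(variables)):
        v = dict(zip(variables, vals))
        if all(value(p, v) == 1 for p in premises) and value(conclusion, v) != 1:
            return False
    return True

print(entails(["p", ("imp", "p", "q")], "q", ["p", "q"]))   # True (modus ponens)
print(entails(["q"], "p", ["p", "q"]))                      # False
```

Conditions $(m_1)$-$(m_3)$ can be checked against this `entails` in the same brute-force way.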
Given a system $\mathcal{S}$, suppose a matrix $\mathcal{M}=\left<\mathfrak{A},\mathcal{F}\right>$ satisfies the condition
$\quad \quad \quad \quad$ if $X \ \vdash_\mathcal{S} A$ and $v[X] \subseteq \mathcal{F}$, then $v(A) \in \mathcal{F} \quad \quad \quad \quad (3)$
Then the filter $\mathcal{F}$ is called an $\mathcal{S}$-filter and the matrix $\mathcal{M}$ is called an $\mathcal{S}$-matrix (or an $\mathcal{S}$-model). In view of (3), $\mathcal{S}$-matrices are an important tool in showing that $X \ \vdash_\mathcal{S} \ A$ does not hold. This idea has been employed in proving that one axiom is independent from a group of others in the search for an independent axiomatic system, as well as for semantic completeness results.
As Lindenbaum's famous theorem below explains, every structural system $\mathcal{S}$ has an $\mathcal{S}$-model.
Theorem 2. For any structural deductive system $\mathcal{S}$, the matrix $\left<Fr_\mathcal{L},Cn_\mathcal{S}(\emptyset)\right>$ is an $\mathcal{S}$-model. Moreover, for any formula $A$,
$\quad \quad \quad \quad A \in T_\mathcal{S} \Longleftrightarrow v(A)\in Cn_\mathcal{S}(\emptyset)$ for any valuation $v$.
A matrix $\left<\mathfrak{A},\mathcal{F}\right>$ is said to be weakly adequate for a deductive system $\mathcal{S}$ if for any formula $A$,
$\quad \quad \quad \quad A \in T_\mathcal{S} \Longleftrightarrow v(A)\in \mathcal{F}$ for any valuation $v$.
Thus, according to Theorem 2, every structural system $\mathcal{S}$ has a weakly adequate $\mathcal{S}$-matrix of cardinality less than or equal to $\overline{\overline{\mathcal{V}}}+\aleph_0$.
An $\mathcal{S}$-matrix is called strongly adequate for $\mathcal{S}$ if for any set $X \subseteq Fr_\mathcal{L}$ and any formula $A$,
$\quad \quad \quad \quad X \ \vdash_\mathcal{S} \ A \Longleftrightarrow X \ \models_\mathcal{M} \ A. \quad \quad \quad \quad (4)$
We note that, if $\overline{\overline{\mathcal{V}}} \leq \aleph_{0}$, Theorem 2 cannot be improved to include strong adequacy of a denumerable matrix, for if $\mathcal{S} = IPC$ (intuitionistic propositional calculus), there is no denumerable matrix $\mathcal{M}$ with (4). (Cf. [21].)
#### Historical remarks
A. Tarski seems to be the first who promoted "the view of matrix formation as a general method of constructing systems" [9]. However, matrices had been employed earlier, e.g., by P. Bernays [1] and others, either in the search for an independent axiomatic system or for defining a system different from classical logic. Also, later on, J.C.C. McKinsey [10] used matrices to prove the independence of logical connectives in intuitionistic propositional logic.
Theorem 2 was discovered by A. Lindenbaum. Although this theorem was not published by the author, it had been known in Warsaw-Lvov logic circles at the time. In a published form it appeared for the first time in [9] without proof. Its proof appeared later on in the two independent publications of [8] and [6].
## Wójcicki's theorems
We get more $\mathcal{S}$-matrices by noticing the following. Let $\Sigma_\mathcal{S}$ be an $\mathcal{S}$-theory. The pair $\left<Fr_\mathcal{L},\Sigma_\mathcal{S} \right>$ is called a Lindenbaum matrix relative to $\mathcal{S}$. We observe that for any substitution $\sigma$,
$\quad \quad \quad \quad$ if $X \ \vdash_\mathcal{S} \ A$ and $\sigma[X] \subseteq \Sigma_\mathcal{S}$, then $\sigma(A) \in \Sigma_\mathcal{S}$.
That is to say, any Lindenbaum matrix relative to a system $\mathcal{S}$ is an $\mathcal{S}$-model.
A deductive system $\mathcal{S}$ is said to be uniform if, given a set $X \subseteq Fr_\mathcal{L}$ and a consistent set $Y \subseteq Fr_\mathcal{L}$, $X \cup Y \ \vdash_\mathcal{S} \ A$ and $Vr(Y) \cap Vr(A) = \emptyset$ imply $X \ \vdash_\mathcal{S} \ A$. A system $\mathcal{S}$ is couniform if for any collection $\{X_{i}\}_{i\in I}$ of sets of formulas with $Vr(X_i) \cap Vr(X_j) = \emptyset$, provided $i \neq j$, if the set $\cup\{X_{i}\}_{i\in I}$ is inconsistent, then at least one $X_{i}$ is inconsistent as well.
Theorem 3 (Wójcicki) A structural deductive system $\mathcal{S}$ has a strongly adequate matrix if and only if $\mathcal{S}$ is both uniform and couniform.
For the "if" implication of the statement, the matrix of Theorem 2 is not enough. However, it is possible to extend the original language $\mathcal{L}$ to $\mathcal{L}^{+}$ in such a way that the natural extension $Cn_{\mathcal{S}^{+}}$ of $Cn_{\mathcal{S}}$ onto $\mathcal{L}^{+}$ allows one to define a Lindenbaum matrix $\left<\mathfrak{F}_{\mathcal{L}^{+}},Cn_{\mathcal{S}^{+}}(X)\right>$, for some $X \subseteq Fr_{\mathcal{L}^{+}}$, which is strongly adequate for $\mathcal{S}$. (Cf.[20] for detail.)
A pair $\left<\mathfrak{A}, \{\mathcal{F}_{i}\}_{i\in I}\right>$, where $\mathfrak{A}$ is an $\mathcal{L}$-algebra and each $\mathcal{F}_{i}\subseteq|\mathfrak{A}|$, is called a generalized matrix (or a $g$-matrix for short). A $g$-matrix is a $g$-$\mathcal{S}$-model (or a $g$-$\mathcal{S}$-matrix) if each $\left<\mathfrak{A},\mathcal{F}_{i}\right>$ is an $\mathcal{S}$-model. (In [4] a $g$-matrix is called an atlas.)
Theorem 4 (Wójcicki). For every structural deductive system $\mathcal{S}$, there is a $g$-$\mathcal{S}$-matrix $\mathcal{M}$ of cardinality $\overline{\overline{\mathcal{V}}}+\aleph_{0}$, which is strongly adequate for $\mathcal{S}$.
Indeed, let $\{\Sigma_\mathcal{S}\}$ be the collection of all $\mathcal{S}$-theories. Then the $g$-matrix $\left<Fr_\mathcal{L},\{\Sigma_\mathcal{S}\}\right>$ is strongly adequate for $\mathcal{S}$. (Cf.[20],[4] for detail.)
We note that, alternatively, one could use the notion of a bundle of matrices; a bundle is a set $\{\left<\mathfrak{A},\mathcal{F}_{i}\right> \ | \ i\in I \}$, where $\mathfrak{A}$ is an $\mathcal{L}$-algebra and each $\mathcal{F}_{i}$ is a filter of $\mathfrak{A}$.
### Historical remarks
Theorem 3 was the result of the correction by R. Wójcicki of an erroneous assertion in [7], where the important question on the strong adequacy of a system was raised.
T. Smiley [14] was perhaps the first to propose $g$-matrices (known as Smiley matrices), defined as pairs $\left<\mathfrak{A},Cn \right>$, where $\mathfrak{A}$ is an $\mathcal{L}$-algebra and the operator $Cn: \mathcal{P}(|\mathfrak{A}|) \rightarrow \mathcal{P}(|\mathfrak{A}|)$ satisfies the conditions $(c_1)-(c_3)$ (with $Cn$ instead of $Cn_\mathcal{S}$). Then, Smiley defined $x_1,..., x_n \ \vdash \ y$ if and only if $y \in Cn(\{x_1,...,x_n\})$, where it is assumed that $|\mathfrak{A}| \subseteq U$ for a universal set $U$ of sentences.
## References
[1] Paul Bernays, Untersuchung des Aussagenkalküls der “Principia Mathematica”, Math. Z. 25 (1926), 305–320.
[2] W. J. Blok and Don Pigozzi, Algebraizable logics, Mem. Amer. Math. Soc. 77 (1989), no. 396, vi+78. MR 973361 (90d:03140)
[3] Stanley Burris and H. P. Sankappanavar, A course in universal algebra, Graduate Texts in Mathematics, vol. 78, Springer-Verlag, New York, 1981. MR 648287 (83k:08001)
[4] J. Michael Dunn and Gary M. Hardegree, Algebraic methods in philosophical logic, Oxford Logic Guides, vol. 41, The Clarendon Press Oxford University
Press, New York, 2001, Oxford Science Publications. MR 1858927 (2002j:03001)
[5] J. M. Font, R. Jansana, and D. Pigozzi, A survey of abstract algebraic logic, Studia Logica 74 (2003), no. 1-2, 13–97, Abstract algebraic logic, Part II (Barcelona, 1997). MR 1996593 (2004m:03241)
[6] Hans Hermes, Zur Theorie der aussagenlogischen Matrizen, Math. Z. 53 (1951), 414–418. MR 0040241 (12,663c)
[7] J. Łós and R. Suszko, Remarks on sentential logics, Nederl. Akad.Wetensch. Proc. Ser. A 61 = Indag. Math. 20 (1958), 177–183. MR 0098670 (20 #5125)
[8] Jerzy Łós, On logical matrices, Trav. Soc. Sci. Lett. Wrocław. Ser. B. (1949), no. 19, 42. MR 0089812 (19,724b)
[9] Jan Łukasiewicz and Alfred Tarski, Untersuchungen über den Aussagenkalkül, Comptes rendus des séances de la Société des Sciences et des Lettres de Varsovie, CI III 23 (1930), 30–50.
[10] J. C. C. McKinsey, Proof of the independence of the primitive symbols of Heyting’s calculus of propositions, J. Symbolic Logic 4 (1939), 155–158. MR 0000805 (1,131f)
[11] Iwao Nishimura, On formulas of one variable in intuitionistic propositional calculus., J. Symbolic Logic 25 (1960), 327–331 (1962). MR 0142456 (26 #25)
[12] Helena Rasiowa, An algebraic approach to non-classical logics, North-Holland Publishing Co., Amsterdam, 1974, Studies in Logic and the Foundations of Mathematics, Vol. 78. MR 0446968 (56 #5285)
[13] Helena Rasiowa and Roman Sikorski, The mathematics of metamathematics, third ed., PWN—Polish Scientific Publishers, Warsaw, 1970, Monografie Matematyczne, Tom 41. MR 0344067 (49 #8807)
[14] Timothy Smiley, The independence of connectives, J. Symbolic Logic 27 (1962), 426–436. MR 0172784 (30 #3003)
[15] Stanisław J. Surma, On the origin and subsequent applications of the concept of the Lindenbaum algebra, Logic, methodology and philosophy of science, VI (Hannover, 1979), Stud. Logic Foundations Math., vol. 104, North-Holland, Amsterdam, 1982, pp. 719–734. MR 682440 (84g:01045)
[16] Alfred Tarski, Grundzüge der systemenkalkül. Erter teil, Fundamenta Mathematica 25 (1935), 503–526.
[17] Alfred Tarski, Grundzüge der systemenkalkül. Zweiter teil, Fundamenta Mathematica 26 (1936), 283–301.
[18] Alfred Tarski, A remark on functionally free algebras, Ann. of Math. (2) 47 (1946), 163–165. MR 0015038 (7,360a)
[19] Alfred Tarski, Logic, Semantics, Metamathematics. Papers from 1923 to 1938, Oxford at the Clarendon Press, 1956, Translated by J. H. Woodger. MR 007829 (17,1171a)
[20] Ryszard Wójcicki, Theory of logical calculi, Synthese Library, vol. 199, Kluwer Academic Publishers Group, Dordrecht, 1988, Basic theory of consequence operations. MR 1009788 (90j:03001)
[21] Andrzej Wrónski, On cardinalities of matrices strongly adequate for the intuitionistic propositional logic, Rep. Math. Logic (1974), no. 3, 67–72. MR 0387011 (52 #7858)
How to Cite This Entry:
Lindenbaum method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Lindenbaum_method&oldid=29658
| 6,323
| 19,354
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.546875
| 4
|
CC-MAIN-2021-31
|
latest
|
en
| 0.850013
|
https://bilakniha.cvut.cz/en/predmet10867002.html
| 1,685,830,637,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-23/segments/1685224649343.34/warc/CC-MAIN-20230603201228-20230603231228-00597.warc.gz
| 150,597,952
| 3,754
|
CZECH TECHNICAL UNIVERSITY IN PRAGUE
STUDY PLANS
2022/2023
NOTICE: Study plans for the following academic year are available.
# Ordinary Differential Equations
Code Completion Credits Range
W01T002 ZK 60B
Course guarantor:
Lecturer:
Tomáš Neustupa
Tutor:
Tomáš Neustupa
Supervisor:
Department of Technical Mathematics
Synopsis:
The course is a continuation of Mathematics III or any undergraduate one-semester course in ordinary differential equations. It provides, in a greater depth, a review of concepts and techniques for solving first order equations. Then autonomous systems, geometric aspects of the two-dimensional phase space and stability of solutions are among the main topics studied.
Requirements:
Syllabus of lectures:
1-2. Survey of solution methods for ordinary differential equations of the first order. Geometrical meaning of a differential equation. Equations in differentials.
3-4. Autonomous systems. Explosion of solutions (blow-up). Global solutions. The method of apriori estimates.
5-6. Dynamical systems. Semigroups. Basic notions and properties.
7-8. Partial differential equations of the first order (optional).
9-10. Hamiltonian systems and systems with damping. Conservative, dissipative systems.
11-12. Stability of linear and nonlinear systems. Tests for obtaining stability. Attractors.
13-14. Stability and linearization. Stability and Lyapunov functions.
Syllabus of tutorials:
1-2. Survey of solution methods for ordinary differential equations of the first order. Geometrical meaning of a differential equation. Equations in differentials.
3-4. Autonomous systems. Explosion of solutions (blow-up). Global solutions. The method of apriori estimates.
5-6. Dynamical systems. Semigroups. Basic notions and properties.
7-8. Partial differential equations of the first order (optional).
9-10. Hamiltonian systems and systems with damping. Conservative, dissipative systems.
11-12. Stability of linear and nonlinear systems. Tests for obtaining stability. Attractors.
13-14. Stability and linearization. Stability and Lyapunov functions.
Study Objective:
Study materials:
[1] Stanley J. Farlow: An introduction to differential equations and their applications. McGraw-Hill, Inc., New York 1994. ISBN 0-07-020030-0.
Note:
Time-table for winter semester 2022/2023:
Time-table is not available yet
Time-table for summer semester 2022/2023:
Time-table is not available yet
The course is a part of the following study plans:
Data valid to 2023-06-03
Updates to the above information can be found at https://bilakniha.cvut.cz/en/predmet10867002.html
| 611
| 2,612
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.515625
| 3
|
CC-MAIN-2023-23
|
longest
|
en
| 0.780259
|
https://sts-math.com/post_20.html
| 1,553,613,177,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-13/segments/1552912205534.99/warc/CC-MAIN-20190326135436-20190326161436-00261.warc.gz
| 616,059,563
| 4,732
|
Which expression stands for 32 more than a number d
"32 more than a number d" translates to the expression d + 32 (equivalently, 32 + d): "more than" signals addition.
For example:
Given:
A+B+C = 47
B+C+D = 53
A+C+D = 55
A+B+D = 49
Notice that in the given equations, each of the variables appears three times. So if we add all the given equations, we get three times the sum of the variables (A+B+C+D):
(A+B+C+D) + (A+B+C+D) + (A+B+C+D) = 47 + 53 + 55 + 49 = 204
3(A+B+C+D) = 204
A+B+C+D = 68
The sum of all the variables is 68. If we subtract a given equation, each of which omits exactly one variable, from this computed sum, we get the value of the missing variable.
(A+B+C+D) - (A+B+C) = D
68 - 47 = D
21 = D
(A+B+C+D) - (B+C+D) = A
68 - 53 = A
15 = A
(A+B+C+D) - (A+D+C) = B
68 - 55 = B
13 = B
(A+B+C+D) - (D+A+B) = C
68 - 49 = C
19 = C
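The four subtractions above can be checked mechanically; a minimal Python sketch:

```python
# Verify the worked solution A=15, B=13, C=19, D=21 against all four equations.
A, B, C, D = 15, 13, 19, 21
assert A + B + C == 47
assert B + C + D == 53
assert A + C + D == 55
assert A + B + D == 49
assert A + B + C + D == 68     # three copies of each variable summed to 204
print(A, B, C, D)               # 15 13 19 21
```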
| 338
| 1,067
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.25
| 4
|
CC-MAIN-2019-13
|
latest
|
en
| 0.702967
|
https://assignmentgrade.com/question/640034/
| 1,611,241,831,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00265.warc.gz
| 220,278,855
| 7,422
|
QUESTION POSTED AT 16/04/2020 - 06:59 PM
Half of 32 = 16
By Pythagoras
x = (16^2 + 4^2)^0.5
x = (256 + 16)^0.5 = 272^0.5 ≈ 16.5
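As a quick numeric check of the arithmetic above (a sketch, not part of the original answer):

```python
import math

# Half of 32 is 16; the Pythagorean theorem gives the hypotenuse.
x = math.hypot(16, 4)  # sqrt(16**2 + 4**2) = sqrt(272)
print(round(x, 1))     # 16.5
```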
## Related questions
### What is the least number that rounds up to 300 when rounding to the nearest hundred?
QUESTION POSTED AT 02/06/2020 - 01:56 AM
### Help me answer this thanks
QUESTION POSTED AT 02/06/2020 - 01:40 AM
### How do I solve this problem involving simplifying exponents, squaring, and fractions
QUESTION POSTED AT 02/06/2020 - 01:36 AM
### Use a graphing calculator to solve the equation -3 cost= 1 in the interval from . Round to the nearest hundredth.
QUESTION POSTED AT 02/06/2020 - 01:33 AM
### Find an irrational number that is between 7.7 and 7.9. Explain why it is irrational. Include the decimal approximation of the irrational number to the nearest hundredth. (3 points)
QUESTION POSTED AT 02/06/2020 - 01:29 AM
### Xy=-42 what is the answer to it ?
QUESTION POSTED AT 02/06/2020 - 01:28 AM
### A county's population in 1991 was 147 million. In 1998 it was 153 million. Estimate the population in 2017 using the exponential growth formula. Round your answer to the nearest million.
QUESTION POSTED AT 02/06/2020 - 01:21 AM
### What is the distance between points (6,2) and (4,1) to the nearest tenth?
QUESTION POSTED AT 02/06/2020 - 01:20 AM
### Calculate the average rate of change for the given graph from x = 3 to x = 5 and select the correct answer below. A. 5 B. 3 C. 1 D. 0
QUESTION POSTED AT 02/06/2020 - 01:12 AM
### _____ 18. The scatter plot shows the study times and test scores for a number of students. How long did the person who scored 81 study? Type your answer in the blank to the left. A. 50 minutes B. 81 minutes C. 16 minutes D. 100 minutes
QUESTION POSTED AT 01/06/2020 - 05:03 PM
### _____ 20. Brandon needs \$480 to buy a TV and stereo system for his room. He received \$60 in cash for birthday presents. He plans to save \$30 per week from his part-time job. To find how many weeks w it will take to have \$480, solve 60 + 30w = 480. Type your answer in the blank to the left. A. 16 weeks B. 13 weeks C. 15 weeks D. 14 weeks
QUESTION POSTED AT 01/06/2020 - 04:55 PM
### 14. Tell whether the sequence 1 3 , 0, 1, −2 … is arithmetic, geometric, or neither. Find the next three terms of the sequence. Type your answer in the blank to the left. A. neither; 7, -20, 61 B. geometric;7, -20, 61 C. arithmetic; − 1 3 , 1 1 3 , 3 D. geometric;−3 1 3 , −5 5 9 , −9 7 27
QUESTION POSTED AT 01/06/2020 - 04:49 PM
### Sandy has a measuring cup that can measure to the nearest tenth of a milliliter of sandy measures the oil in each container the greatest amount of oil would measure how many milliliters
QUESTION POSTED AT 01/06/2020 - 04:49 PM
### How would i solve something like 15/3?
QUESTION POSTED AT 01/06/2020 - 04:47 PM
### During a game of pool, Kyle hit a ball and it followed the path from B to C and then to D. The angles formed by the path are congruent. To the nearest tenth of an inch, which of the following represents the total distance the ball traveled before it stopped at point D?
QUESTION POSTED AT 01/06/2020 - 04:47 PM
QUESTION POSTED AT 01/06/2020 - 04:46 PM
### Carlene is saving her money to buy a \$500 desk. She deposits \$400 into an account with an annual interest rate of 6% compounded continuously. The equation 400e^0.06t=500 represents the situation, where t is the number of years the money needs to remain in the account. About how long must Carlene wait to have enough money to buy the desk? Use a calculator and round your answer to the nearest whole number.
QUESTION POSTED AT 01/06/2020 - 04:45 PM
### The population of a town grew from 20,000 to 28,000. The continuous growth rate is 15%. The equation mc024-1.jpg represents the situation, where t is the number of years the population has been growing. About how many years has the population of the town been growing? Use a calculator and round your answer to the nearest whole number.
QUESTION POSTED AT 01/06/2020 - 04:44 PM
### What is the APY for money invested at each rate? (A) 13% compounded quarterly (B) 12% compounded continuously (A) APY_____ (Round to three decimal places as needed.)
QUESTION POSTED AT 01/06/2020 - 04:43 PM
### A painter leans a 25-ft ladder against a building. The base of the ladder is 7 ft from the building. To the nearest foot, how high on the building does the ladder reach? 20 feet 18 feet 26 feet 24 feet
QUESTION POSTED AT 01/06/2020 - 04:42 PM
### The percentage score on a test varies directly as the number of correct responses. Stan answered 29 questions correctly and earned a score of 87. What would Stan's percentage score have been if he had answered 25 questions correctly?
QUESTION POSTED AT 01/06/2020 - 04:39 PM
### Jamal uses the steps below to solve the equation 6x – 4 = 8. Step 1: 6x – 4 + 4 = 8 + 4 Step 2: 6x + 0 = 12 Step 3: 6x = 12 Step 4: = Step 5: 1x = 2 Step 6: x = 2 Which property justifies Step 3 of his work?
QUESTION POSTED AT 01/06/2020 - 04:39 PM
### An investment company pays 8% compounded semiannually. You want to have \$12,000 in the future. (A) How much should you deposit now to have that amount 5 years from now? (Round to the nearest cent) (B) How much should you deposit now to have that amount 10 years from now? (Round to the nearest cent)
QUESTION POSTED AT 01/06/2020 - 04:38 PM
### An investment company pays 8% compounded semiannually. You want to have \$12,000 in the future. How much should you deposit now to have that amount 5 years from now? (Round to the nearest cent)
QUESTION POSTED AT 01/06/2020 - 04:36 PM
### What is 2x+1/3y=240 plzz tell me the answer man
QUESTION POSTED AT 01/06/2020 - 04:36 PM
### Use the continuous compound interest formula to find the indicated value. A=90,000; P=65,452; r=9.1%; t=? t=years (Do not round until the final answer. Then round to two decimal places as needed.)
QUESTION POSTED AT 01/06/2020 - 04:36 PM
### A ball is thrown from initial height of 2 feet with an initial upward velocity of 35 ft./s. The ball's height H (in feet) after T seconds is given by the following. H=2+35T-16T^2 Find all values of T for which the ball's height is 20 feet. Round your answer(s) to the nearest hundredth.
QUESTION POSTED AT 01/06/2020 - 04:33 PM
### Can someone help me answer these 2 questions?
QUESTION POSTED AT 01/06/2020 - 04:31 PM
### Two buildings stand 90ft apart at their closest points. At those points, the angle of depression from the top of the taller building to the top of the shorter building is 12 degrees. How much taller is the taller building? Round your answer to the nearest foot.
QUESTION POSTED AT 01/06/2020 - 04:27 PM
### A 12-ft long ladder is leadning against a wall and makes an 80 degree angle with the ground. How high up the wall does the ladder reach, and how far is the base of the ladder from the base of the wall? Round to the nearest inch.
QUESTION POSTED AT 01/06/2020 - 04:27 PM
| 2,073
| 6,969
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.359375
| 3
|
CC-MAIN-2021-04
|
latest
|
en
| 0.893317
|
http://www.societyofrobots.com/robotforum/index.php?topic=14823.0
| 1,417,037,953,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2014-49/segments/1416931007510.17/warc/CC-MAIN-20141125155647-00095-ip-10-235-23-156.ec2.internal.warc.gz
| 831,853,285
| 14,241
|
### Author Topic: Arduino Port Manipulation (Read 5984 times)
0 Members and 2 Guests are viewing this topic.
#### DTM22
• Jr. Member
• Posts: 32
##### Arduino Port Manipulation
« on: November 17, 2011, 02:59:59 PM »
I'm attempting to use port manipulation for a project I'm working on where the timing is very crucial. I've read several tutorials about it online, but I am still unclear on how to read whether the appropriate INPUT pins read HIGH or LOW. So far I've determined I need to write "DDRD=B00111100;" to set pins 1, 2, 6 and 7 as INPUTS and the rest as OUTPUTS.
The only ones I'm actually using are 6 and 7. I read not to change pins 1 and 2, as altering their state could disrupt the functionality of the microcontroller.
I've determined I need to use the "PIND" function, but I'm not certain how to write the appropriate code for what I want to do...
Does anyone have any knowledge about port manipulation that could help me out? I need to check when pins 6 and 7 respectively change state from LOW to HIGH. Any help would be much appreciated!
#### rbtying
• Supreme Robot
• Posts: 452
##### Re: Arduino Port Manipulation
« Reply #1 on: November 17, 2011, 03:10:18 PM »
So, pins 6 and 7 in Arduino are PD6 and PD7 in the ATMega328P header definitions.
To set them as inputs:
Code: [Select]
`DDRD &= ~( ( 1 << 6 ) | ( 1 << 7 ) );`
This does a bitwise AND to set the 6th and 7th bits of DDRD to 0, thereby putting them in input mode.
To check the values:
Code: [Select]
`bool pin_6_is_high = PIND & ( 1 << 6 );`
`bool pin_7_is_high = PIND & ( 1 << 7 );`
DDRD, PIND, and PORTD are not functions, they're registers in the memory of the microcontroller. As such, you manipulate them via bitwise operations--it is rather dangerous to do as you said ("DDRD=B00111100;") as you're not taking into account the current state of the register, which may lead to unexpected consequences.
Some useful things to know:
Setting a bit in a register:
Code: [Select]
`REGISTER |= ( 1 << BIT );`
Clearing a bit in a register:
Code: [Select]
`REGISTER &= ~( 1 << BIT );`
Flipping a bit in a register:
Code: [Select]
`REGISTER ^= ( 1 << BIT );`
Checking a bit in a register:
Code: [Select]
`bool status = REGISTER & ( 1 << BIT );`
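As an aside (not from the original thread), the same four idioms can be sketched in Python, with an ordinary integer standing in for an 8-bit register; the names and bit choices here are illustrative only:

```python
REG = 0b00000000  # stands in for a hardware register such as DDRD

REG |= (1 << 6)            # set bit 6        -> 0b01000000
REG ^= (1 << 7)            # flip bit 7 (0->1) -> 0b11000000
REG &= ~(1 << 6) & 0xFF    # clear bit 6; mask to 8 bits as on an AVR
status = bool(REG & (1 << 7))  # check bit 7

print(f"{REG:08b}", status)  # 10000000 True
```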
#### DTM22
• Jr. Member
• Posts: 32
##### Re: Arduino Port Manipulation
« Reply #2 on: November 17, 2011, 03:43:47 PM »
Thanks a lot! Now that I know how to check the value of a pin I have just one more question: how would I go about checking the pin's state and having a particular function performed when it reads HIGH? A typical `if` statement, a `while` loop? How would I write that in code?
#### joe61
• Supreme Robot
• Posts: 417
##### Re: Arduino Port Manipulation
« Reply #3 on: November 17, 2011, 05:53:35 PM »
It depends a little on what the input will be. For most things other than a button press you can use a pin change interrupt. For example, if you have a sensor connected to pin 6, which signals an event by putting 5V on the pin, then a pin change interrupt is fine. You don't want to mix button presses and interrupts though, because buttons bounce when pressed, and you're likely to get more than one interrupt for any given press.
The ATmega328 data sheet gives details in section 12 "External Interrupts", but basically you can set either pin to trigger an interrupt when it toggles. See also the avr-libc page on interrupt handling.
I haven't done much with these interrupts, so take this for what it's worth. Hook up LEDs to pins PB0 and PB1, and toggle PD6 and PD7 high and low.
Code: [Select]
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define pin6 (1 << PD6)
#define pin7 (1 << PD7)

ISR (PCINT2_vect)
{
    static uint8_t pd6 = 0;
    static uint8_t pd7 = 0;
    uint8_t tmp6;
    uint8_t tmp7;

    // Get the current state of the two pins
    tmp6 = (PIND & (1 << PD6));
    tmp7 = (PIND & (1 << PD7));

    if (tmp6 != pd6) {
        // PD6 toggled, save current value and
        // toggle associated LED
        pd6 = tmp6;
        PORTB ^= (1 << PB0);
    }

    if (tmp7 != pd7) {
        // PD7 toggled, save current value and
        // toggle associated LED
        pd7 = tmp7;
        PORTB ^= (1 << PB1);
    }
}

int main()
{
    // Set PD6 and PD7 as inputs
    DDRD &= ~((1 << PD6) | (1 << PD7));

    // Set PB0 and PB1 as output (LED indicators)
    DDRB |= (1 << PB0) | (1 << PB1);

    // Specify pins PD6 and PD7 as interrupt
    // sources. See data sheet 12.2.6
    PCMSK2 |= (1 << PCINT23) | (1 << PCINT22);

    // Enable pin change interrupt for pins
    // PCINT23 - 16
    PCICR |= (1 << PCIE2);

    // Enable interrupts globally
    sei ();

    for (;;) {
    }
}
This is just throwaway code that may not even work under some circumstances. The interrupt handler could be more efficient, etc., but hopefully it gives the general idea.
Joe
« Last Edit: November 17, 2011, 05:56:25 PM by joe61 »
#### DTM22
• Jr. Member
• Posts: 32
##### Re: Arduino Port Manipulation
« Reply #4 on: November 18, 2011, 09:15:44 AM »
Thanks, but I'd rather not use interrupts if I can avoid it. Is there any other way to do what I want?
#### joe61
• Supreme Robot
• Posts: 417
##### Re: Arduino Port Manipulation
« Reply #5 on: November 18, 2011, 10:24:09 AM »
Ignore this, I hit tab out of habit and wound up posting before I was done.
« Last Edit: November 18, 2011, 10:27:45 AM by joe61 »
#### joe61
• Supreme Robot
• Posts: 417
##### Re: Arduino Port Manipulation
« Reply #6 on: November 18, 2011, 10:26:59 AM »
Sure, you can just poll the pins at whatever interval you like, for example
Code: [Select]
int main()
{
    DDRD &= ~((1 << PD6) | (1 << PD7));

    for (;;) {
        if (PIND & (1 << PD6))
            handlePD6 ();
        if (PIND & (1 << PD7))
            handlePD7 ();
        _delay_ms (100);
    }
}
Where handlePD[67] are functions you write to do what you want.
Joe
#### DTM22
• Jr. Member
• Posts: 32
##### Re: Arduino Port Manipulation
« Reply #7 on: November 18, 2011, 02:19:10 PM »
Thanks, I'm trying to apply this to my code but I keep getting an error message saying PD6 and PD7 were not declared in this scope. How do I go about declaring them?
#### Soeren
• Supreme Robot
• Posts: 4,672
##### Re: Arduino Port Manipulation
« Reply #8 on: November 18, 2011, 02:27:16 PM »
Hi,
No offense meant, but why are you posting software questions in "electronics"?
They really ought to go into "software".
Quote from: joe61
You don't want to mix button presses and interrupts though, because buttons bounce when pressed, and you're likely to get more than one interrupt for any given press.
Interrupt control is the best way for lots of button press decoding cases (depending on the app, of course). Contact bounce is easily handled in an ISR.
Regards,
Søren
A rather fast and fairly heavy robot with quite large wheels needs what? A lot of power?
Engineering is based on numbers - not adjectives
#### DTM22
• Jr. Member
• Posts: 32
##### Re: Arduino Port Manipulation
« Reply #9 on: November 18, 2011, 02:39:04 PM »
Sorry about that, I've moved the post over to the appropriate section.
#### bens
• Expert Roboticist
• Supreme Robot
• Posts: 334
##### Re: Arduino Port Manipulation
« Reply #10 on: November 19, 2011, 11:43:35 PM »
Thanks, I'm trying to apply this to my code but I keep getting an error message saying PD6 and PD7 were not declared in this scope. How do I go about declaring them?
PD6 and PD7 are automatically defined when you are using the ATmega168, but the ATmega328P include files replaced this style of pin definition with PORTD6 and PORTD7. If you still want to use "PD6" and "PD7", you can insert the following at the top of your program:
#define PD6 PORTD6
#define PD7 PORTD7
Alternatively, you can just use the numbers 6 and 7 for PD6 and PD7, respectively, as this is how they're ultimately defined on the ATmega328P.
- Ben
| 2,296
| 7,855
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.59375
| 3
|
CC-MAIN-2014-49
|
longest
|
en
| 0.905174
|
http://mechguru.com/machine-design/din-iso-286-tolerance-fundamental-deviation-tables-shaft/
| 1,555,655,217,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-18/segments/1555578527148.46/warc/CC-MAIN-20190419061412-20190419083412-00242.warc.gz
| 108,328,013
| 29,701
|
DIN ISO 286 Fundamental Deviation Tables for Tolerance Calculation of Shaft
You will require ISO 286 tolerance and fundamental deviation (FD) table for reading the shaft drawing with deviation symbol and IT grade numbers.
As you know that ISO 286 is an equivalent standard of JIS B 0401, DIN ISO 286, BS EN 20286, CSN EN 20286 so the table will work with those standards as well.
The table is also important for limits and fits calculation using ANSI B 4.2 standard.
Fundamental Deviation Values (in mm) for FD classes, by shaft size range:

| Shaft Size Range (mm) | d | e | f | g | j5 | j6 | j7 | k* | k** | m | n | p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 3 | -0.020 | -0.014 | -0.006 | -0.002 | -0.002 | -0.002 | -0.004 | 0.000 | 0 | 0.002 | 0.004 | 0.006 |
| 3 to 6 | -0.030 | -0.020 | -0.010 | -0.004 | -0.002 | -0.002 | -0.004 | 0.001 | 0 | 0.004 | 0.008 | 0.012 |
| 6 to 10 | -0.040 | -0.025 | -0.013 | -0.008 | -0.002 | -0.002 | -0.005 | 0.001 | 0 | 0.006 | 0.010 | 0.015 |
| 10 to 14 | -0.050 | -0.032 | -0.016 | -0.006 | -0.003 | -0.003 | -0.006 | 0.001 | 0 | 0.007 | 0.012 | 0.018 |
| 14 to 18 | -0.050 | -0.032 | -0.016 | -0.006 | -0.003 | -0.003 | -0.006 | 0.001 | 0 | 0.007 | 0.012 | 0.018 |
| 18 to 24 | -0.065 | -0.040 | -0.020 | -0.007 | -0.003 | -0.003 | -0.008 | 0.002 | 0 | 0.008 | 0.015 | 0.022 |
| 24 to 30 | -0.065 | -0.040 | -0.020 | -0.007 | -0.003 | -0.003 | -0.008 | 0.002 | 0 | 0.008 | 0.015 | 0.022 |
| 30 to 40 | -0.080 | -0.050 | -0.025 | -0.009 | -0.004 | -0.004 | -0.010 | 0.002 | 0 | 0.009 | 0.017 | 0.026 |
| 40 to 50 | -0.080 | -0.050 | -0.025 | -0.009 | -0.004 | -0.004 | -0.010 | 0.002 | 0 | 0.009 | 0.017 | 0.026 |
| 50 to 65 | -0.100 | -0.060 | -0.030 | -0.010 | -0.005 | -0.005 | -0.012 | 0.002 | 0 | 0.011 | 0.020 | 0.032 |
| 65 to 80 | -0.100 | -0.060 | -0.030 | -0.010 | -0.007 | -0.007 | -0.012 | 0.002 | 0 | 0.011 | 0.020 | 0.032 |
| 80 to 100 | -0.120 | -0.072 | -0.036 | -0.012 | -0.009 | -0.009 | -0.015 | 0.003 | 0 | 0.013 | 0.023 | 0.037 |
| 100 to 120 | -0.120 | -0.072 | -0.036 | -0.012 | -0.009 | -0.009 | -0.015 | 0.003 | 0 | 0.013 | 0.023 | 0.037 |
| 120 to 140 | -0.145 | -0.085 | -0.043 | -0.014 | -0.011 | -0.011 | -0.018 | 0.003 | 0 | 0.015 | 0.027 | 0.043 |
| 140 to 160 | -0.145 | -0.085 | -0.043 | -0.014 | -0.011 | -0.011 | -0.018 | 0.003 | 0 | 0.015 | 0.027 | 0.043 |
| 160 to 180 | -0.145 | -0.085 | -0.043 | -0.014 | -0.011 | -0.011 | -0.018 | 0.003 | 0 | 0.015 | 0.027 | 0.043 |
| 180 to 200 | -0.170 | -0.100 | -0.050 | -0.015 | -0.013 | -0.013 | -0.021 | 0.004 | 0 | 0.017 | 0.031 | 0.050 |
| 200 to 250 | -0.170 | -0.100 | -0.050 | -0.015 | -0.013 | -0.013 | -0.021 | 0.004 | 0 | 0.017 | 0.031 | 0.050 |
| 250 to 315 | -0.190 | -0.110 | -0.056 | -0.017 | -0.016 | -0.016 | -0.026 | 0.004 | 0 | 0.020 | 0.034 | 0.056 |
| 315 to 400 | -0.210 | -0.125 | -0.062 | -0.018 | -0.018 | -0.018 | -0.028 | 0.004 | 0 | 0.021 | 0.037 | 0.062 |
| 400 to 500 | -0.230 | -0.135 | -0.068 | -0.020 | -0.020 | -0.020 | -0.032 | 0.005 | 0 | 0.023 | 0.040 | 0.068 |
| 500 to 630 | -0.260 | -0.145 | -0.076 | -0.022 | NA | NA | NA | 0 | 0 | 0.026 | 0.044 | 0.078 |
| 630 to 800 | -0.290 | -0.160 | -0.080 | -0.024 | NA | NA | NA | 0 | 0 | 0.030 | 0.050 | 0.088 |
| 800 to 1000 | -0.320 | -0.170 | -0.086 | -0.026 | NA | NA | NA | 0 | 0 | 0.034 | 0.056 | 0.100 |
| 1000 to 1250 | -0.350 | -0.195 | -0.098 | -0.028 | NA | NA | NA | 0 | 0 | 0.040 | 0.066 | 0.120 |
| 1250 to 1600 | -0.390 | -0.220 | -0.110 | -0.030 | NA | NA | NA | 0 | 0 | 0.048 | 0.078 | 0.140 |
| 1600 to 2000 | -0.430 | -0.240 | -0.120 | -0.032 | NA | NA | NA | 0 | 0 | 0.058 | 0.092 | 0.170 |
| 2000 to 2500 | -0.480 | -0.260 | -0.130 | -0.034 | NA | NA | NA | 0 | 0 | 0.068 | 0.110 | 0.195 |
| 2500 to 3150 | -0.520 | -0.290 | -0.145 | -0.038 | NA | NA | NA | 0 | 0 | 0.076 | 0.135 | 0.240 |
k* – Fundamental deviation values of k class for IT grades IT4, IT5, IT6 and IT7.
k** – Fundamental deviation values of k class EXCEPT for IT grades IT4, IT5, IT6 and IT7.
Fundamental deviation values of h class for all IT grades = 0 (ZERO ).
Fundamental deviation values of j class for all IT grades EXCEPT IT5, IT6, IT7 =
j5, j6, j7 columns of the table indicates the FD values of j class for IT grades 5, 6 and 7 respectively.
ISO 286 fundamental or standard tolerance table for various IT grades and nominal sizes will be shown in next article.
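A fundamental deviation lookup based on a table like the one above can be sketched in code. The helper below is hypothetical (the function and data names are invented) and encodes only a few size bands of the g column, with each band's upper bound inclusive as in the table:

```python
# g-class fundamental deviation values (mm) for shaft sizes in 10-80 mm,
# taken from a few rows of the table above: (band upper bound, FD value).
FD_G = [(18, -0.006), (30, -0.007), (50, -0.009), (80, -0.010)]

def fundamental_deviation_g(size_mm):
    """Return the g-class fundamental deviation for a shaft size in (10, 80] mm."""
    if not 10 < size_mm <= 80:
        raise ValueError("size outside the encoded bands")
    for upper, fd in FD_G:
        if size_mm <= upper:
            return fd

print(fundamental_deviation_g(25))  # 25 mm falls in the 18-30 band: -0.007
```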
Shibashis Ghosh
Hi, I am Shibashis, a blogger by passion and an engineer by profession. I have written most of the articles for mechGuru.com. For more than a decade I have been closely associated with engineering design/manufacturing simulation technologies.
Disclaimer: I work for Altair. mechGuru.com is my personal blog. Although I have tried to give my neutral opinion while writing about different competitors' technologies, I would still like you to read the articles keeping my background in mind.
This site uses Akismet to reduce spam. Learn how your comment data is processed.
| 1,829
| 4,094
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.53125
| 3
|
CC-MAIN-2019-18
|
latest
|
en
| 0.554856
|
https://www.enotes.com/homework-help/f-x-e-x-3-n-4-find-nth-maclaurin-polynomial-810508
| 1,669,830,027,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-49/segments/1669446710765.76/warc/CC-MAIN-20221130160457-20221130190457-00670.warc.gz
| 815,842,431
| 18,720
|
# `f(x) = e^(x/3) , n=4` Find the n'th Maclaurin polynomial for the function.
Maclaurin series is a special case of Taylor series that is centered at a=0. The expansion of the function about `0` follows the formula:
`f(x)=sum_(n=0)^oo (f^n(0))/(n!) x^n`
or
`f(x)= f(0)+(f'(0)x)/(1!)+(f^2(0))/(2!)x^2+(f^3(0))/(3!)x^3+(f^4(0))/(4!)x^4 +... `
We may apply the formula for Maclaurin series to determine the Maclaurin polynomial of degree `n=4` for the given function `f(x)=e^(x/3)` .
Apply derivative formula for exponential function: `d/(dx) e^u = e^u * (du)/(dx)` to list `f^n(x)` as:
Let `u =x/3` then `(du)/(dx)= 1/3`
Applying the values on the derivative formula for exponential function, we get:
`d/(dx) e^(x/3) = e^(x/3) *(1/3)`
`= e^(x/3)/3 or 1/3e^(x/3)`
Applying `d/(dx) e^(x/3)= 1/3e^(x/3)` for each `f^n(x)` , we get:
`f'(x) = d/(dx) e^(x/3)`
`=1/3e^(x/3)`
`f^2(x) = d/(dx) (1/3e^(x/3))`
`=1/3 *d/(dx)e^(x/3)`
`=1/3 *(1/3e^(x/3))`
`=1/9e^(x/3)`
`f^3(x) = d/(dx) (1/9e^(x/3))`
`=1/9 *d/(dx) e^(x/3)`
`=1/9 *(1/3e^(x/3))`
`=1/27e^(x/3)`
`f^4(x) = d/(dx) (1/27e^(x/3))`
`=1/27 *d/(dx) e^(x/3)`
`=1/27 *(1/3e^(x/3))`
`=1/81e^(x/3)`
Plug-in `x=0` on each `f^n(x)` , we get:
`f(0)=e^(0/3) = 1`
`f'(0)=1/3e^(0/3) = 1/3`
`f^2(0)=1/9e^(0/3)=1/9`
`f^3(0)=1/27e^(0/3)=1/27`
`f^4(0)=1/81e^(0/3)=1/81`
Note: `e ^(0/3) = e^0 =1`.
Plug-in the values on the formula for Maclaurin series, we get:
`f(x)=sum_(n=0)^4 (f^n(0))/(n!) x^n`
`= 1+(1/3)/(1!)x+(1/9)/(2!)x^2+(1/27)/(3!)x^3+(1/81)/(4!)x^4`
`=1+1/3x+1/18x^2+1/162x^3+1/1944x^4`
The Maclaurin polynomial of degree n=4 for the given function `f(x)=e^(x/3)` will be:
`P_4(x)=1+1/3x+1/18x^2+1/162x^3+1/1944x^4`
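As a quick numerical sanity check (not part of the original solution), the degree-4 polynomial derived above can be compared against e^(x/3) near x = 0:

```python
import math

def p4(x):
    # P_4(x) = 1 + x/3 + x^2/18 + x^3/162 + x^4/1944
    return 1 + x/3 + x**2/18 + x**3/162 + x**4/1944

x = 0.3
print(p4(x), math.exp(x/3))  # the two values agree to better than 1e-6
```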
Approved by eNotes Editorial Team
| 863
| 1,727
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.375
| 4
|
CC-MAIN-2022-49
|
latest
|
en
| 0.710164
|
https://leanprover-community.github.io/mathlib4_docs/Mathlib/Algebra/Group/Aut.html
| 1,702,065,729,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-50/segments/1700679100769.54/warc/CC-MAIN-20231208180539-20231208210539-00498.warc.gz
| 402,201,439
| 6,579
|
# Documentation
Mathlib.Algebra.Group.Aut
# Multiplicative and additive group automorphisms #
This file defines the automorphism group structure on AddAut R := AddEquiv R R and MulAut R := MulEquiv R R.
## Implementation notes #
The definition of multiplication in the automorphism groups agrees with function composition, multiplication in Equiv.Perm, and multiplication in CategoryTheory.End, but not with CategoryTheory.comp.
This file is kept separate from Data/Equiv/MulAdd so that GroupTheory.Perm is free to use equivalences (and other files that use them) before the group structure is defined.
## Tags #
@[reducible]
def AddAut (A : Type u_4) [Add A] :
Type u_4
The group of additive automorphisms.
Equations
Instances For
@[reducible]
def MulAut (M : Type u_4) [Mul M] :
Type u_4
The group of multiplicative automorphisms.
Equations
Instances For
instance MulAut.instGroupMulAut (M : Type u_2) [Mul M] :
The group operation on multiplicative automorphisms is defined by g h => MulEquiv.trans h g. This means that multiplication agrees with composition, (g*h)(x) = g (h x).
Equations
instance MulAut.instInhabitedMulAut (M : Type u_2) [Mul M] :
Equations
• = { default := 1 }
@[simp]
theorem MulAut.coe_mul (M : Type u_2) [Mul M] (e₁ : ) (e₂ : ) :
(e₁ * e₂) = e₁ e₂
@[simp]
theorem MulAut.coe_one (M : Type u_2) [Mul M] :
1 = id
theorem MulAut.mul_def (M : Type u_2) [Mul M] (e₁ : ) (e₂ : ) :
e₁ * e₂ = MulEquiv.trans e₂ e₁
theorem MulAut.one_def (M : Type u_2) [Mul M] :
theorem MulAut.inv_def (M : Type u_2) [Mul M] (e₁ : ) :
e₁⁻¹ =
@[simp]
theorem MulAut.mul_apply (M : Type u_2) [Mul M] (e₁ : ) (e₂ : ) (m : M) :
(e₁ * e₂) m = e₁ (e₂ m)
@[simp]
theorem MulAut.one_apply (M : Type u_2) [Mul M] (m : M) :
1 m = m
@[simp]
theorem MulAut.apply_inv_self (M : Type u_2) [Mul M] (e : ) (m : M) :
e (e⁻¹ m) = m
@[simp]
theorem MulAut.inv_apply_self (M : Type u_2) [Mul M] (e : ) (m : M) :
e⁻¹ (e m) = m
def MulAut.toPerm (M : Type u_2) [Mul M] :
Monoid hom from the group of multiplicative automorphisms to the group of permutations.
Equations
• One or more equations did not get rendered due to their size.
Instances For
instance MulAut.applyMulDistribMulAction {M : Type u_4} [] :
The tautological action by MulAut M on M.
This generalizes Function.End.applyMulAction.
Equations
@[simp]
theorem MulAut.smul_def {M : Type u_4} [] (f : ) (a : M) :
f a = f a
instance MulAut.apply_faithfulSMul {M : Type u_4} [] :
MulAut.applyDistribMulAction is faithful.
Equations
def MulAut.conj {G : Type u_3} [] :
G →*
Group conjugation, MulAut.conj g h = g * h * g⁻¹, as a monoid homomorphism mapping multiplication in G into multiplication in the automorphism group MulAut G. See also the type ConjAct G for any group G, which has a MulAction (ConjAct G) G instance where conj G acts on G by conjugation.
Equations
• One or more equations did not get rendered due to their size.
Instances For
@[simp]
theorem MulAut.conj_apply {G : Type u_3} [] (g : G) (h : G) :
(MulAut.conj g) h = g * h * g⁻¹
@[simp]
theorem MulAut.conj_symm_apply {G : Type u_3} [] (g : G) (h : G) :
(MulEquiv.symm (MulAut.conj g)) h = g⁻¹ * h * g
@[simp]
theorem MulAut.conj_inv_apply {G : Type u_3} [] (g : G) (h : G) :
(MulAut.conj g)⁻¹ h = g⁻¹ * h * g
The group operation on additive automorphisms is defined by g h => AddEquiv.trans h g. This means that multiplication agrees with composition, (g*h)(x) = g (h x).
Equations
Equations
• = { default := 1 }
@[simp]
theorem AddAut.coe_mul (A : Type u_1) [Add A] (e₁ : ) (e₂ : ) :
(e₁ * e₂) = e₁ e₂
@[simp]
1 = id
theorem AddAut.mul_def (A : Type u_1) [Add A] (e₁ : ) (e₂ : ) :
e₁ * e₂ = AddEquiv.trans e₂ e₁
theorem AddAut.inv_def (A : Type u_1) [Add A] (e₁ : ) :
e₁⁻¹ =
@[simp]
theorem AddAut.mul_apply (A : Type u_1) [Add A] (e₁ : ) (e₂ : ) (a : A) :
(e₁ * e₂) a = e₁ (e₂ a)
@[simp]
theorem AddAut.one_apply (A : Type u_1) [Add A] (a : A) :
1 a = a
@[simp]
theorem AddAut.apply_inv_self (A : Type u_1) [Add A] (e : ) (a : A) :
e⁻¹ (e a) = a
@[simp]
theorem AddAut.inv_apply_self (A : Type u_1) [Add A] (e : ) (a : A) :
e (e⁻¹ a) = a
Monoid hom from the group of multiplicative automorphisms to the group of permutations.
Equations
• One or more equations did not get rendered due to their size.
Instances For
instance AddAut.applyDistribMulAction {A : Type u_4} [] :
The tautological action by AddAut A on A.
This generalizes Function.End.applyMulAction.
Equations
@[simp]
theorem AddAut.smul_def {A : Type u_4} [] (f : ) (a : A) :
f a = f a
instance AddAut.apply_faithfulSMul {A : Type u_4} [] :
AddAut.applyDistribMulAction is faithful.
Equations
def AddAut.conj {G : Type u_3} [] :
Additive group conjugation, AddAut.conj g h = g + h - g, as an additive monoid homomorphism mapping addition in G into multiplication in the automorphism group AddAut G (written additively in order to define the map).
Equations
• One or more equations did not get rendered due to their size.
Instances For
@[simp]
theorem AddAut.conj_apply {G : Type u_3} [] (g : G) (h : G) :
| 1,651
| 4,938
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.859375
| 3
|
CC-MAIN-2023-50
|
longest
|
en
| 0.58983
|
http://sites.cdnis.edu.hk/students/120035/2017/03/20/is-maths-invented-or-discovered/
| 1,511,414,319,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2017-47/segments/1510934806736.55/warc/CC-MAIN-20171123050243-20171123070243-00189.warc.gz
| 258,402,219
| 8,757
|
# Is Maths Invented or Discovered
The question over whether mathematics was invented or discovered has been long debated throughout history. On one hand, it seems almost intuitive to suppose that mathematics is discovered: after all, numbers exist regardless of whether or not an intelligent enough species were capable of quantifying, expressing and playing with these intrinsically existent numbers. On the other hand, when studying the intricacies of mathematics, it seems almost as if it is simply a game, following rules; and when necessary, can be utilised in such a way that it may be able to reflect some aspects of the real world. Yet taking a more holistic point of view, the theory that mathematics is invented and follows basic rules is, to some extent, fallible. This is because although to some it may appear that some of the rules of mathematics (such as square roots, squares, BEDMAS, exponents) are simply true by definition, they are in fact true because in its most innate beginning, we built these ‘rules’ upon the very rational and real properties and functions of the ‘discovered’ numbers. The concepts of ‘squaring a number’ or ‘multiplying before adding’ often seem abstract; yet when you actually examine these concepts, it becomes clear that this process of simplification (that squaring a number is just that number times itself; or multiplying two numbers is just the first number adding itself the second number of times) is the reason why mathematics is endowed with this stigma of being abstract. Certainly there exist many abstract components of maths, such as negative numbers and imaginary numbers; and there really is no counterargument to how, or perhaps why, these concepts exist concretely in reality. That doesn’t, however, immediately justify the argument that maths is simply a game with rules.
To me, this rather reductive perspective completely disregards the very concrete foundations of mathematics, almost ignoring the fact that maths has played such an important role in representing reality. Yet again, fundamentally, everything mathematics has been built to represent and accurately reflect exists because we humans have quantified and brought into existence quantifiable units, numbers, and expressions for such concepts. Mathematics and numbers can be used to accurately express the speed of light, the speed of cars, lengths of polygons, angles of shapes, etc.; but the only reason they are able to do this with such eerie real-life accuracy is because we’ve created these real-life concepts, founded upon mathematical foundations. Speed isn’t an inherent concept. It’s something we as humans have artificially created; defined as distance (another artificial creation) over time (controversially, another creation). Because speed, and similar derived concepts, are founded upon mathematical reasoning (that speed will always equal the measurable distance of something divided by the time it takes for an object to cross this distance), it would thus be illogical and irrational to argue that mathematics works independently of reality, if this reality is indeed founded upon mathematics. This really does not make a lot of sense; please excuse my poor train of thought. This was a very stream-of-consciousness piece, and I keep getting confused with what I’m trying to say, and I can’t be bothered to properly organise my thoughts, so sorry.
https://www.clutchprep.com/physics/practice-problems/144426/for-the-circuit-shown-in-the-figure-below-we-wish-to-find-the-currents-i1-i2-and
# Problem: For the circuit shown in the figure below, we wish to find the currents I1, I2, and I3.(a) Use Kirchhoff's rules to complete the equation for the upper loop. (Use any variable or symbol stated above as necessary. All currents are given in amperes. Do not enter any units.)(b) Use Kirchhoff's rules to complete the equation for the lower loop. (Use any variable or symbol stated above as necessary. All currents are given in amperes. Do not enter any units.)(c) Use Kirchhoff's rules to obtain an equation for the junction on the left side. (Use any variable or symbol stated above as necessary. All currents are given in amperes.)(d) Solve the junction equation for I3.(e) Using the equation found in part (d), eliminate I3 from the equation found in part (b). (Use any variable or symbol stated above as necessary. All currents are given in amperes. Do not enter any units.)(f) Solve the equations found in part (a) and part (e) simultaneously for the two unknowns for I1 and I2, respectively.(g) Substitute the answers found in part (f) into the junction equation found in part (d), solving for I3.
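Since the circuit figure is not reproduced here, the component values in the sketch below are hypothetical stand-ins: two EMFs ε1, ε2 and resistors R1, R2, R3, wired so that the junction gives I3 = I1 + I2 and each loop contains one EMF, one outer resistor and the shared resistor R3. The code only illustrates the elimination-and-substitution procedure of parts (d)-(g):

```python
# Hypothetical layout (the real figure is not shown above):
#   junction:    I3 = I1 + I2
#   upper loop:  emf1 - R1*I1 - R3*I3 = 0
#   lower loop:  emf2 - R2*I2 - R3*I3 = 0
def solve_two_loop(emf1, emf2, R1, R2, R3):
    """Follow parts (d)-(g): substitute I3 = I1 + I2 into both loop
    equations, solve the resulting 2x2 system for I1 and I2, then
    back-substitute for I3. Returns (I1, I2, I3) in amperes."""
    # After eliminating I3:
    #   (R1 + R3)*I1 + R3*I2        = emf1
    #   R3*I1        + (R2 + R3)*I2 = emf2
    a, b, c = R1 + R3, R3, emf1
    d, e, f = R3, R2 + R3, emf2
    det = a * e - b * d          # Cramer's rule on the 2x2 system
    I1 = (c * e - b * f) / det
    I2 = (a * f - c * d) / det
    I3 = I1 + I2                 # junction equation, part (g)
    return I1, I2, I3
```

With ε1 = 12 V, ε2 = 9 V and R1, R2, R3 = 2, 3, 4 Ω this returns I1 = 24/13 A, I2 = 3/13 A, I3 = 27/13 A, and both loop equations check out with zero residual.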
http://settheory.net/sets/graphs
## 2.6. Graphs
A graph is a set of ordered pairs. Let us denote the class of graphs as gr:
gr R ⇔ (Set R ∧ ∀x∈R, Fnc x ∧ Dom x = V2).
For any binder Q and any graph G, the formula Qz∈G, A(z0,z1), which binds the variable z = (z0, z1) on a binary structure A definite on G, can be seen as binding two variables z0, z1 on A(z0, z1), and thus be denoted with a pair of variables: Q(x,y)∈G, A(x,y).
The transpose of an ordered pair is
t(x,y) = (y,x)
The transpose of a graph R is the image of transposition over it:
tR = {(y,x)|(x,y) ∈ R}
We define the domain and the image of a graph as the respective images of π0 and π1 over it:
Dom R = {x|(x,y) ∈ R}
Im R = {y|(x,y) ∈ R} = Dom tR
### Currying notation
Graphs R can be expressed in curried forms as the functors R⃗ and R⃖:
∀x, R⃗(x) = {y | (x,y) ∈ R}
∀y, R⃖(y) = {x | (x,y) ∈ R} = tR⃗(y)
∀x,y, (y ∈ R⃗(x) ⇔ (x,y) ∈ R ⇔ x ∈ R⃖(y))
∀x, (x ∈ Dom R ⇔ R⃗(x) ≠ ∅)
Dom R ⊂ E ⇒ (Im R = ⋃x∈E R⃗(x) ∧ ∀y, R⃖(y) = {x∈E | (x,y) ∈ R})
They can also appear as functions
R⃗|E = (E ∋ x ↦ R⃗(x))
R⃖|F = (F ∋ y ↦ R⃖(y))
and in particular as the function forms R⃗|Dom R and R⃖|Im R.
### Functional graphs
The graph of a function f is defined by
Gr f = {(x,f(x)) | x ∈ Dom f}
∀x,y, ((x,y) ∈ Gr f ⇔ (x ∈ Dom f ∧ y = f(x)))
Dom f = Dom Gr f
Im f = Im Gr f
For any function f and any graph R,
Gr f ⊂ R ⇔ ∀x∈Dom f, f(x) ∈ R⃗(x)
R ⊂ Gr f ⇔ (Dom R ⊂ Dom f ∧ ∀(x,y)∈R, y = f(x))
R = Gr f ⇔ (Dom R ⊂ Dom f ∧ ∀x∈Dom f, R⃗(x) = {f(x)})
A graph R is functional if it is the graph of a function. This condition is equivalently written in either way
∀x∈Dom R, ∃!y, (x,y) ∈ R
∀x,y∈R, (x0 = y0 ⇒ x1 = y1).
It is then the graph of the unique function ℩R = ((Dom R) ∋ x ↦ ℩(R⃗(x))).
For any set E we shall denote 𝛿E = Gr IdE.
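These definitions are easy to exercise on finite sets. The following sketch (Python, my own illustration rather than anything from the source text) represents a graph as a set of ordered pairs and implements the transpose, the curried forms and the functionality test:

```python
def dom(R):            # Dom R = image of pi0 over the graph
    return {x for (x, y) in R}

def im(R):             # Im R = image of pi1 over the graph
    return {y for (x, y) in R}

def transpose(R):      # tR = {(y,x) | (x,y) in R}
    return {(y, x) for (x, y) in R}

def fwd(R, x):         # curried form: R⃗(x) = {y | (x,y) in R}
    return {y for (u, y) in R if u == x}

def back(R, y):        # curried form: R⃖(y) = {x | (x,y) in R}
    return {x for (x, v) in R if v == y}

def is_functional(R):  # every x in Dom R relates to exactly one y
    return all(len(fwd(R, x)) == 1 for x in dom(R))

def graph_of(f, domain):   # Gr f for a function given on a finite domain
    return {(x, f(x)) for x in domain}
```

For example, `graph_of(lambda x: x * x, {1, 2, 3})` is a functional graph, while a set of pairs containing both (1, 'a') and (1, 'b') is not.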
### Indexed partitions
Two sets E and F are called disjoint when E∩F = ∅, or equivalently ∀x∈E, x∉F.
A family of sets (Ai)i∈I is called pairwise disjoint when ∀i≠j∈I, Ai∩Aj = ∅
For any graph R with Im R ⊂ F, the family (R⃖(y))y∈F is pairwise disjoint if and only if R is functional:
(∀y,z∈F, y≠z ⇒ R⃖(y)∩R⃖(z) = ∅) ⇔ (∀(x,y)∈R, ∀z∈F, ((x,z) ∈ R ⇒ y = z))
For any function f and any y we define the fiber of y under f as
f⃖(y) = {x∈Dom f | f(x) = y} = (Gr f)⃖(y)
When f : E → F this defines a family f⃖ = (f⃖(y))y∈F of pairwise disjoint subsets of E:
∀y,z∈F, (f⃖(y) ∩ f⃖(z) ≠ ∅ ⇒ ∃x∈f⃖(y)∩f⃖(z), y = f(x) = z)
⋃y∈F f⃖(y) = E
Im f = {y∈F | f⃖(y) ≠ ∅}.
An indexed partition of a set E is a family of nonempty, pairwise disjoint subsets of E, whose union is E.
In other words it is any family of the form (f⃖(y))y∈Im f for any function f with domain E.
### Sum or disjoint union
The binder ∐ of the sum of any family of sets (Ei)i∈I gives a graph ∐i∈I Ei defined as
∐i∈I Ei = ⋃i∈I {(i,x) | x∈Ei}
∀i,x, ((i,x) ∈ ∐i∈I Ei ⇔ (i∈I ∧ x∈Ei))
(∀(i,x)∈∐i∈I Ei, A(i,x)) ⇔ (∀i∈I, ∀x∈Ei, A(i,x))
(∀i∈I, Ei ⊂ E′i) ⇔ ∐i∈I Ei ⊂ ∐i∈I E′i
E0 ⊔ … ⊔ En−1 = ∐i∈Vn Ei
This binder serves as inverse of currying: R = ∐i∈I Ei ⇔ (Dom R ⊂ I ∧ ∀i∈I, R⃗(i) = Ei).
It is also called disjoint union, as the family of copies {(i,x) | x∈Ei} of each Ei is pairwise disjoint, with the function from R to I given by π0.
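A small illustration of the sum and of its inverse by currying, again on finite sets (my own sketch, not part of the source text):

```python
def disjoint_union(family):
    """Sum of a family of sets given as {index: set}; each element is
    tagged with its index, so overlapping sets stay distinguishable."""
    return {(i, x) for i, E in family.items() for x in E}

def component(R, i):
    """Inverse of the sum via currying: recover Ei as R⃗(i)."""
    return {x for (j, x) in R if j == i}
```

For a family such as `{0: {1, 2}, 1: {2, 3}}`, the shared element 2 appears twice in the sum, once per tag, and each component set can be recovered from the graph.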
### Direct and inverse images by a graph
The restriction of a graph R to a set A is defined as
R|A = {(x,y)∈R | x∈A} = ∐x∈A R⃗(x)
The direct image of a set A by a graph R is
R(A) = Im R|A = ⋃x∈A R⃗(x) = {y | ∃x∈A, (x,y)∈R} ⊂ Im R.
Dom R ⊂ A ⇔ R|A = R, in which case R(A) = Im R.
R(⋃i∈I Ai) = ⋃i∈I R(Ai)
R(⋂i∈I Ai) ⊂ ⋂i∈I R(Ai)
A ⊂ B ⇒ R(A) ⊂ R(B)
Similarly, the inverse image or preimage of a set B by a graph R is
R⋆(B) = tR(B) = ⋃y∈B R⃖(y) = {x | ∃y∈B, (x,y)∈R} ⊂ Dom R.
### Direct image and preimage by a function
The direct image of a subset A ⊂ Dom f by a function f, denoted f[A] or f(A), is
f[A] = (Gr f)(A) = {f(x) | xA} ⊂ Im f
For any f : E → F and B ⊂ F, the preimage of B by f, written f⋆(B), is defined by
f⋆(B) = (Gr f)⋆(B) = {x∈E | f(x) ∈ B} = ⋃y∈B f⃖(y)
f⃖(y) = f⋆({y})
f⋆(∁F B) = ∁E f⋆(B)
For any family (Bi)i∈I of subsets of F, f⋆(⋂i∈I Bi) = ⋂i∈I f⋆(Bi), where the intersections are respectively interpreted as subsets of F and E.
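The image and preimage operators, and the complement identity above, can be checked on finite sets with a sketch like this (illustrative only, not from the source text):

```python
def image(R, A):        # R(A) = {y | exists x in A with (x,y) in R}
    return {y for (x, y) in R if x in A}

def preimage(R, B):     # tR(B) = {x | exists y in B with (x,y) in R}
    return {x for (x, y) in R if y in B}

def f_image(f, A):      # f[A] = {f(x) | x in A}
    return {f(x) for x in A}

def f_preimage(f, E, B):  # f*(B) = {x in E | f(x) in B}
    return {x for x in E if f(x) in B}
```

With f(x) = x² on E = {0, 1, 2, 3}, one can verify for instance that the preimage of a complement is the complement of the preimage.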
https://casinoonlinewithbonus.com/poker-card-rankings/
Poker card rankings
Understanding Poker Card Rankings: A Comprehensive Guide to Poker Card Rankings
Poker card rankings are a fundamental aspect of the game, determining the value of each hand and ultimately deciding the winner. Whether you are a beginner or an experienced player, having a comprehensive understanding of these rankings is crucial for success at the poker table. In this article, we will provide you with a detailed guide to poker card rankings, covering everything from the basic hierarchy of hands to the nuances of comparing and analyzing different combinations.
Fundamentally, poker card rankings adhere to a distinct order. The highest-ranking hand is the Royal Flush, consisting of the Ace, King, Queen, Jack, and Ten of the same suit.
This is followed by the Straight Flush, which is any five consecutive cards of the same suit. The next highest ranking is Four of a Kind, which is four cards of the same rank, such as four Aces.
Moving down the hierarchy, we have the Full House, which is a combination of three cards of the same rank and a pair of another rank. For example, three Kings and two Queens would constitute a Full House. Following this, we have the Flush, which consists of any five cards of the same suit, not in consecutive order.
The next ranking is the Straight, which is any five consecutive cards, regardless of their suit. It is important to note that in a Straight, the Ace can be used as both the highest and lowest card.
For instance, a Straight can be Ace, 2, 3, 4, 5 or 10, Jack, Queen, King, Ace.
Moving further down the hierarchy, we have Three of a Kind, which is three cards of the same rank, and two unrelated cards. For example, three Jacks and two unrelated cards would be a Three of a Kind. Following this, we have Two Pair, which is two pairs of cards of the same rank, such as two Aces and two Kings.
Next, we have One Pair, which is a single pair of cards of the same rank, with three unrelated cards. For instance, two Queens and three unrelated cards would constitute a One Pair.
Finally, the lowest ranking hand is the High Card, which is when a player’s hand does not fit into any of the above categories. In this case, the highest card in the hand determines its value.
Understanding these basic rankings is essential, but it is equally important to grasp the concept of hand strength and how it can vary depending on the game being played. In some poker variants, such as Texas Hold’em, players are dealt two private cards and must combine them with five community cards to form the best possible hand. In these cases, it is crucial to assess the potential of your hand and make informed decisions based on the community cards.
In summary, it is crucial for any poker player aiming to enhance their game to have a comprehensive knowledge of the hierarchical order of poker cards. By familiarizing yourself with the hierarchy of hands and understanding how to compare and analyze different combinations, you will be better equipped to make strategic decisions at the table. Remember, honing your skills and gaining hands-on experience are crucial to becoming proficient in understanding the hierarchy of poker cards, so go ahead and immerse yourself in games!
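The hierarchy described above can be sketched as a small hand evaluator. The card encoding (ranks 2-14 with 14 for the Ace, arbitrary suit labels) and the category numbering are assumptions of this sketch rather than anything defined by the article; hands compare correctly as tuples, which also covers the kicker comparisons discussed later in the article:

```python
from collections import Counter

CATEGORIES = ["high card", "one pair", "two pair", "three of a kind",
              "straight", "flush", "full house", "four of a kind",
              "straight flush"]

def evaluate(hand):
    """hand: list of 5 (rank, suit) tuples, rank 2..14 (14 = Ace).
    Returns (category_index, tiebreak_ranks); compare results as tuples."""
    ranks = sorted((r for r, _ in hand), reverse=True)
    flush = len({s for _, s in hand}) == 1
    counts = Counter(ranks)
    # group ranks by multiplicity, then by rank, highest first
    groups = sorted(counts.items(), key=lambda rc: (rc[1], rc[0]), reverse=True)
    ordered = tuple(r for r, _ in groups)
    shape = tuple(c for _, c in groups)
    distinct = sorted(counts, reverse=True)
    straight = len(distinct) == 5 and distinct[0] - distinct[4] == 4
    if distinct == [14, 5, 4, 3, 2]:      # the Ace-low straight (A-2-3-4-5)
        straight, high = True, 5
    else:
        high = distinct[0]
    if straight and flush:
        return (8, (high,))               # includes the Royal Flush (high == 14)
    if shape == (4, 1):
        return (7, ordered)
    if shape == (3, 2):
        return (6, ordered)
    if flush:
        return (5, tuple(ranks))
    if straight:
        return (4, (high,))
    if shape == (3, 1, 1):
        return (3, ordered)
    if shape == (2, 2, 1):
        return (2, ordered)
    if shape == (2, 1, 1, 1):
        return (1, ordered)
    return (0, tuple(ranks))
```

For example, a Royal Flush evaluates to category 8 and beats Four of a Kind, and between two pairs of Aces the one with the higher kicker wins the tuple comparison.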
Mastering Poker Card Rankings: Strategies and Tips for Improving Your Poker Game
Mastering poker card rankings is a crucial aspect of improving your poker game and increasing your chances of winning. While understanding the basic hierarchy of hands is important, developing strategies and employing tips can take your gameplay to the next level.
One key strategy to master is starting hand selection. Not all hands are created equal, and knowing which ones to play can significantly impact your success. It is essential to consider factors such as your position at the table, the number of players, and the betting action before you. Starting with strong hands, such as high pairs or suited connectors, can give you an advantage right from the beginning.
Another important aspect of mastering poker card rankings is understanding the concept of pot odds.
This refers to the ratio of the current size of the pot to the cost of a contemplated call. By calculating and comparing the pot odds to the odds of completing your hand, you can make informed decisions about whether to continue in the hand or fold. This skill can help you avoid costly mistakes and maximize your profits in the long run.
Furthermore, observing your opponents and their betting patterns is crucial. Pay attention to how they play their hands and try to identify their tendencies. This information can help you make better decisions and exploit their weaknesses.
For example, if a player consistently bets aggressively with weaker hands, you can adjust your strategy to capitalize on this behavior.
Bluffing is another tactic that can prove effective in mastering the order of poker cards. By representing a stronger hand than you actually have, you can force your opponents to fold and win the pot. However, bluffing should be used selectively and based on the specific dynamics of the game. It is important to consider the board texture, your table image, and the tendencies of your opponents before attempting a bluff.
Besides strategies, there are various suggestions that can enhance your rankings in poker card games. First and foremost, practice and experience are essential.
The more you play, the more familiar you will become with different hands and their rankings. Additionally, studying and analyzing your gameplay can provide valuable insights into areas where you can improve.
Furthermore, managing your bankroll is crucial for long-term success in poker. Set limits on the amount of money you are willing to risk and avoid playing stakes that are beyond your bankroll. This discipline will help you avoid significant losses and ensure that you can continue playing and improving.
In summary, achieving proficiency in poker card rankings necessitates comprehending the hand hierarchy and employing successful tactics. By selecting strong starting hands, understanding pot odds, observing opponents, utilizing bluffing strategically, and following essential tips, you can enhance your poker game and increase your chances of winning. Remember, practice and experience are key, so keep playing, learning, and refining your skills to become a formidable poker player.
Comparing Poker Card Rankings: Analyzing Different Poker Hands and Their Rankings
Comparing poker card rankings is an essential skill for any poker player looking to improve their game. Understanding the hierarchy of different poker hands and their rankings allows players to assess the strength of their own hand and make informed decisions during gameplay.
One important aspect of comparing poker card rankings is understanding the concept of hand strength. While some hands may appear strong, they may actually be weaker when compared to other possible combinations.
For example, a pair of Aces may seem strong initially, but it can be easily beaten by hands such as a flush or a straight.
When assessing the hierarchy of poker cards, it is essential to take into account the scarcity and likelihood of each hand. For instance, a Royal Flush is the highest-ranking hand and is extremely rare to obtain. On the other hand, a High Card hand is the lowest-ranking and the most common. Understanding the odds and probabilities of different hands can help players make strategic decisions during the game.
Understanding the hierarchy within each hand category is another crucial factor when comparing poker card rankings.
For example, within the category of a Full House, the ranking is determined by the value of the three-of-a-kind component. Similarly, within the category of a Flush, the ranking is determined by the highest card in the hand.
Understanding the concept of kickers is essential when comparing the rankings of poker cards. A kicker is an unrelated card that is used to break ties between two hands of the same rank.
For example, if two players have a pair of Aces, the player with the higher kicker card will have the stronger hand.
Furthermore, players should be aware of the different variations of poker and how the rankings may differ in each game. For example, in some variants, such as Omaha Hold’em, players are dealt four private cards instead of two. This can lead to different combinations and rankings compared to traditional Texas Hold’em.
In essence, the ability to compare the rankings of poker cards is an essential skill for every poker player. Understanding the hierarchy of hands, considering the rarity and probability of each hand, recognizing the hierarchy within hand categories, and being aware of kickers are all crucial aspects of comparing rankings. By continuously practicing and honing this skill, players can make more informed decisions, increase their chances of winning, and ultimately become more successful at the poker table.
https://aviation.stackexchange.com/questions/24517/what-is-the-correct-formula-to-calculate-propeller-efficiency/84902#84902
# What is the correct formula to calculate propeller efficiency?
How to calculate in the right way the efficiency of a propeller?
If we know the engine power, speed of the plane and the thrust of its propeller, what is the correct method, (1) or (2), for calculating the efficiency of the propeller?
Assuming that Method 1 is the correct one, it appears that the efficiency of a propeller must satisfy the inequality:
Update:
It looks like Method 1 is correct and Method 2 is wrong, since the page Performance of Propellers (MIT) calculates the efficiency of an ideal propeller and arrives at the same inequality. If a propeller of diameter, d, delivers the thrust, T, while the plane travels at the speed, V, it always has a maximum possible efficiency that can be calculated, is below 1 and cannot be improved. Not even an ideal propeller of diameter, d, has, in general, an efficiency that reaches 100%. The absolute minimum, reference power, is always $TV$ and not something else, and the efficiency is always: $$TV/Power_{absorbed}$$ where the minimum absorbed power is calculated with Froude's Propeller Theory.
Propellers are designed for an optimal tip speed that is a certain multiple of the airspeed they fly in. The angle of the propeller's aerofoil to the propeller's axle changes as you move from the centre to the tip; the more pronounced the blade twist, the higher the intended tip speed.
From memory, having the tip speed running at 1-3 times the airflow speed is not efficient, but at least the propeller works over a wide range of airspeeds.
Highly tuned propeller tips are moving at 6x the air speed. However, their effectiveness at moving air disappears rapidly if the air flow is too slow. (The propeller blades are stalled)
The last thing you want on a propeller plane is an efficient propeller.
I wasn't really able to follow the hyperlink. For one thing, it seems to suggest that the propeller efficiency just before takeoff is zero. I find that approach worthless.
Usually, I have found that if I need to know for sure if something it correct, at some point I will have to derive it myself, so it will usually take less time to do that then to search for what will be a non-authoritative answer anyway.
I would think a better way to handle this question would be purely from an input and output perspective, not a microscopic blade perspective. It shouldn't matter if there are 10 blades shaped like a Christmas tree. The only things you need to measure for efficiency ratios should be inputs and outputs. That's the whole idea behind such numbers.
Here, the input is power. The output is clearly thrust. So, a propeller assembly's efficiency should be the actual thrust as compared to a perfect magical conversion of power to thrust in which there was no wasted energy going elsewhere and not into thrust.
These ideas in mind, we start with $$T_0={d\over dt}mv=\dot{m}\Delta v,$$ where $$T_0$$ is the thrust of a perfect fan assembly, $$m$$ is the mass of the fluid passing through the swept area of the fan assembly, and $$\Delta v=v_{\rm out}-v_{\rm plane}$$ is the average difference in speed between the fluid exiting the fan and rate that the fan itself is moving through the fluid in the rest frame of the fluid.
The use of the word average here is intended to allow for a "spherical chicken," such that the effects of turbulent flow outside of the fan swept area affects the efficiency, but can cleverly be ignored in our actual calculations of efficiencies themselves. However, one could visualize the fan assembly to be a ducted fan with a constant cross-sectional area, where the air aft of the assembly could be of a higher mass density than the partial relative vacuum fore of it, just as in some automobile air intakes at high enough rpm. This way it is clear that the exhaust velocity isn't any speeds of airstreams near blades, which could be higher, but, rather, is the average velocity of all air just after the fan assembly.
Continuing, in the reference frame of the plane we have an increased kinetic power of the fluid due to the fan of $$P={\dot{m}\over2}(v_{\rm out}^2-v_{\rm plane}^2).$$ Mass flux analysis yields $$\dot{m}=\rho A v_{\rm out}$$ where $$\rho$$ is the mass density of the exhaust fluid excluding any fuels. This gives $$P={\rho A v_{\rm out}\over 2}(v^2_{\rm out}-v^2_{\rm plane}).$$
This is a cubic equation. They generally have one real solution and a few imaginary ones. However, we can use a trick from relativity, with $$\beta\equiv v_{\rm plane}/v_{\rm out}$$, yielding $$v_{\rm out}=\left[{2P\over \rho A(1-\beta^2)}\right]^{1/3}$$ and an ideal thrust of $$T_0=\rho A v_{\rm out}(v_{\rm out}-v_{\rm plane})$$ $$=\left[4P^2\rho A{1-\beta\over(1+\beta)^2}\right]^{1/3}.$$ Using $$A=\pi(D/2)^2$$, this is $$T_0=\left[\pi \rho P^2 D^2{1-\beta\over(1+\beta)^2}\right]^{1/3}$$ with $$\eta_{\rm P}=T/T_0$$ where $$T$$ is the observed thrust of a fan assembly using a power $$P$$.
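As a numerical sketch of the model above (my own addition, not part of the original answer): since $$\beta = v_{\rm plane}/v_{\rm out}$$ involves the unknown exhaust velocity, it is simplest to solve the underlying cubic $$P = (\rho A/2)\,v_{\rm out}(v_{\rm out}^2 - v_{\rm plane}^2)$$ for $$v_{\rm out}$$ directly, here by bisection, and then form $$T_0 = \rho A v_{\rm out}(v_{\rm out} - v_{\rm plane})$$. The sea-level density default is an assumption.

```python
import math

def ideal_thrust(P, D, v_plane, rho=1.225):
    """Ideal (loss-free) thrust for shaft power P (W), disk diameter D (m)
    and flight speed v_plane (m/s). Solves v^3 - v_plane^2 * v = 2P/(rho*A)
    for the exhaust velocity v_out > v_plane by bisection."""
    A = math.pi * (D / 2) ** 2
    C = 2 * P / (rho * A)
    f = lambda v: v ** 3 - v_plane ** 2 * v - C   # increasing for v >= v_plane
    lo, hi = v_plane, v_plane + 1.0
    while f(hi) < 0:          # grow the bracket until it contains the root
        hi *= 2
    for _ in range(100):      # plain bisection
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    v_out = (lo + hi) / 2
    return rho * A * v_out * (v_out - v_plane)

def prop_efficiency(T, P, D, v_plane, rho=1.225):
    """Efficiency of a real fan producing thrust T from power P."""
    return T / ideal_thrust(P, D, v_plane, rho)
```

In the static case (v_plane = 0) this reduces to the closed form $$T_0 = (4P^2\rho A)^{1/3}$$, consistent with the boxed formula at $$\beta = 0$$.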
The dependence upon $$\beta$$ alone, as if the power were fixed, is shown in a plot (not reproduced here).
I think this is "How to calculate in the right way the efficiency of a propeller."
To check if this analysis is reasonable, we can compute the efficiency of one of the first propellers. I did that in the answer to Why does this calculation show Gustave Whitehead's propellers were more than 100% efficient? and arrived at 81±13%, which seems to me to be a reasonable efficiency.
• The zero efficiency might seem worthless indeed, but uses an approach which is not valid for the static case. It produces a simplified formula which works reasonably well for flight conditions. How the static formula is derived can be found here Commented Mar 16, 2021 at 21:02
https://example.ng/9-examples-of-solutions/
# 9 Examples of Solutions
Date:
## What is a Solution?
A solution is a homogeneous mixture of two or more components in which the particle size is smaller than 1 nm. It has two components: the solvent and the solute.
## What is a solvent?
A solvent is a component that dissolves other components. These solvents are in the form of liquids like water.
## What is a solute
A solute is a substance that dissolves in a solvent; for example, the salt in salt water.
## Properties of solutions
The different properties of a Solution are listed below :
• The particles are invisible to human eyes.
• The components of a mixture can not be separated using filtration.
• It is a homogenous mixture
• The particles are minute, with a diameter of less than 1 nm
• The solutes are inseparable from the mixture.
## The concentration of a solution
A concentration is the amount of solute contained in a given solution. The amount of solute and solvent in a given solution is never the same. A solution can either be concentrated, diluted or saturated; depending on the amount of solute present.
Mathematically,
Concentration of solution = Amount of solute / Amount of solution
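As a minimal sketch of this formula (assuming "amount" means mass measured in any consistent unit, and that the amount of solution is solute plus solvent):

```python
def concentration(solute_amount, solvent_amount):
    """Mass fraction of solute in the whole solution.
    The solution's amount is the solute plus the solvent."""
    solution_amount = solute_amount + solvent_amount
    return solute_amount / solution_amount
```

For example, 25 g of salt dissolved in 75 g of water gives a concentration of 0.25, i.e. 25% by mass.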
## Examples of solutions
Here we are going to consider nine(9) examples of solutions.
• Saturated solution
• Supersaturated solution
• Aqueous solution
• Concentrated solution
• Isotonic solution
• Unsaturated solution
• Dilute solution
• Hypertonic solution
• Hypotonic solution
## Saturated solution
A saturated solution is a solution in which no more solute can dissolve in the solvent at a defined temperature; any further solute that is added remains undissolved.
## Supersaturated solution
A supersaturated solution is a solution which contains an excess amount of solute in a solvent. The dissolution of the solute is carried out at high temperatures or pressure. In the end, the solution always leaves crystals of solute at the bottom of the container through a process called precipitation.
This kind of solution demands special conditions for it to occur. It is advantageous in the sense that the solution is subjected to heat to increase solubility for more solute to be added.
## Aqueous solution
The aqueous solution is a solution with a quantity of water in it, e.g is that of salt in water.
## Concentrated Solution
A concentrated solution is a solution with large amounts of solute in a given quantity of solvent example orange juice. It is otherwise referred to as a solution containing a maximum amount of solute that can be solved in a solvent. Since solubility is dependent on temperature, a solution that is concentrated at a particular defined temperature may not be concentrated at a higher temperature.
## Isotonic solution
An isotonic solution is a solution that has the same solute concentration as another solution, for example the fluid inside a cell. Because the concentrations are equal, water (the solvent) moves across the cell membrane in both directions at the same rate, so there is no net flow.
## Unsaturated solution
This is a type of solution that still allows for the addition of more solute at a defined temperature. Here, the concentration of solute is lower than equilibrium solubility, as such all the solute dissolves completely without leaving crystals.
Example :
• 0.01M hydrochloric acid is an unsaturated solution of hydrochloric acid in water
• Mist is an unsaturated (but close to saturated) solution of water vapour in the air.
• Adding a spoonful of sugar to a cup of hot coffee produces an unsaturated sugar solution.
• Vinegar is an unsaturated solution of acetic acid in water.
## Dilute solution
A dilute solution is a solution in which the concentration of solute is decreased by adding more solvent, without adding further solute, and then mixing thoroughly so that no traces of undissolved solute remain and every part of the solution looks identical.
## Hypertonic solution
A hypertonic solution is a solution that has a higher concentration of solutes than another solution. The comparison of solute concentrations across a cell membrane, as in plant cells, is termed tonicity: a solution is hypertonic relative to the solution on the opposite side of the membrane.
When a plant cell is placed in a hypertonic solution, its flexible cell membrane pulls away from the rigid cell wall. The cell takes on a pincushion-like appearance, and the plasmodesmata are constricted until they nearly stop functioning.
## Hypotonic solution
In this type of solution, the concentration of solutes is lower than in another solution; it is the opposite of a hypertonic solution. From a biological point of view, a solution outside a cell with a lower solute concentration than the cell's interior is referred to as a hypotonic solution.
Author: Igbaji Ugabi Chinwendu (IGBAJI U.C.), CEO and Director at Freemanbiz Communication and Writers King LTD (https://igbajiugabi.com)
Source: https://physics.stackexchange.com/questions/583091/does-the-energy-of-a-sound-wave-depend-on-frequency
# Does the energy of a sound wave depend on frequency?
The energy of electromagnetic waves is said to be dependent on frequency. Is the energy of a sound wave also dependent on frequency?
• Hi! Look here: en.wikipedia.org/wiki/Sound_energy Oct 1, 2020 at 7:17
• When you say "the energy of electromagnetic waves is said to be dependent on frequency" are you sure you aren't confusing the energy in a photon with the energy in the wave (composed of many photons)? Oct 1, 2020 at 16:54
## 1 Answer
First let's think what we mean by the energy of a wave. It can be defined as the amount of energy that an emitter has to give up, or a receiver take in, in order to emit or absorb the wave. If the wave is being emitted or absorbed continuously then we could talk about the energy crossing any given plane per unit time, or else the energy per unit length of the wave.
In the case of electromagnetic waves, the primary equation for this quantity is $${\bf S} = {\bf E} \times {\bf H}$$ This is called the Poynting vector and it gives the flux, which is the amount of energy per unit area per unit time flowing past a plane at right angles to the vector. Notice that there is no mention of frequency in this formula. This means that two waves having fields $$\bf E$$ and $$\bf H$$ of the same amplitude will have the same energy flux, no matter what their frequencies may be.
Now in the question it said that energy was related to frequency. I think this is a reference to the formula $$E_p = h f$$ where $$E_p$$ is energy, $$f$$ is frequency and $$h$$ is Planck's constant. This is not a formula for the energy "in the wave"; it is a formula for how much of the energy in the wave is assigned to each photon. I put a subscript $$p$$ to act as a reminder of this (and to make sure we don't muddle it with electric field). It follows that the number of photons crossing a plane, per unit area and per unit time, is $$N = \frac{\bf S}{E_p} = \frac{ {\bf E} \times {\bf H} }{h f}.$$
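To put rough numbers on $$N = S/E_p$$, here is a small sketch (not part of the original answer; the 1 mW beam at 633 nm is an assumed illustrative example):

```python
h = 6.626e-34   # Planck's constant, J s
c = 3.0e8       # speed of light, m/s

def photons_per_second(power_watts, wavelength_m):
    """Photons per second = power / (energy per photon, E_p = h f)."""
    f = c / wavelength_m       # frequency of the wave
    e_photon = h * f           # energy carried by each photon
    return power_watts / e_photon

# Illustrative: a 1 mW beam at 633 nm carries roughly 3 x 10^15 photons per second
n = photons_per_second(1e-3, 633e-9)
print(f"{n:.2e}")
```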
Coming now to sound waves, the energy is to do with the kinetic energy and potential energy of the matter which is transmitting the wave. As the matter particles move to and fro, they have kinetic energy, and the restoring forces on them (pressure or tension) give rise to potential energy. The result is that the energy flux is proportional to the square of the amplitude. The energy per unit volume is $$\frac{p^2}{2 \rho_0 v_s^2} + \frac{1}{2} \rho v^2$$ where $$p$$ is pressure, $$\rho$$ is density, $$v_s$$ is the speed of sound and $$v$$ is the speed of the movement in the medium. Multiply this formula by the speed of sound in order to get energy flux.
Now the question was, does this depend on frequency? The answer is that it may do, but this depends also on other things. If the amplitude of the wave displacements is fixed and the frequency is increased, then the kinetic energy of the particle motion will be increased, and therefore so will the energy flux. On the other hand, if the amplitude of the speed oscillations of the vibrating thing producing the wave stays fixed as the frequency changes, then the kinetic energy will not change and neither will the energy flux. In that case the position amplitude goes down as the frequency goes up and the two effects cancel.
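The energy-density formula above can be evaluated directly, and notice that frequency never enters: only the amplitudes do. A sketch (the 0.2 Pa pressure amplitude and standard air properties are assumed illustrative values, and the plane-wave relation $$v = p/(\rho v_s)$$ is used to pick a matching velocity):

```python
def sound_energy_density(p, v, rho=1.2, v_s=343.0):
    """Acoustic energy per unit volume:
    potential term p^2 / (2 rho v_s^2) plus kinetic term rho v^2 / 2."""
    potential = p**2 / (2 * rho * v_s**2)
    kinetic = 0.5 * rho * v**2
    return potential + kinetic

def energy_flux(p, v, rho=1.2, v_s=343.0):
    """Energy flux = energy density times the speed of sound."""
    return sound_energy_density(p, v, rho, v_s) * v_s

# Illustrative: p = 0.2 Pa, with v chosen by the plane-wave relation
p = 0.2
v = p / (1.2 * 343.0)
print(sound_energy_density(p, v))  # J/m^3; the two terms are equal for a plane wave
print(energy_flux(p, v))           # W/m^2
```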
Finally, in quantum physics we can associate particles with sound waves. The particles are called phonons. Each phonon has an energy related to its frequency by $$E_p = h f$$. That formula is pretty much universal for all types of wave motion.
Source: http://forum.bodybuilding.com/printthread.php?t=2661631&pp=30&page=1
# Finding distances with a compass
• 05-09-2007, 05:32 PM
DTRG
Finding distances with a compass
i just found this compass/clinometer, the suunto tandem. im reading the manual and it has this on finding distances using the compass:
You measure a hill at 0*(magnetic north). The angle between the curve of the road and the hill is 64*. The angle between the curve of the road and the oil derrick is 15*. A line is drawn at a 90* angle to the 64* bearing line from the curve of the road toward the oil derrick bearing line. The distance, as measured on the chart on the compass is 1 mile. Then your position is cot 15* x 1 mile(1.6km) = the distance along the corrected bearing line of 64*.
uhh, what? i just finished trig class so i know the basics of this stuff but im confused as to how you can find a distance without knowing any of the other sides. Because, i can measure a 15* angle from here to my computer, and its not 3.7 miles.
heres a link to the manual (page 10)
[url]http://www.suunto.com/media/suunto/manuals/multilangual/tandem_users_guide_36b13.pdf[/url]
• 05-09-2007, 07:58 PM
resurrected
[QUOTE=DTRG;41582391]i just found this compass/clinometer, the suunto tandem. im reading the manual and it has this on finding distances using the compass: ...[/QUOTE]
My son had to learn this stuff in the military.
He tried to teach me it, but of course I go by sight instead of compasses.
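For what it's worth, the manual's arithmetic does check out once you see that the 1-mile measurement on the chart supplies the missing side of the triangle: it is the side adjacent to the 15-degree angle, and cot 15 degrees scales it up. A quick check in Python (a sketch of the manual's numbers, nothing more):

```python
import math

baseline_miles = 1.0   # side measured on the chart, drawn at 90 degrees to the 64-degree line
angle_deg = 15.0       # bearing angle to the oil derrick

# distance along the corrected 64-degree bearing line = cot(15 deg) * baseline
distance = baseline_miles / math.tan(math.radians(angle_deg))
print(round(distance, 2))  # 3.73 miles -- the "3.7 miles" mentioned in the thread
```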
Source: https://studysoup.com/tsg/15174/conceptual-physics-12-edition-chapter-14-problem-27e
# Solution for Problem 27E, Chapter 14

Conceptual Physics | 12th Edition (ISBN: 9780321909107)

Problem 27E
How does the concept of buoyancy complicate the old question “Which weighs more, a pound of lead or a pound of feathers”?
Step-by-Step Solution:
Solution 27E

Step 1: The buoyant force is an upward force exerted on a body submerged in a fluid, here the surrounding air. The weight a scale reads is therefore the gravitational force minus the buoyant force; the buoyant force acts opposite to gravity.

Step 2 of 2: A pound of feathers occupies a far larger volume than a pound of lead, so it displaces more air and experiences a larger buoyant force. If both read exactly one pound on a scale in air, the feathers must actually have the slightly greater mass, and hence the greater true weight.
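As a numerical sketch of why this matters (the densities below are standard rough values chosen for illustration; this is not part of the textbook's solution):

```python
rho_air = 1.2        # kg/m^3, density of air
rho_lead = 11340.0   # kg/m^3
rho_feathers = 80.0  # kg/m^3, loose feathers (rough figure)
g = 9.81             # m/s^2
mass = 0.4536        # kg, one pound

def buoyant_force(mass_kg, rho_body):
    """Weight of the air displaced by a body of the given mass and density."""
    volume = mass_kg / rho_body
    return rho_air * volume * g

f_lead = buoyant_force(mass, rho_lead)
f_feathers = buoyant_force(mass, rho_feathers)
print(f_lead, f_feathers)  # the feathers feel a buoyant force over 100x larger
```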
Source: https://www.enotes.com/homework-help/write-two-column-proof-given-ll-b-c-b-d-prove-c-ll-441615
# Write a two-column proof. Given: a ∥ b, a ⊥ c, and b ⊥ d. Prove: c ∥ d.
justaguide | Certified Educator
It is given that lines a and b are parallel. If the slope of line a is S, the slope of line b is also S.
Line c is perpendicular to a. The product of the slope of perpendicular lines is -1. The slope of line c is S' = -1/S. Similarly, as the line b is perpendicular to line d, the slope of line d is also S' = -1/S.
Two lines with the same slope are parallel. This is the case with lines c and d, hence they are parallel.
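The slope argument is easy to check numerically; a sketch with an arbitrary sample slope S = 2 (any nonzero S works the same way):

```python
S = 2.0                # common slope of the parallel lines a and b
slope_c = -1.0 / S     # c is perpendicular to a, so the slopes multiply to -1
slope_d = -1.0 / S     # d is perpendicular to b, for the same reason

assert slope_c * S == -1.0   # perpendicularity check
assert slope_c == slope_d    # equal slopes, so c is parallel to d
print("c and d have equal slopes, hence c ll d")
```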
llltkl | Student
Introduce angles 1, 2, 3 and 4 in the diagram such that angles 1 and 4 are right angles (see the attached image).
| Statements | Reasons |
| --- | --- |
| 1. a ∥ b | Given |
| 2. ∠1 = ∠2 | If two lines are parallel, then their alternate interior angles are congruent. |
| 3. ∠1 = 90° | Given |
| 4. ∠2 = 90° | Transitive property |
| 5. ∠4 = 90° | Given |
| 6. ∠3 = 180° − 90° = 90° | Linear pair of adjacent angles are supplementary. |
| 7. ∠2 = ∠3 | All right angles are equal. |
| 8. c ∥ d | If a pair of alternate interior angles are congruent, then the lines are parallel. |
Source: http://www.britannica.com/print/topic/477493
# probability and statistics
probability and statistics, the branches of mathematics concerned with the laws governing random events, including the collection, analysis, interpretation, and display of numerical data. Probability has its origin in the study of gambling and insurance in the 17th century, and it is now an indispensable tool of both social and natural sciences. Statistics may be said to have its origin in census counts taken thousands of years ago; as a distinct scientific discipline, however, it was developed in the early 19th century as the study of populations, economies, and moral actions and later in that century as the mathematical tool for analyzing such numbers. For technical information on these subjects, see probability theory and statistics.
## Games of chance
The modern mathematics of chance is usually dated to a correspondence between the French mathematicians Pierre de Fermat and Blaise Pascal in 1654. Their inspiration came from a problem about games of chance, proposed by a remarkably philosophical gambler, the chevalier de Méré. De Méré inquired about the proper division of the stakes when a game of chance is interrupted. Suppose two players, A and B, are playing a three-point game, each having wagered 32 pistoles, and are interrupted after A has two points and B has one. How much should each receive?
Fermat and Pascal proposed somewhat different solutions, though they agreed about the numerical answer. Each undertook to define a set of equal or symmetrical cases, then to answer the problem by comparing the number for A with that for B. Fermat, however, gave his answer in terms of the chances, or probabilities. He reasoned that two more games would suffice in any case to determine a victory. There are four possible outcomes, each equally likely in a fair game of chance. A might win twice, AA; or first A then B might win; or B then A; or BB. Of these four sequences, only the last would result in a victory for B. Thus, the odds for A are 3:1, implying a distribution of 48 pistoles for A and 16 pistoles for B.
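Fermat's enumeration can be reproduced by brute force; a sketch (the 64-pistole total stake is from the text):

```python
from itertools import product

# The four equally likely ways the next two games could go: AA, AB, BA, BB.
outcomes = list(product("AB", repeat=2))

# A needs only one more point, so A wins every sequence containing an A;
# B wins only the sequence BB.
a_wins = sum(1 for seq in outcomes if "A" in seq)
print(a_wins, len(outcomes) - a_wins)     # 3 1  -> odds of 3:1 for A
print(64 * a_wins // len(outcomes))       # 48   -> A's share of the 64 pistoles
```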
Pascal thought Fermat’s solution unwieldy, and he proposed to solve the problem not in terms of chances but in terms of the quantity now called “expectation.” Suppose B had already won the next round. In that case, the positions of A and B would be equal, each having won two games, and each would be entitled to 32 pistoles. A should receive his portion in any case. B’s 32, by contrast, depend on the assumption that he had won the first round. This first round can now be treated as a fair game for this stake of 32 pistoles, so that each player has an expectation of 16. Hence A’s lot is 32 + 16, or 48, and B’s is just 16.
Games of chance such as this one provided model problems for the theory of chances during its early period, and indeed they remain staples of the textbooks. A posthumous work of 1665 by Pascal on the “arithmetic triangle” now linked to his name (see binomial theorem) showed how to calculate numbers of combinations and how to group them to solve elementary gambling problems. Fermat and Pascal were not the first to give mathematical solutions to problems such as these. More than a century earlier, the Italian mathematician, physician, and gambler Girolamo Cardano calculated odds for games of luck by counting up equally probable cases. His little book, however, was not published until 1663, by which time the elements of the theory of chances were already well known to mathematicians in Europe. It will never be known what would have happened had Cardano published in the 1520s. It cannot be assumed that probability theory would have taken off in the 16th century. When it began to flourish, it did so in the context of the “new science” of the 17th-century scientific revolution, when the use of calculation to solve tricky problems had gained a new credibility. Cardano, moreover, had no great faith in his own calculations of gambling odds, since he believed also in luck, particularly in his own. In the Renaissance world of monstrosities, marvels, and similitudes, chance—allied to fate—was not readily naturalized, and sober calculation had its limits.
## Risks, expectations, and fair contracts
In the 17th century, Pascal’s strategy for solving problems of chance became the standard one. It was, for example, used by the Dutch mathematician Christiaan Huygens in his short treatise on games of chance, published in 1657. Huygens refused to define equality of chances as a fundamental presumption of a fair game but derived it instead from what he saw as a more basic notion of an equal exchange. Most questions of probability in the 17th century were solved, as Pascal solved his, by redefining the problem in terms of a series of games in which all players have equal expectations. The new theory of chances was not, in fact, simply about gambling but also about the legal notion of a fair contract. A fair contract implied equality of expectations, which served as the fundamental notion in these calculations. Measures of chance or probability were derived secondarily from these expectations.
Probability was tied up with questions of law and exchange in one other crucial respect. Chance and risk, in aleatory contracts, provided a justification for lending at interest, and hence a way of avoiding Christian prohibitions against usury. Lenders, the argument went, were like investors; having shared the risk, they deserved also to share in the gain. For this reason, ideas of chance had already been incorporated in a loose, largely nonmathematical way into theories of banking and marine insurance. From about 1670, initially in the Netherlands, probability began to be used to determine the proper rates at which to sell annuities. Jan de Wit, leader of the Netherlands from 1653 to 1672, corresponded in the 1660s with Huygens, and eventually he published a small treatise on the subject of annuities in 1671.
Annuities in early modern Europe were often issued by states to raise money, especially in times of war. They were generally sold according to a simple formula such as “seven years purchase,” meaning that the annual payment to the annuitant, promised until the time of his or her death, would be one-seventh of the principal. This formula took no account of age at the time the annuity was purchased. Wit lacked data on mortality rates at different ages, but he understood that the proper charge for an annuity depended on the number of years that the purchaser could be expected to live and on the presumed rate of interest. Despite his efforts and those of other mathematicians, it remained rare even in the 18th century for rulers to pay much heed to such quantitative considerations. Life insurance, too, was connected only loosely to probability calculations and mortality records, though statistical data on death became increasingly available in the course of the 18th century. The first insurance society to price its policies on the basis of probability calculations was the Equitable, founded in London in 1762.
## Probability as the logic of uncertainty
The English clergyman Joseph Butler, in his very influential Analogy of Religion (1736), called probability “the very guide of life.” The phrase, however, did not refer to mathematical calculation but merely to the judgments made where rational demonstration is impossible. The word probability was used in relation to the mathematics of chance in 1662 in the Logic of Port-Royal, written by Pascal’s fellow Jansenists, Antoine Arnauld and Pierre Nicole. But from medieval times to the 18th century and even into the 19th, a probable belief was most often merely one that seemed plausible, came on good authority, or was worthy of approval. Probability, in this sense, was emphasized in England and France from the late 17th century as an answer to skepticism. Man may not be able to attain perfect knowledge but can know enough to make decisions about the problems of daily life. The new experimental natural philosophy of the later 17th century was associated with this more modest ambition, one that did not insist on logical proof.
Almost from the beginning, however, the new mathematics of chance was invoked to suggest that decisions could after all be made more rigorous. Pascal invoked it in the most famous chapter of his Pensées, “Of the Necessity of the Wager,” in relation to the most important decision of all, whether to accept the Christian faith. One cannot know of God’s existence with absolute certainty; there is no alternative but to bet (“il faut parier”). Perhaps, he supposed, the unbeliever can be persuaded by consideration of self-interest. If there is a God (Pascal assumed he must be the Christian God), then to believe in him offers the prospect of an infinite reward for infinite time. However small the probability, provided only that it be finite, the mathematical expectation of this wager is infinite. For so great a benefit, one sacrifices rather little, perhaps a few paltry pleasures during one’s brief life on Earth. It seemed plain which was the more reasonable choice.
The link between the doctrine of chance and religion remained an important one through much of the 18th century, especially in Britain. Another argument for belief in God relied on a probabilistic natural theology. The classic instance is a paper read by John Arbuthnot to the Royal Society of London in 1710 and published in its Philosophical Transactions in 1712. Arbuthnot presented there a table of christenings in London from 1629 to 1710. He observed that in every year there was a slight excess of male over female births. The proportion, approximately 14 boys for every 13 girls, was perfectly calculated, given the greater dangers to which young men are exposed in their search for food, to bring the sexes to an equality of numbers at the age of marriage. Could this excellent result have been produced by chance alone? Arbuthnot thought not, and he deployed a probability calculation to demonstrate the point. The probability that male births would by accident exceed female ones in 82 consecutive years is (0.5)^82. Considering further that this excess is found all over the world, he said, and within fixed limits of variation, the chance becomes almost infinitely small. This argument for the overwhelming probability of Divine Providence was repeated by many, and refined by a few. The Dutch natural philosopher Willem ’sGravesande incorporated the limits of variation of these birth ratios into his mathematics and so attained a still more decisive vindication of Providence over chance. Nicolas Bernoulli, from the famous Swiss mathematical family, gave a more skeptical view. If the underlying probability of a male birth was assumed to be 0.5169 rather than 0.5, the data were quite in accord with probability theory. That is, no Providential direction was required.
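Arbuthnot's chance is indeed minute; a one-line check (sketch):

```python
p_chance = 0.5 ** 82   # probability that male births exceed female in 82 straight years
print(p_chance)        # about 2.1e-25
```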
Apart from natural theology, probability came to be seen during the 18th-century Enlightenment as a mathematical version of sound reasoning. In 1677 the German mathematician Gottfried Wilhelm Leibniz imagined a utopian world in which disagreements would be met by this challenge: “Let us calculate, Sir.” The French mathematician Pierre-Simon de Laplace, in the early 19th century, called probability “good sense reduced to calculation.” This ambition, bold enough, was not quite so scientific as it may first appear. For there were some cases where a straightforward application of probability mathematics led to results that seemed to defy rationality. One example, proposed by Nicolas Bernoulli and made famous as the St. Petersburg paradox, involved a bet with an exponentially increasing payoff. A fair coin is to be tossed until the first time it comes up heads. If it comes up heads on the first toss, the payment is 2 ducats; if the first time it comes up heads is on the second toss, 4 ducats; and if on the nth toss, 2^n ducats. The mathematical expectation of this game is infinite, but no sensible person would pay a very large sum for the privilege of receiving the payoff from it. The disaccord between calculation and reasonableness created a problem, addressed by generations of mathematicians. Prominent among them was Nicolas’s cousin Daniel Bernoulli, whose solution depended on the idea that a ducat added to the wealth of a rich man benefits him much less than it does a poor man (a concept now known as decreasing marginal utility; see utility and value: Theories of utility).
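Both halves of the paradox can be seen in a short simulation: the sample mean of the payoff refuses to settle down, while the expected logarithm of the payoff (a stand-in for Daniel Bernoulli's decreasing-marginal-utility idea) converges to a finite value. A sketch:

```python
import math
import random

random.seed(1)

def petersburg_payoff():
    """Toss a fair coin until heads; pay 2^n ducats if the first heads is on toss n."""
    n = 1
    while random.random() < 0.5:   # tails: keep tossing
        n += 1
    return 2 ** n

sims = [petersburg_payoff() for _ in range(100_000)]
print(sum(sims) / len(sims))   # sample mean; keeps drifting upward with more trials

# Expected log-payoff converges: sum over n of (1/2)^n * log(2^n) = 2 log 2
expected_log = sum((0.5 ** n) * math.log(2 ** n) for n in range(1, 200))
print(expected_log)            # about 1.386
```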
Probability arguments figured also in more practical discussions, such as debates during the 1750s and ’60s about the rationality of smallpox inoculation. Smallpox was at this time widespread and deadly, infecting most and carrying off perhaps one in seven Europeans. Inoculation in these days involved the actual transmission of smallpox, not the cowpox vaccines developed in the 1790s by the English surgeon Edward Jenner, and was itself moderately risky. Was it rational to accept a small probability of an almost immediate death to reduce greatly a large probability of death by smallpox in the indefinite future? Calculations of mathematical expectation, as by Daniel Bernoulli, led unambiguously to a favourable answer. But some disagreed, most famously the eminent mathematician and perpetual thorn in the flesh of probability theorists, the French mathematician Jean Le Rond d’Alembert. One might, he argued, reasonably prefer a greater assurance of surviving in the near term to improved prospects late in life.
## The probability of causes
[Image caption: Swiss commemorative stamp of mathematician Jakob Bernoulli, issued 1994, displaying the formula and the graph for the law of large numbers, first proved by Bernoulli in 1713.]

Many 18th-century ambitions for probability theory, including Arbuthnot’s, involved reasoning from effects to causes. Jakob Bernoulli, uncle of Nicolas and Daniel, formulated and proved a law of large numbers to give formal structure to such reasoning. This was published in 1713 from a manuscript, the Ars conjectandi, left behind at his death in 1705. There he showed that the observed proportion of, say, tosses of heads or of male births will converge as the number of trials increases to the true probability p, supposing that it is uniform. His theorem was designed to give assurance that when p is not known in advance, it can properly be inferred by someone with sufficient experience. He thought of disease and the weather as in some way like drawings from an urn. At bottom they are deterministic, but since one cannot know the causes in sufficient detail, one must be content to investigate the probabilities of events under specified conditions.
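Bernoulli's convergence can be watched directly in a simulation of draws from an urn (a sketch; the "true" probability 0.3 is an arbitrary illustrative choice):

```python
import random

random.seed(0)
p_true = 0.3   # hypothetical underlying probability, unknown to the observer

for trials in (100, 10_000, 1_000_000):
    successes = sum(random.random() < p_true for _ in range(trials))
    print(trials, successes / trials)   # observed proportion approaches p_true
```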
The English physician and philosopher David Hartley announced in his Observations on Man (1749) that a certain “ingenious Friend” had shown him a solution of the “inverse problem” of reasoning from the occurrence of an event p times and its failure q times to the “original Ratio” of causes. But Hartley named no names, and the first publication of the formula he promised occurred in 1763 in a posthumous paper of Thomas Bayes, communicated to the Royal Society by the British philosopher Richard Price. This has come to be known as Bayes’s theorem. But it was the French, especially Laplace, who put the theorem to work as a calculus of induction, and it appears that Laplace’s publication of the same mathematical result in 1774 was entirely independent. The result was perhaps more consequential in theory than in practice. An exemplary application was Laplace’s probability that the sun will come up tomorrow, based on 6,000 years or so of experience in which it has come up every day.
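Laplace's sunrise figure comes from what is now called his rule of succession, P = (s + 1)/(n + 2) after s successes in n trials; a sketch of the calculation (the day count is the rough 6,000-year figure from the text):

```python
def rule_of_succession(successes, trials):
    """Laplace's probability that the next trial succeeds,
    given `successes` successes in `trials` trials: (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

days = 6000 * 365          # roughly 6,000 years of uninterrupted sunrises
p_sunrise = rule_of_succession(days, days)
print(p_sunrise)           # extremely close to 1
print(1 - p_sunrise)       # chance of no sunrise tomorrow, about 4.6e-7
```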
Laplace and his more politically engaged fellow mathematicians, most notably Marie-Jean-Antoine-Nicolas de Caritat, marquis de Condorcet, hoped to make probability into the foundation of the moral sciences. This took the form principally of judicial and electoral probabilities, addressing thereby some of the central concerns of the Enlightenment philosophers and critics. Justice and elections were, for the French mathematicians, formally similar. In each, a crucial question was how to raise the probability that a jury or an electorate would decide correctly. One element involved testimonies, a classic topic of probability theory. In 1699 the British mathematician John Craig used probability to vindicate the truth of scripture and, more idiosyncratically, to forecast the end of time, when, due to the gradual attrition of truth through successive testimonies, the Christian religion would become no longer probable. The Scottish philosopher David Hume, more skeptically, argued in probabilistic but nonmathematical language beginning in 1748 that the testimonies supporting miracles were automatically suspect, deriving as they generally did from uneducated persons, lovers of the marvelous. Miracles, moreover, being violations of laws of nature, had such a low a priori probability that even excellent testimony could not make them probable. Condorcet also wrote on the probability of miracles, or at least faits extraordinaires, to the end of subduing the irrational. But he took a more sustained interest in testimonies at trials, proposing to weigh the credibility of the statements of any particular witness by considering the proportion of times that he had told the truth in the past, and then use inverse probabilities to combine the testimonies of several witnesses.
Laplace and Condorcet applied probability also to judgments. In contrast to English juries, French juries voted whether to convict or acquit without formal deliberations. The probabilists began by supposing that the jurors were independent and that each had a probability p greater than 1/2 of reaching a true verdict. There would be no injustice, Condorcet argued, in exposing innocent defendants to a risk of conviction equal to risks they voluntarily assume without fear, such as crossing the English Channel from Dover to Calais. Using this number and considering also the interest of the state in minimizing the number of guilty who go free, it was possible to calculate an optimal jury size and the majority required to convict. This tradition of judicial probabilities lasted into the 1830s, when Laplace’s student Siméon-Denis Poisson used the new statistics of criminal justice to measure some of the parameters. But by this time the whole enterprise had come to seem gravely doubtful, in France and elsewhere. In 1843 the English philosopher John Stuart Mill called it “the opprobrium of mathematics,” arguing that one should seek more reliable knowledge rather than waste time on calculations that merely rearrange ignorance.
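The core of the jury calculation is a binomial tail sum: assuming n independent jurors, each correct with probability p, the chance that a strict majority votes for the true verdict grows with n whenever p > 1/2. A Python sketch of just this computation (the historical models also weighed jury size against required majorities; those refinements are omitted):

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent jurors,
    each correct with probability p, votes for the true verdict."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p > 1/2, larger juries are more reliable (Condorcet's jury theorem).
for n in (1, 3, 9, 21):
    print(n, round(majority_correct(n, 0.6), 4))
```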
## Political arithmetic
During the 19th century, statistics grew up as the empirical science of the state and gained preeminence as a form of social knowledge. Population and economic numbers had been collected, though often not in a systematic way, since ancient times and in many countries. In Europe the late 17th century was an important time also for quantitative studies of disease, population, and wealth. In 1662 the English statistician John Graunt published a celebrated collection of numbers and observations pertaining to mortality in London, using records that had been collected to chart the advance and decline of the plague (see the table). In the 1680s the English political economist and statistician William Petty published a series of essays on a new science of “political arithmetic,” which combined statistical records with bold—some thought fanciful—calculations, such as, for example, of the monetary value of all those living in Ireland. These studies accelerated in the 18th century and were increasingly supported by state activity, though ancien régime governments often kept the numbers secret. Administrators and savants used the numbers to assess and enhance state power but also as part of an emerging “science of man.” The most assiduous, and perhaps the most renowned, of these political arithmeticians was the Prussian pastor Johann Peter Süssmilch, whose study of the divine order in human births and deaths was first published in 1741 and grew to three fat volumes by 1765. The decisive proof of Divine Providence in these demographic affairs was their regularity and order, perfectly arranged to promote man’s fulfillment of what he called God’s first commandment, to be fruitful and multiply. Still, he did not leave such matters to nature and to God, but rather he offered abundant advice about how kings and princes could promote the growth of their populations. 
He envisioned a rather spartan order of small farmers, paying modest rents and taxes, living without luxury, and practicing the Protestant faith. Roman Catholicism was unacceptable on account of priestly celibacy.
"Table of casualties": statistics on mortality in London by cause of death, 1647–60, from John Graunt, Natural and Political Observations (1662). The full table lists more than 60 causes; a few representative rows:

| cause of death | 1647 | 1648 | 1649 | 1650 | 1651 | 1652 | 1653 | 1654 | 1655 | 1656 | 1657 | 1658 | 1659 | 1660 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| abortive, and stillborn | 335 | 329 | 327 | 351 | 389 | 381 | 384 | 433 | 483 | 419 | 463 | 467 | 421 | 544 |
| ague, and fever | 1,260 | 884 | 751 | 970 | 1,038 | 1,212 | 1,282 | 1,371 | 689 | 875 | 999 | 1,800 | 2,303 | 2,148 |
| consumption, and cough | 2,423 | 2,200 | 2,388 | 1,988 | 2,350 | 2,410 | 2,286 | 2,868 | 2,606 | 3,184 | 2,757 | 3,610 | 2,982 | 3,414 |
| plague | 3,597 | 611 | 67 | 15 | 23 | 16 | 6 | 16 | 9 | 6 | 4 | 14 | 36 | 14 |
## Social numbers
Lacking, as they did, complete counts of population, 18th-century practitioners of political arithmetic had to rely largely on conjectures and calculations. In France especially, mathematicians such as Laplace used probability to surmise the accuracy of population figures determined from samples. In the 19th century such methods of estimation fell into disuse, mainly because they were replaced by regular, systematic censuses. The census of the United States, required by the U.S. Constitution and conducted every 10 years beginning in 1790, was among the earliest. (For the role of the U.S. census in spurring the development of the computer, see computer: Herman Hollerith’s census tabulator.) Sweden had begun earlier; most of the leading nations of Europe followed by the mid-19th century. They were also eager to survey the populations of their colonial possessions, which indeed were among the very first places to be counted. A variety of motives can be identified, ranging from the requirements of representative government to the need to raise armies. Some of this counting can scarcely be attributed to any purpose, and indeed the contemporary rage for numbers was by no means limited to counts of human populations. From the mid-18th century and especially after the conclusion of the Napoleonic Wars in 1815, the collection and publication of numbers proliferated in many domains, including experimental physics, land surveys, agriculture, and studies of the weather, tides, and terrestrial magnetism. (For perhaps the best statistical graph ever constructed, see Charles Minard’s 1869 map of Napoleon’s Russian campaign, in which the dwindling width of the lines of advance and retreat shows the size of the army, with the retreat correlated to a temperature scale along the lower portion of the map.) Still, the management of human populations played a decisive role in the statistical enthusiasm of the early 19th century.
Political instabilities associated with the French Revolution of 1789 and the economic changes of early industrialization made social science a great desideratum. A new field of moral statistics grew up to record and comprehend the problems of dirt, disease, crime, ignorance, and poverty.
Some of these investigations were conducted by public bureaus, but much was the work of civic-minded professionals, industrialists, and, especially after midcentury, women such as Florence Nightingale, an innovator in displaying statistical data through graphs. (In 1858 she devised the diagram she named the "coxcomb," which, like a pie chart, indicates frequency by relative area but uses fixed angles and variable radii.) One of the first serious statistical organizations arose in 1832 as section F of the new British Association for the Advancement of Science. The intellectual ties to natural science were uncertain at first, but there were some influential champions of statistics as a mathematical science. The most effective was the Belgian mathematician Adolphe Quetelet, who argued untiringly that mathematical probability was essential for social statistics. Quetelet hoped to create from these materials a new science, which he called at first social mechanics and later social physics. He wrote often of the analogies linking this science to the most mathematical of the natural sciences, celestial mechanics. In practice, though, his methods were more like those of geodesy or meteorology, involving massive collections of data and the effort to detect patterns that might be identified as laws. These, in fact, seemed to abound. He found them in almost every collection of social numbers, beginning with some publications of French criminal statistics from the mid-1820s. The numbers, he announced, were essentially constant from year to year, so steady that one could speak here of statistical laws. If there was something paradoxical in these “laws” of crime, it was nonetheless comforting to find regularities underlying the manifest disorder of social life.
## A new kind of regularity
Even Quetelet had been startled at first by the discovery of these statistical laws. Regularities of births and deaths belonged to the natural order and so were unsurprising, but here was constancy of moral and immoral acts, acts that would normally be attributed to human free will. Was there some mysterious fatalism that drove individuals, even against their will, to fulfill a budget of crimes? Were such actions beyond the reach of human intervention? Quetelet determined that they were not. Nevertheless, he continued to emphasize that the frequencies of such deeds should be understood in terms of causes acting at the level of society, not of choices made by individuals. His view was challenged by moralists, who insisted on complete individual responsibility for thefts, murders, and suicides. Quetelet was not so radical as to deny the legitimacy of punishment, since the system of justice was thought to help regulate crime rates. Yet he spoke of the murderer on the scaffold as himself a victim, part of the sacrifice that society requires for its own conservation. Individually, to be sure, it was perhaps within the power of the criminal to resist the inducements that drove him to his vile act. Collectively, however, crime is but trivially affected by these individual decisions. Not criminals but crime rates form the proper object of social investigation. Reducing them is to be achieved not at the level of the individual but at the level of the legislator, who can improve society by providing moral education or by improving systems of justice. Statisticians have a vital role as well. To them falls the task of studying the effects on society of legislative changes and of recommending measures that could bring about desired improvements.
Quetelet’s arguments inspired a modest debate about the consistency of statistics with human free will. This intensified after 1857, when the English historian Henry Thomas Buckle recited his favourite examples of statistical law to support an uncompromising determinism in his immensely successful History of Civilization in England. Interestingly, probability had been linked to deterministic arguments from very early in its history, at least since the time of Jakob Bernoulli. Laplace argued in his Philosophical Essay on Probabilities (1825) that man’s dependence on probability was simply a consequence of imperfect knowledge. A being who could follow every particle in the universe, and who had unbounded powers of calculation, would be able to know the past and to predict the future with perfect certainty. The statistical determinism inaugurated by Quetelet had a quite different character. Now it was not necessary to know things in infinite detail. At the microlevel, indeed, knowledge often fails, for who can penetrate the human soul so fully as to comprehend why a troubled individual has chosen to take his or her own life? Yet such uncertainty about individuals somehow dissolves in light of a whole society, whose regularities are often more perfect than those of physical systems such as the weather. Not real persons but l’homme moyen, the average man, formed the basis of social physics. This contrast between individual and collective phenomena was, in fact, hard to reconcile with an absolute determinism like Buckle’s. Several critics of his book pointed this out, urging that the distinctive feature of statistical knowledge was precisely its neglect of individuals in favour of mass observations.
## Statistical physics
The same issues were discussed also in physics. Statistical understandings first gained an influential role in physics at just this time, in consequence of papers by the German mathematical physicist Rudolf Clausius from the late 1850s and, especially, of one by the Scottish physicist James Clerk Maxwell published in 1860. Maxwell, at least, was familiar with the social statistical tradition, and he had been sufficiently impressed by Buckle’s History and by the English astronomer John Herschel’s influential essay on Quetelet’s work in the Edinburgh Review (1850) to discuss them in letters. During the 1870s, Maxwell often introduced his gas theory using analogies from social statistics. The first point, a crucial one, was that statistical regularities of vast numbers of molecules were quite sufficient to derive thermodynamic laws relating the pressure, volume, and temperature in gases. Some physicists, including, for a time, the German Max Planck, were troubled by the contrast between a molecular chaos at the microlevel and the very precise laws indicated by physical instruments. They wondered if it made sense to seek a molecular, mechanical grounding for thermodynamic laws. Maxwell invoked the regularities of crime and suicide as analogies to the statistical laws of thermodynamics and as evidence that local uncertainty can give way to large-scale predictability. At the same time, he insisted that statistical physics implied a certain imperfection of knowledge. In physics, as in social science, determinism was very much an issue in the 1850s and ’60s. Maxwell argued that physical determinism could only be speculative, since human knowledge of events at the molecular level is necessarily imperfect. Many of the laws of physics, he said, are like those regularities detected by census officers: they are quite sufficient as a guide to practical life, but they lack the certainty characteristic of abstract dynamics.
## The spread of statistical mathematics
Statisticians, wrote the English statistician Maurice Kendall in 1942, “have already overrun every branch of science with a rapidity of conquest rivaled only by Attila, Mohammed, and the Colorado beetle.” The spread of statistical mathematics through the sciences began, in fact, at least a century before there were any professional statisticians. Even setting aside the use of probability to estimate populations and to make insurance calculations, this history dates back at least to 1809. In that year, the German mathematician Carl Friedrich Gauss published a derivation of the new method of least squares incorporating a mathematical function that soon became known as the astronomer’s curve of error, and later as the Gaussian or normal distribution.
(A classic early illustration of the method: a least squares line fitted to Ruggero Boscovich’s measurements, taken about 1750 near Rome, of meridian arc length, in units of the Paris toise (1 toise = 1.949 metres), plotted against latitude. The fitted line gives the average slope of the measured data, allowing arc lengths to be predicted at other latitudes and thereby the shape of the Earth to be calculated.)

The problem of combining many astronomical observations to give the best possible estimate of one or several parameters had been discussed in the 18th century. The first publication of the method of least squares as a solution to this problem was inspired by a more practical problem, the analysis of French geodetic measures undertaken in order to fix the standard length of the metre. This was the basic measure of length in the new metric system, decreed by the French Revolution and defined as 1/40,000,000 of the longitudinal circumference of the Earth. In 1805 the French mathematician Adrien-Marie Legendre proposed to solve this problem by choosing values that minimize the sums of the squares of deviations of the observations from a point, line, or curve drawn through them. In the simplest case, where all observations were measures of a single point, this method was equivalent to taking an arithmetic mean.
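Legendre's criterion has a closed-form solution for a fitted line, and in the one-point case it reduces to the arithmetic mean, as noted above. A Python sketch with invented numbers:

```python
def least_squares_line(xs, ys):
    """Closed-form slope and intercept minimizing the sum of squared
    vertical deviations of the points (x, y) from the line y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Degenerate case from the text: fitting a single constant to repeated
# measurements of one quantity reduces to the arithmetic mean.
obs = [9.8, 10.1, 10.0, 9.9, 10.2]
mean = sum(obs) / len(obs)

# A small illustrative line fit.
a, b = least_squares_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(a, b)
```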
(A modern illustration of the normal distribution: IQ scores are scaled to a mean of 100 and a standard deviation of 15, so the region within one standard deviation of the mean, from 85 to 115, contains about 68 percent of all scores.)

Gauss soon announced that he had already been using least squares since 1795, a somewhat doubtful claim. After Legendre’s publication, Gauss became interested in the mathematics of least squares, and he showed in 1809 that the method gave the best possible estimate of a parameter if the errors of the measurements were assumed to follow the normal distribution. This distribution, whose importance for mathematical probability and statistics was decisive, was first shown by the French mathematician Abraham de Moivre in the 1730s to be the limit (as the number of events increases) of the binomial distribution. In particular, this meant that a continuous function (the normal distribution) and the power of calculus could be substituted for a discrete function (the binomial distribution) and laborious numerical methods. Laplace used the normal distribution extensively as part of his strategy for applying probability to very large numbers of events. The most important problem of this kind in the 18th century involved estimating populations from smaller samples. Laplace also had an important role in reformulating the method of least squares as a problem of probabilities. For much of the 19th century, least squares was overwhelmingly the most important instance of statistics in its guise as a tool of estimation and the measurement of uncertainty. It had an important role in astronomy, geodesy, and related measurement disciplines, including even quantitative psychology. Later, about 1900, it provided a mathematical basis for a broader field of statistics that came to be used across a wide range of disciplines.
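De Moivre's limit theorem can be checked numerically: the binomial probabilities approach a normal density with mean np and standard deviation sqrt(np(1-p)). A Python sketch with illustrative parameters:

```python
from math import comb, exp, pi, sqrt

def binom_pmf(n, p, k):
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    """Density of the normal (Gaussian) distribution at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# De Moivre's observation: for large n the binomial pmf is well
# approximated by a normal curve with mu = n*p, sigma = sqrt(n*p*(1-p)).
n, p = 100, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))
for k in (40, 50, 60):
    print(k, binom_pmf(n, p, k), normal_pdf(k, mu, sigma))
```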
## Statistical theories in the sciences
The role of probability and statistics in the sciences was not limited to estimation and measurement. Equally significant, and no less important for the formation of the mathematical field, were statistical theories of collective phenomena that bypassed the study of individuals. The social science bearing the name statistics was the prototype of this approach. Quetelet advanced its mathematical level by incorporating the normal distribution into it. He argued that human traits of every sort, from chest circumference and height to the distribution of propensities to marry or commit crimes, conformed to the astronomer’s error law. (Quetelet showed that the chest measurements of 5,732 Scottish soldiers, published in 1817, follow a normal distribution, the first time a human characteristic had been shown to do so.) The kinetic theory of gases of Clausius, Maxwell, and the Austrian physicist Ludwig Boltzmann was also a statistical one. Here it was not the imprecision or uncertainty of scientific measurements but the motions of the molecules themselves to which statistical understandings and probabilistic mathematics were applied. Once again, the error law played a crucial role. The Maxwell-Boltzmann distribution law of molecular velocities, as it has come to be known, is a three-dimensional version of this same function. In importing it into physics, Maxwell drew both on astronomical error theory and on Quetelet’s social physics.
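The connection between the error law and molecular speeds can be illustrated by simulation: if each of the three velocity components is an independent normal "error", the resulting speeds follow the Maxwell-Boltzmann distribution. A Python sketch in arbitrary units (sigma here is an illustrative scale parameter, not a physical constant):

```python
import random
from math import sqrt, pi

random.seed(0)

sigma = 1.0  # scale of each velocity component, arbitrary units
# Speed = length of a 3-D velocity vector with independent normal components.
speeds = [sqrt(sum(random.gauss(0, sigma) ** 2 for _ in range(3)))
          for _ in range(100_000)]

sample_mean = sum(speeds) / len(speeds)
theory_mean = 2 * sigma * sqrt(2 / pi)  # mean speed of the Maxwell-Boltzmann law
print(sample_mean, theory_mean)
```

The sample mean of the simulated speeds agrees closely with the theoretical mean of the Maxwell-Boltzmann distribution for the same scale parameter.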
## Biometry
The English biometric school developed from the work of the polymath Francis Galton, cousin of Charles Darwin. Galton admired Quetelet, but he was critical of the statistician’s obsession with mean values rather than variation. The normal law, as he began to call it, was for him a way to measure and analyze variability. This was especially important for studies of biological evolution, since Darwin’s theory was about natural selection acting on natural diversity. A figure from Galton’s 1877 paper on breeding sweet peas shows a physical model, now known as the Galton board, that he employed to explain the normal distribution of inherited characteristics; in particular, he used his model to explain the tendency of progeny to have the same variance as their parents, a process he called reversion, subsequently known as regression to the mean. Galton was also founder of the eugenics movement, which called for guiding the evolution of human populations the same way that breeders improve chickens or cows. He developed measures of the transmission of parental characteristics to their offspring: the children of exceptional parents were generally somewhat exceptional themselves, but there was always, on average, some reversion or regression toward the population mean. He developed the elementary mathematics of regression and correlation as a theory of hereditary transmission and thus as statistical biological theory rather than as a mathematical tool. However, Galton came to recognize that these methods could be applied to data in many fields, and by 1889, when he published his Natural Inheritance, he stressed the flexibility and adaptability of his statistical tools.
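The Galton board lends itself to a few lines of simulation: each ball's final bin is the number of rightward bounces, a binomial count whose histogram approaches the normal curve as the number of rows grows. A Python sketch (row and ball counts are illustrative):

```python
import random
from collections import Counter

random.seed(1)

def galton_board(rows, balls):
    """Simulate Galton's board: each ball bounces left (0) or right (1)
    at each of `rows` pegs; the final bin is the number of rights."""
    return Counter(sum(random.choice((0, 1)) for _ in range(rows))
                   for _ in range(balls))

bins = galton_board(rows=10, balls=20_000)
# The central bin (5 rights out of 10) should be the most populated.
print(sorted(bins.items()))
```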
Still, evolution and eugenics remained central to the development of statistical mathematics. The most influential site for the development of statistics was the biometric laboratory set up at University College London by Galton’s admirer, the applied mathematician Karl Pearson. From about 1892 he collaborated with the English biologist Walter F.R. Weldon on quantitative studies of evolution, and he soon began to attract an assortment of students from many countries and disciplines who hoped to learn the new statistical methods. Their journal, Biometrika, was for many years the most important venue for publishing new statistical tools and for displaying their uses.
Biometry was not the only source of new developments in statistics at the turn of the 20th century. German social statisticians such as Wilhelm Lexis had turned to more mathematical approaches some decades earlier. In England, the economist Francis Edgeworth became interested in statistical mathematics in the early 1880s. One of Pearson’s earliest students, George Udny Yule, turned away from biometry and especially from eugenics in favour of the statistical investigation of social data. Nevertheless, biometry provided an important model, and many statistical techniques, for other disciplines. The 20th-century fields of psychometrics, concerned especially with mental testing, and econometrics, which focused on economic time series, reveal this relationship in their very names.
## Samples and experiments
Near the beginning of the 20th century, sampling regained its respectability in social statistics, for reasons that at first had little to do with mathematics. Early advocates, such as the first director of the Norwegian Central Bureau of Statistics, A.N. Kiaer, thought of their task primarily in terms of attaining representativeness in relation to the most important variables—for example, geographic region, urban and rural, rich and poor. The London statistician Arthur Bowley was among the first to urge that sampling should involve an element of randomness. Jerzy Neyman, a statistician from Poland who had worked for a time in Pearson’s laboratory, wrote a particularly decisive mathematical paper on the topic in 1934. His method of stratified sampling incorporated a concern for representativeness across the most important variables, but it also required that the individuals sampled should be chosen randomly. This was designed to avoid selection biases but also to create populations to which probability theory could be applied to calculate expected errors. George Gallup achieved fame in 1936 when his polls, employing stratified sampling, successfully predicted the reelection of Franklin Delano Roosevelt, in defiance of the Literary Digest’s much larger but uncontrolled survey, which forecast a landslide for the Republican Alfred Landon.
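Neyman-style stratified random sampling can be sketched as follows. The two strata, their sizes, and their means are invented for illustration; the point is that each stratum is sampled randomly and the stratum means are recombined with known population weights:

```python
import random

random.seed(2)

# A toy population with two strata of different sizes and means
# (say, urban and rural incomes); all numbers are illustrative.
urban = [random.gauss(50, 10) for _ in range(8_000)]
rural = [random.gauss(30, 10) for _ in range(2_000)]
population = urban + rural
true_mean = sum(population) / len(population)

def stratified_estimate(strata, weights, n):
    """Draw a random sample from each stratum in proportion to its
    population weight, then combine the stratum means with those weights."""
    est = 0.0
    for stratum, w in zip(strata, weights):
        k = max(1, round(n * w))
        sample = random.sample(stratum, k)
        est += w * sum(sample) / k
    return est

est = stratified_estimate([urban, rural], [0.8, 0.2], n=200)
print(true_mean, est)
```

With only 200 of 10,000 units sampled, the stratified estimate lands close to the true population mean, and the randomness of the draw is what lets probability theory quantify the expected error.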
The alliance of statistical tools and experimental design was also largely an achievement of the 20th century. Here, too, randomization came to be seen as central. The emerging protocol called for the establishment of experimental and control populations and for the use of chance where possible to decide which individuals would receive the experimental treatment. These experimental repertoires emerged gradually in educational psychology during the 1900s and ’10s. They were codified and given a full mathematical basis in the next two decades by Ronald A. Fisher, the most influential of all the 20th-century statisticians. Through randomized, controlled experiments and statistical analysis, he argued, scientists could move beyond mere correlation to causal knowledge even in fields whose phenomena are highly complex and variable. His ideas of experimental design and analysis helped to reshape many disciplines, including psychology, ecology, and therapeutic research in medicine, especially during the triumphant era of quantification after 1945.
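Fisher's randomization logic can be sketched as a permutation test: if the treatment has no effect, group labels are exchangeable, so we ask how often a random relabeling produces a mean difference at least as large as the one observed. The plot yields below are invented for illustration:

```python
import random

random.seed(3)

def randomization_test(treated, control, reps=10_000):
    """One-sided randomization test: share of random relabelings whose
    treated-minus-control mean difference reaches the observed one."""
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = treated + control
    n_t = len(treated)
    count = 0
    for _ in range(reps):
        random.shuffle(pooled)
        diff = (sum(pooled[:n_t]) / n_t
                - sum(pooled[n_t:]) / (len(pooled) - n_t))
        if diff >= observed:
            count += 1
    return count / reps

# Illustrative yields from fertilized vs. unfertilized plots.
p_value = randomization_test([31, 30, 28, 33, 32], [25, 26, 27, 24, 28])
print(p_value)
```

A small p-value here says that so large a difference rarely arises from the random assignment alone, which is exactly the causal inference the randomized design licenses.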
## The modern role of statistics
In some ways, statistics has finally achieved the Enlightenment aspiration to create a logic of uncertainty. Statistical tools are at work in almost every area of life, including agriculture, business, engineering, medicine, law, regulation, and social policy, as well as in the physical, biological, and social sciences and even in parts of the academic humanities. The replacement of human “computers” with mechanical and then electronic ones in the 20th century greatly lightened the immense burdens of calculation that statistical analysis once required. Statistical tests are used to assess whether observed results, such as increased harvests where fertilizer is applied, or improved earnings where early childhood education is provided, give reasonable assurance of causation rather than reflecting merely random fluctuations. Following World War II, conventional significance levels virtually came to define an acceptable result in some of the sciences and also in policy applications.
From about 1930 there grew up in Britain and America—and a bit later in other countries—a profession of statisticians, experts in inference, who defined standards of experimentation as well as methods of analysis in many fields. To be sure, statistics in the various disciplines retained a fair degree of specificity. There were also divergent schools of statisticians, who disagreed, often vehemently, on some issues of fundamental importance. Fisher was highly critical of Pearson; Neyman and Egon Pearson, while unsympathetic to father Karl’s methods, disagreed also with Fisher’s. Under the banner of Bayesianism appeared yet another school, which, against its predecessors, emphasized the need for subjective assessments of prior probabilities. The most immoderate ambitions for statistics as the royal road to scientific inference depended on unacknowledged compromises that ignored or dismissed these disputes. Despite them, statistics has thrived as a somewhat heterogeneous but powerful set of tools, methods, and forms of expertise that continues to regulate the acquisition and interpretation of quantitative data.
Source: https://cstheory.stackexchange.com/questions/34722/current-research-topics-in-tree-automata
# Current research topics in tree automata
What are current research topics connected with tree automata?
I'm particularly interested in the connections between automata, logic, and databases.
Kind regards,
XYZ
• Can downvoters explain what is wrong with the question or offer suggestions to improve the question?
– usul
May 12, 2016 at 15:35
• @usul, I have voted to close for it seems too broad and violates this policy. May 13, 2016 at 19:49
Here is a short list of authors that work on the connections between tree automata, logic and databases. For each author, I will just give one paper, but many more can be found on the respective web pages of these authors.
[1] Luc Segoufin
FO2(+1,<.~) on data trees, data tree automata and branching vector addition systems
A note on monadic datalog on unranked trees
Automata for Data Words and Data Trees.
You may also check Georg Gottlob's work
I would focus your research in the "devops" space until you find a hard problem that could lead to a good theory question. Many configuration files out there need property checkers over tree structures.
Pay particular attention to the Curry/Uncurry material in the third chapter of Drewes' lecture notes. I think that is the kernel of an idea to make Tree Automata much more useful.
$B^{A}$ counts the functions from a set $A$ to a set $B$.
$C^{A\times B} = (C^{B})^{A}$ is an algebraic identity you learn in Jr. High (for exponents). For functions it means you can pass in a tuple of arguments all at once, or pass the arguments one at a time to a chain of single-argument functions. Simple, but very useful in refactoring software.
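The curry/uncurry idea, i.e. the isomorphism $(C^{B})^{A} \cong C^{A\times B}$, is easy to demonstrate in code (an illustrative sketch, not taken from the lecture notes):

```python
def curry(f):
    # Turn a two-argument function f(a, b) into a chain f(a)(b).
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    # Invert curry: turn g(a)(b) back into a two-argument function.
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
add5 = curry(add)(5)   # partial application falls out for free
print(add5(3))         # 8
print(uncurry(curry(add))(2, 4) == add(2, 4))  # True
```

Partial application, as in `add5`, is the refactoring payoff: a fixed first argument yields a reusable one-argument function.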
Space is another issue. Succinct representations of trees are very efficient, but rarely used in databases. You can easily get a 20x speedup by using succinct trees.
Also anything in Celko's book. Worst SQL project I ever had was dealing with a DAG in a database. Lots of open research questions on how to add tree operations to databases.
Blass (1994), Seven Trees in One
Fiore (2004), Isomorphisms of Generic Recursive Polynomial Types
Drewes (2009), Lecture Notes on Tree Automata
Sadakane 2011, Succinct Trees: Theory and Practice
- Expressivity of particular classes (complexity of deciding if a language is definable in some class, separability, etc.).
- Refining the complexity bounds for traditional problems (ex: intersection).
- Tree transducers (almost half of Drewes' book mentioned by Chad deals with them. There is also a transducer paper in FOCS'15).
- Extending the traditional regular tree languages with operations on 'data values' (comparisons).
More generally, I would advise looking at ICALP, STACS, MFCS, LATA, CIAA (and perhaps DLT). There is even a workshop -- TTATT -- dedicated to tree automata.
• Regarding the connection with databases, this is more 'subjective', but I have a feeling that tree automata are not very popular for processing data in traditional databases. Most applications of tree automata I have witnessed are either for reasoning about data (checking properties) or XML-related query language formalisms. May 13, 2016 at 9:18
• Ed Kmett gave an amazing lecture on this recently, youtube.com/watch?v=uA0Z7_4J7u8 Well worth the watch. May 13, 2016 at 18:37
• Unless I'm mistaken the lecture about succinct data structures has connections to tree(XML) processing but little (nothing) to do with tree automata. May 14, 2016 at 14:45
• Correct. Just data structures. Bahr has a Tree Automata package but I haven't played with it, github.com/pa-ba/compdata-automata May 15, 2016 at 13:33
| 854
| 3,498
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.59375
| 3
|
CC-MAIN-2024-22
|
latest
|
en
| 0.927516
|
http://de.metamath.org/mpeuni/mp3and.html
| 1,721,650,066,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-30/segments/1720763517846.73/warc/CC-MAIN-20240722095039-20240722125039-00537.warc.gz
| 5,365,069
| 3,400
|
Metamath Proof Explorer < Previous Next > Nearby theorems Mirrors > Home > MPE Home > Th. List > mp3and Structured version Visualization version GIF version
Theorem mp3and 1419
Description: A deduction based on modus ponens. (Contributed by Mario Carneiro, 24-Dec-2016.)
Hypotheses
Ref Expression
mp3and.1 (𝜑 → 𝜓)
mp3and.2 (𝜑 → 𝜒)
mp3and.3 (𝜑 → 𝜃)
mp3and.4 (𝜑 → ((𝜓 ∧ 𝜒 ∧ 𝜃) → 𝜏))
Assertion
Ref Expression
mp3and (𝜑 → 𝜏)
Proof of Theorem mp3and
StepHypRef Expression
1 mp3and.1 . . 3 (𝜑 → 𝜓)
2 mp3and.2 . . 3 (𝜑 → 𝜒)
3 mp3and.3 . . 3 (𝜑 → 𝜃)
4 1, 2, 3 3jca 1235 . 2 (𝜑 → (𝜓 ∧ 𝜒 ∧ 𝜃))
5 mp3and.4 . 2 (𝜑 → ((𝜓 ∧ 𝜒 ∧ 𝜃) → 𝜏))
6 4, 5 mpd 15 . 1 (𝜑 → 𝜏)
Colors of variables: wff setvar class Syntax hints: → wi 4 ∧ w3a 1031 This theorem was proved from axioms: ax-mp 5 ax-1 6 ax-2 7 ax-3 8 This theorem depends on definitions: df-bi 196 df-an 385 df-3an 1033 This theorem is referenced by: eqsupd 8246 eqinfd 8274 mreexexlemd 16127 mhmlem 17358 nn0gsumfz 18203 mdetunilem3 20239 mdetunilem9 20245 axtgeucl 25171 wwlkextprop 26272 measdivcst 29615 btwnouttr2 31299 btwnexch2 31300 cgrsub 31322 btwnconn1lem2 31365 btwnconn1lem5 31368 btwnconn1lem6 31369 segcon2 31382 btwnoutside 31402 broutsideof3 31403 outsideoftr 31406 outsideofeq 31407 lineelsb2 31425 relowlssretop 32387 lshpkrlem6 33420 fmuldfeq 38650 stoweidlem5 38898 wwlksnextprop 41118 el0ldep 42049 ldepspr 42056
Copyright terms: Public domain W3C validator
| 733
| 1,453
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.328125
| 3
|
CC-MAIN-2024-30
|
latest
|
en
| 0.144738
|
http://mathhelpforum.com/algebra/179719-roots-5-degree-polynomial-print.html
| 1,529,358,187,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2018-26/segments/1529267861163.5/warc/CC-MAIN-20180618203134-20180618223134-00445.warc.gz
| 207,436,968
| 2,801
|
# roots of a 5-degree polynomial
Printable View
• May 6th 2011, 08:31 AM
Sambit
roots of a 5-degree polynomial
Here is another equation where I am stuck trying to find the exact number of real roots: $\displaystyle x^5+x^3-2x+1=0$. Descartes' rule says it has $\displaystyle 0$ or $\displaystyle 2$ +ve real roots and $\displaystyle 1$ -ve real root. How do I know (apart from WolframAlpha, of course, which says it has no +ve root) how many +ve real roots this equation has? I see $\displaystyle f(0),f(1),f(2)$ all are $\displaystyle >0$. But I want something more general to rely on.
Thanks
• May 6th 2011, 08:47 AM
TheEmptySet
First note that every odd degree polynomial has at least one real root.
$\displaystyle f(x)=x^5+x^3-2x+1 \implies f'(x)=5x^4+3x^2-2 =(5x^2-2)(x^2+1)$
So the function is increasing on
$\displaystyle \left(-\infty, -\sqrt{\frac{2}{5}}\right) \cup \left(\sqrt{\frac{2}{5}},\infty\right)$
and has a max at
$\displaystyle x=-\sqrt{\frac{2}{5}}$
and a min at
$\displaystyle x=\sqrt{\frac{2}{5}}$
Note that
$\displaystyle f\left(-\sqrt{\frac{2}{5}} \right) > 0 \text{ and } f\left( \sqrt{\frac{2}{5}}\right) > 0$
So the function has one zero in
$\displaystyle \left(-\infty, -\sqrt{\frac{2}{5}}\right)$
and then is never negative again so it has only one real zero.
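As a numerical sanity check (not part of the analytic argument above), NumPy's root finder agrees that there is exactly one real zero:

```python
import numpy as np

# Coefficients of x^5 + x^3 - 2x + 1, highest degree first.
coeffs = [1, 0, 1, 0, -2, 1]
roots = np.roots(coeffs)

# Keep only the (numerically) real roots.
real_roots = [r.real for r in roots if abs(r.imag) < 1e-9]
print(real_roots)  # a single negative root, between -1.2 and -1.1
```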
| 450
| 1,301
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4
| 4
|
CC-MAIN-2018-26
|
latest
|
en
| 0.783483
|
https://www.geosci-model-dev.net/9/413/2016/
| 1,544,741,267,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2018-51/segments/1544376825112.63/warc/CC-MAIN-20181213215347-20181214000847-00080.warc.gz
| 892,745,903
| 18,140
|
Geoscientific Model Development An interactive open-access journal of the European Geosciences Union
# Journal metrics
• IF 4.252
• IF 5-year 4.890
• CiteScore 4.49
• SNIP 1.539
• SJR 2.404
• IPP 4.28
• h5-index 40
• Scimago H index 51
# Abstracted/indexed
Geosci. Model Dev., 9, 413-429, 2016
https://doi.org/10.5194/gmd-9-413-2016
Development and technical paper 29 Jan 2016
# A flexible importance sampling method for integrating subgrid processes
E. K. Raut and V. E. Larson E. K. Raut and V. E. Larson
• University of Wisconsin – Milwaukee, Department of Mathematical Sciences, Milwaukee, WI, USA
Abstract. Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales.
The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories.
The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
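The core idea (weight each draw by the reciprocal of its sampling density, and draw more points where the integrand matters) can be sketched generically in Python. This is a toy illustration with invented names, not the SILHS code:

```python
import math
import random

random.seed(0)

def importance_estimate(f, sample, pdf, n=200_000):
    # Unbiased Monte Carlo estimate of the integral of f: average f(x)/p(x)
    # over draws x ~ p, where p is the sampling (proposal) density.
    return sum(f(x) / pdf(x) for x in (sample() for _ in range(n))) / n

# Integrate f(x) = x^2 over [0, 1]; the true value is 1/3.
# The proposal p(x) = 2x places more sample points where f is large.
f = lambda x: x ** 2
sample = lambda: math.sqrt(random.random())  # inverse-CDF draw from p(x) = 2x
pdf = lambda x: 2.0 * x

est = importance_estimate(f, sample, pdf)
print(round(est, 3))
```

In the paper's setting, the prescribed per-category sample densities play the role of `pdf` here, and the physical process rate plays the role of `f`.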
Short summary
Numerical models of weather and climate can estimate grid-box-averaged rates of physical processes such as microphysics using Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain into categories, and allows the modeler to prescribe the sampling density in each category.
| 594
| 2,598
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.515625
| 3
|
CC-MAIN-2018-51
|
latest
|
en
| 0.805131
|
http://www.answers.com/topic/frank-fraser-darling
| 1,495,487,661,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2017-22/segments/1495463607120.76/warc/CC-MAIN-20170522211031-20170522231031-00009.warc.gz
| 404,821,258
| 65,720
|
# Where is Fraser Island?
Fraser Island is located in the South Pacific Ocean, off thesoutheast Queensland coast of Australia. It is near Hervey Bay andMaryborough, and about 200 km north of the Queens (MORE)
# What is darles in Spanish?
To give them. "Darles una leccion" - "To give 'em a lesson".
Thanks for the feedback!
In Uncategorized
# What is the company called House of Fraser?
a store that sells womens, mens and childrens clothing.also sells household goods. they have been around for 160 years. they are the leading national store in the united kingd (MORE)
# Who is Hannah Fraser?
She is a human who can hold her breath really long and dive really deep without any equipment. To go faster she had a tail made, like a mermaid... SHE'S A MERMAID! Hannah Fra (MORE)
# Why did dawn fraser get kicked out of swimming?
She received a 10 year ban from the Australian Swimming Union for alleged misbehavior at the 1964 Summer Olympics in Tokyo. Her alleged offenses included marching in the 1964 (MORE)
# Darling who sings this song its just called darling?
There was a hit song in the 70's called Darlin' by Frankie Miller there is also a song called darling by eyes set to kill
Thanks for the feedback!
In Uncategorized
# What is better the you phone 5c or 5s?
the 5s, because it has better service, but it doesn't have different colors, just silver, gold and black
# How does erosion affect Fraser Island?
Erosion affected Fraser Island because it helped form the sand that makes up Fraser Island
# What is the latitude and longitude for fraser island?
latitude is 38.8468 SE and longitude 73.7472 W === Answer #2: Even if you change the longitude tag in the first answer to ' E ', the coordinates given in the answer ma (MORE)
# What is the answer to 20c plus 5 equals 5c plus 65?
20c + 5 = 5c + 65 Divide through by 5: 4c + 1 = c + 13 Subtract c from both sides: 3c + 1 = 13 Subtract 1 from both sides: 3c = 12 Divide both sides by 3: c = 4
Thanks for the feedback!
| 512
| 1,993
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.890625
| 3
|
CC-MAIN-2017-22
|
longest
|
en
| 0.953394
|
http://www.polarhome.com/service/man/?qf=csqrtl&tf=2&of=FreeBSD&sf=3
| 1,516,404,998,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2018-05/segments/1516084888302.37/warc/CC-MAIN-20180119224212-20180120004212-00461.warc.gz
| 541,063,774
| 5,135
|
csqrtl man page on FreeBSD
```CSQRT(3) BSD Library Functions Manual CSQRT(3)
NAME
csqrt, csqrtf, csqrtl — complex square root functions
LIBRARY
Math Library (libm, -lm)
SYNOPSIS
#include <complex.h>
double complex
csqrt(double complex z);
float complex
csqrtf(float complex z);
long double complex
csqrtl(long double complex z);
DESCRIPTION
The csqrt(), csqrtf(), and csqrtl() functions compute the square root of
z in the complex plane, with a branch cut along the negative real axis.
In other words, csqrt(), csqrtf(), and csqrtl() always return the square
root whose real part is non-negative.
RETURN VALUES
These functions return the requested square root. The square root of 0
is +0 ± 0, where the imaginary parts of the input and respective result
have the same sign. For infinities and NaNs, the following rules apply,
with the earlier rules having precedence:
Input Result
k + ∞*I ∞ + ∞*I (for all k)
-∞ + NaN*I NaN ± ∞*I
∞ + NaN*I ∞ + NaN*I
k + NaN*I NaN + NaN*I
NaN + k*I NaN + NaN*I
-∞ + k*I +0 + ∞*I
∞ + k*I ∞ + 0*I
For numbers with negative imaginary parts, the above special cases apply
given the identity:
csqrt(conj(z)) = conj(csqrt(z))
Note that the sign of NaN is indeterminate. Also, if the real or imagi‐
nary part of the input is finite and an NaN is generated, an invalid
exception will be thrown.
SEE ALSO
     cabs(3), fenv(3), math(3)
STANDARDS
The csqrt(), csqrtf(), and csqrtl() functions conform to ISO/IEC
9899:1999 (“ISO C99”).
BUGS
For csqrt() and csqrtl(), inexact results are not always correctly
rounded.
BSD March 30, 2008 BSD
```
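Python's `cmath` module follows the same C99 branch-cut convention, so the behaviour documented above can be spot-checked interactively (an illustration, not part of the man page):

```python
import cmath

# The branch cut lies along the negative real axis, so the returned
# square root always has a non-negative real part.
r = cmath.sqrt(-4 + 0j)
print(r)  # purely imaginary, approximately 2j

# The real part stays non-negative for arbitrary inputs as well.
for z in (3 + 4j, -3 + 4j, -3 - 4j):
    assert cmath.sqrt(z).real >= 0
```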
| 831
| 2,982
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.59375
| 3
|
CC-MAIN-2018-05
|
latest
|
en
| 0.503644
|
https://math.stackexchange.com/questions/1550977/prove-that-for-some-p-q-a-p-a-pa-p1-cdots-all-are-positive
| 1,566,234,994,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-35/segments/1566027314852.37/warc/CC-MAIN-20190819160107-20190819182107-00506.warc.gz
| 575,302,067
| 32,337
|
# prove that, for some $p$ & $q, a_p, a_p+a_{p+1} + \cdots$ all are positive
Let $a_1,a_2,\ldots ,a_{100}$ be real numbers, each less than one, satisfying $a_1+a_2+\cdots+a_{100} > 1$.
Show that there exist two integers $p$ and $q$ , $p<q$, such that the numbers
$$a_q, a_q+a_{q-1}, \ldots, a_q+\cdots+a_p,$$
$$a_p, a_p+a_{p+1},\ldots,a_p+\cdots+a_q$$
are all positive.
I proved that if $n$ is the smallest integer such that $a_1+a_2+\cdots+a_n>1$ then all the sums $a_n,a_n+a_{n-1},\ldots,a_n+\cdots+a_1$ are positive. Will this help to prove?
source: Test of Math at 10+2 level( A collection of old ISI B.stat & B.math entrance exam question papers)
• Can someone suggest me a better tag for the question? – Akshay Hegde Nov 30 '15 at 4:53
• Your tags are fine. The question is really a combinatorics question (and not quite an inequality question), and it indeed sounds like contest-math. However, it is best to clearly state where you got the problem from, so that we know it's not an ongoing contest (otherwise the question is not allowed). – user21820 Nov 30 '15 at 5:16
[I found this solution collaboratively with someone else offline.] $\def\nn{\mathbb{N}}$ $\def\rr{\mathbb{R}}$
Let $T(n) = (\text{The theorem is true for any length-}n\text{ sequence from } \rr)$, for any $n \in \nn$.
If $T(n)$ is false for some $n \in \nn$:
Let $m \in \nn$ be the minimum such that $T(m)$ is false [by well-ordering].
Let $a_{1..m}$ be a sequence that does not satisfy the theorem.
For any $p \in [1..m]$:
If $\sum_{k=1}^p a_k \le 0$:
$\sum_{k=p+1}^m a_k \ge \sum_{k=1}^m a_k > 1$.
Also $a_{p+1..m}$ satisfies the theorem [by minimality of $m$].
Thus some segment of $a_{p+1..m}$ has initial and terminal segments all with positive sum.
But any segment of $a_{p+1..m}$ is also a segment of $a_{1..m}$.
Thus $a_{1..m}$ satisfies the theorem, which gives a contradiction.
Therefore $\sum_{k=1}^p a_k > 0$.
Similarly $\sum_{k=p}^m a_k > 0$.
Therefore $a_{1..m}$ has initial and terminal segments all with positive sum.
Thus $a_{1..m}$ satisfies the theorem, which gives a contradiction.
Therefore $T(n)$ is true for any $n \in \nn$.
• I didn't get some parts of the answer: 1) $a_{1..m}$for a sequence? 2) $\sum ^{p}_{k=1} \leq 0$? – Akshay Hegde Nov 30 '15 at 5:03
• @AkshayHegde: (1) $a_{1..m}$ is just a short-hand for $a_1,a_2,\cdots,a_m$. Same for $[1..m]$ being a short-hand for the collection of all integers from $1$ to $m$. (2) Sorry I missed out the expression to be summed. Edited. – user21820 Nov 30 '15 at 5:13
• @AkshayHegde: So do you get it? – user21820 Dec 1 '15 at 7:44
• Yeah got it... struggled initially though @user21820 – Akshay Hegde Dec 1 '15 at 9:24
• @AkshayHegde: Ah okay great. In general, considering the smallest counter-example (where size is a natural number) can help, even though well-ordering is equivalent to induction. This is because when attempting a proof by contradiction you would start with existence of a counter-example, which by well-ordering immediately gives existence of a smallest counter-example, which is often a lot more information to work with. – user21820 Dec 1 '15 at 10:16
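For intuition, the theorem can also be checked by brute force on random instances. This is an illustrative script (the function name is invented), not a proof:

```python
import random

def find_pq(a):
    # Search for p < q such that every initial sum a_p, a_p+a_{p+1}, ...
    # and every terminal sum a_q, a_q+a_{q-1}, ... over a[p..q] is positive.
    n = len(a)
    for p in range(n):
        for q in range(p + 1, n):
            seg = a[p:q + 1]
            pre, ok = 0.0, True
            for x in seg:              # initial (prefix) sums
                pre += x
                if pre <= 0:
                    ok = False
                    break
            if not ok:
                continue
            suf = 0.0
            for x in reversed(seg):    # terminal (suffix) sums
                suf += x
                if suf <= 0:
                    ok = False
                    break
            if ok:
                return p, q
    return None

# Random instances: entries < 1, total sum > 1, as in the problem.
random.seed(42)
for _ in range(50):
    a = [random.uniform(-1, 1) for _ in range(12)]
    while sum(a) <= 1:
        a = [random.uniform(-1, 1) for _ in range(12)]
    assert find_pq(a) is not None
print("all random instances have a good pair (p, q)")
```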
If the sum is positive, at least one element is positive. Pick that one as "sequence" (of length 1).
• Please elaborate.. I didn't understand – Akshay Hegde Dec 26 '15 at 3:32
| 1,092
| 3,319
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.546875
| 4
|
CC-MAIN-2019-35
|
latest
|
en
| 0.801234
|
https://www.scribd.com/document/187191058/SOLID-MECHANICS-QUESTION-BANK
| 1,566,288,421,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-35/segments/1566027315258.34/warc/CC-MAIN-20190820070415-20190820092415-00038.warc.gz
| 959,372,248
| 59,456
|
You are on page 1of 2
# Encl. 35 a.
## : QUESTION BANK AE 2203
UNIT V TB 1 TB 2 REF 1 Qs. No. TLP No.1
SOLID MECHANICS
Bi Axial Stresses Nash William Strength of Materials, TMH, 1998 Timoshenko.S. and Young D.H. Elements of strength materials Vol. I and Vol. II., Strength of Materials R.K.Rajput Univ.Qs Questions Stresses in thin circular cylinder and spherical shell under internal pressure Textb ook No. REF 1 REF 1 REF 1 REF 1 REF 1 REF 1 Ans - Page Nos. Mark
1. 2. 3. 4. 5. 6.
7. 8. 9.
10.
11.
## 12. 13. 14.
TLP No.2
Define thin cylindrical shells What are the stresses can act in the cylinder when there is an internal May09 pressure? Define Hoop stress. Define Longitudinal stress. What is the formula for Permissible tensile stress in thin cylindrical shell? A thin cylindrical shell of diameter 300mm and wall thickness 6mm has hemispherical ends. If there is no distortion of the junction under pressure determine the thickness of hemispherical ends. Take: E= 208GN/m2. And Poissons ratio = 0.3 Define Built-up cylindrical shells. Derive Change in dimension of thin cylindrical shell due to an internal pressure. A boiler shell is to be made of 15mm thick plate having tensile stress of May 10 120MN/m2. If the efficiencies of longitudinal and circumferential joints are 70% and 30% respectively, determine: (i) Maximum permissible diameter of the shell for an internal pressure of 2MN/m2. (ii) Permissible intensity of internal pressure when the shell diameter is 1.5m. A cylindrical vessel whose ends are closed by means of rigid flange May 11 plates is made of steel plate 3mm thick. The internal diameter of vessel is 50cm and 25cm respectively. Determine the longitudinal and circumferential stresses in the cylindrical shell due to an internal fluid pressure of 3MN/m2.also calculate increase in length, diameter and volume of the vessel. Take: E=200GN/m2, and Poissons ratio = 0.3 A built up cylindrical shell of 300mm dia, 3m long and 6mm thick is 2 subjected to an internal pressure of 2MN/m . Calculate change in length, dia and volume of cylinder if the efficiencies of longitudinal and circumferential joints are 80% and 50% respectively. E= 200GN/m2, m=3.5. Define spherical shells. Derive Change in dimensions for spherical shell under internal pressure. Calculate the increase in volume of spherical shell 1m in diameter and 2 1cm thick when it is subjected to an internal pressure of 1.6MN/m . Take E=200GN/m2 and Poissons ratio=0.3
Volumetric strain
## 589 590 590 590 592 593
2 2 2 2 2
4
REF 1 REF 1 REF 1
2 8
16
REF 1
598
16
REF 1
600 16
609
15.
TLP No.3
## Define volumetric strain.
May 09 -
REF 1 REF 1 REF 1 REF 1 REF 1 REF 1 REF 1 REF 1 REF 1 REF 1
## 4 330 331 94 95 101
2 2 2 2 2
16. 17.
TLP No.4
Principal stress and maximum shear stress
18. 19.
TLP No.5
## Define Principal stresses. Define Maximum shear stress.
Analytical and Graphical Methods
20.
21.
A circular bar is subjected to an axial pull of 100kN. If the maximum intensity of shear stress on any oblique plane is not to exceed 60MN/m2. Determine diameter of the bar. A short metallic column of 500mm2 cross sectional area carries an axial compressive load of 100kN. For a plane inclined 600 with the direction of load, calculate: (i) Normal stress
4 REF 1
102 16
Pilivalam P.O, Pudukkottai Dt., Tamil Nadu. Pin - 622 507, Fax: 04333 277125, Ph: 04322 - 320801, 320802, Website: www.mzcet.in, Email: info@mzcet.in Page 1 of 2
## Encl. 35 a. : QUESTION BANK AE 2203
UNIT V
SOLID MECHANICS
Bi Axial Stresses
22.
23.
24.
25.
(ii) Tangential stress (iii) Resultant stress (iv) Maximum shear stress (v) Obliquity of resultant stress A point is subjected to perpendicular stresses of 50MN/m2 and 2 30MN/m , both tensile. Calculate normal, tangential stress and resultant stress and its obliquity on a plane making an angle of 300 with the axis of second stress. Find by analytical method. A point is subjected to perpendicular stresses of 50MN/m2 and 2 30MN/m , both tensile. Calculate normal, tangential stress and resultant stress and its obliquity on a plane making an angle of 300 with the axis of second stress. Find by graphical method. Draw the Mohrs circle for direct stresses of 65MN/m2(tensile) and May 11 35MN/m2(compressive) and estimate the magnitude and direction of the resultant stresses on planes making an angles of 200 and 650 with the plane of first principal stress. Find also the normal and tangential stress on these planes. At a point in a bracket the stresses on two mutually perpendicular planes are 35MN/m2 (tensile) and 15MN/m2 (tensile). The shear stress across these planes is 8MN/m2. Find the magnitude and direction of the resultant stress on a plane making an angle of 400 With the plane of first stress. Find also the normal and tangential stresses on the planes.
REF 1
106 16
REF 1
107 16
REF 1
108 16
REF 1
112 16
## Completion Verified by Concerned Dept. HOD Sign dd/mm/yy
Pilivalam P.O, Pudukkottai Dt., Tamil Nadu. Pin - 622 507, Fax: 04333 277125, Ph: 04322 - 320801, 320802, Website: www.mzcet.in, Email: info@mzcet.in Page 2 of 2
| 1,472
| 5,121
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.140625
| 3
|
CC-MAIN-2019-35
|
latest
|
en
| 0.813997
|
http://www.ncatlab.org/nlab/show/free+groupoid
| 1,448,592,537,000,000,000
|
application/xhtml+xml
|
crawl-data/CC-MAIN-2015-48/segments/1448398447906.82/warc/CC-MAIN-20151124205407-00300-ip-10-71-132-137.ec2.internal.warc.gz
| 577,669,045
| 7,848
|
category theory
# Contents
## Idea
The free groupoid on a directed graph is the groupoid whose objects are the vertices of the graph and whose morphisms are finite concatenations of the edges in the graph and formal inverses to them.
This construction is the left adjoint free construction to the forgetful functor that sends a groupoid to its underlying directed graph.
## Definition
Given a graph $D$, that is, a collection of vertices and of labeled arrows between them, the free groupoid $G(D)$ on $D$ is the groupoid that has the vertices of $D$ as objects, and whose morphisms are constructed recursively by formal composition (i.e., juxtaposition) from identity maps, the arrows of $D$ and formal inverses for the arrows of $D$.
The only relations between morphisms of $G(D)$ are the necessary ones defining the identity of each object, the inverse of each arrow in $D$ and the associativity of composition. This is clearly a groupoid, which comes with an evident morphism $D \to G(D)$ of quivers.
The above sketched construction could be made more precise, but what really matters is the universal property it enjoys: the free groupoid $G(D)$ is the universal (initial) groupoid mapping out of $D$. By varying $D$, the free groupoid yields a functor $G$ from directed graphs to groupoids, left adjoint to the forgetful functor.
This last conceptual characterization is best taken as the definition. Similarly, it is possible to construct the left adjoint to the forgetful functor from groupoids to categories, that is the free groupoid over a category.
The construction of free groupoids in “Topology and Groupoids” is by taking a disjoint union of copies of the unit interval groupoid $\mathbf I$ and then identifying the vertices according to the scheme given by the directed graph.
See the paper by Crisp and Paris for an application of free groupoids.
## Properties
### Fundamental group
###### Proposition
The fundamental group of a free groupoid on a countable directed graph (for any basepoint) is a free group.
For instance (Cote, theorem 2.3).
###### Example
The fundamental group of the free groupoid of a graph with a single vertex is the free group on the set of edges of the graph. A result relevant to the Jordan Curve Theorem and the Phragmen-Brouwer Property is given in the Corrigendum referenced below. It gives conditions on a pushout of groupoids to contain a free groupoid as a retract.
## References
• Lauren Cote, Free groups and graphs: the Hanna Neumann theorem (pdf)
• Philip Higgins, Categories and groupoids, Van Nostrand Reinhold, 1971; Reprints in Theory and Applications of Categories, No. 7 (2005) pp 1-195 (pdf available)
• Ronnie Brown Topology and Groupoids, (details here)
• Omar Antolin Camarena and Ronnie Brown, “Corrigendum to ”Groupoids, the Phragmen-Brouwer Property, and the Jordan Curve Theorem“, J. Homotopy and Related Structures 1 (2006) 175-183.” J. Homotopy and Related Structures (pdf)
• J. Crisp, L. Paris, “The solution to a conjecture of Tits on the subgroup generated by the squares of the generators of an Artin group”, Invent. math. 145, 19–36 (2001).
Revised on June 24, 2015 12:10:02 by Ronnie Brown (31.51.47.223)
| 757
| 3,204
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.671875
| 3
|
CC-MAIN-2015-48
|
latest
|
en
| 0.930366
|
https://www.mathway.com/examples/precalculus/linear-equations/finding-x-and-y-intercepts?id=34
| 1,657,212,534,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00764.warc.gz
| 907,270,319
| 27,813
|
# Precalculus Examples
Find the x and y Intercepts
Step 1
Find the x-intercepts.
To find the x-intercept(s), substitute 0 in for y and solve for x.
Solve the equation.
Rewrite the equation as .
Subtract from both sides of the equation.
x-intercept(s) in point form.
x-intercept(s):
x-intercept(s):
Step 2
Find the y-intercepts.
To find the y-intercept(s), substitute 0 in for x and solve for y.
Solve the equation.
Remove parentheses.
Remove parentheses.
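The general procedure (set y = 0 for the x-intercept, set x = 0 for the y-intercept) can be written as a small helper for a line a·x + b·y = c. This is a generic sketch; the worked example's own equation is not shown in this extract:

```python
def intercepts(a, b, c):
    # Intercepts of the line a*x + b*y = c (assumes a and b are nonzero).
    x_intercept = (c / a, 0.0)   # substitute y = 0 and solve for x
    y_intercept = (0.0, c / b)   # substitute x = 0 and solve for y
    return x_intercept, y_intercept

print(intercepts(2, 3, 6))  # ((3.0, 0.0), (0.0, 2.0))
```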
| 115
| 445
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.96875
| 3
|
CC-MAIN-2022-27
|
longest
|
en
| 0.865106
|
https://www.machinelearningmindset.com/linear-independence-of-vectors/
| 1,695,865,429,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-40/segments/1695233510334.9/warc/CC-MAIN-20230927235044-20230928025044-00866.warc.gz
| 965,260,471
| 33,706
|
This tutorial is dedicated to the concept of linear independence of vectors and its connection to the solution of linear equation systems.
What is the concept of vectors’ linear independence, and why should I care about it? One simple example: let’s say you want to find the answer to the linear system Ax = b. One may say we can simply find the answer by taking an inverse of A, as x = A^(-1) b. NOT SO FAST! There is more to it. What if A does not have an inverse, or is not even square? You just let it go? I guess not.
That was just one example. In Machine Learning, it frequently happens that you want to explore the correlation between vectors to analyze them better. For example, you are dealing with a matrix which in essence, is formed by vectors. What would you know if you are not familiar with the concept of linear independence?
In this tutorial, you will learn the following:
Before You Move On
### The Concept of Linear Independence
Assume we have a set of n column vectors {x_1, ..., x_n}, each of size m. We call this set linearly independent if no vector in it can be represented as a linear combination of the other vectors in the set. Although, perhaps it is easier to define linearly dependent: a vector is linearly dependent if we can express it as a linear combination of other vectors in the set, as below: x_j = sum of a_i x_i over i ≠ j.
In the above case, we say the set of vectors is linearly dependent!
### Example
Consider the three vectors below:
The above set is linearly dependent. Why? It is simple: because w = 2v + u. Let's verify the above with Python and NumPy:
# Import Numpy library
import numpy as np
# Define three column vectors
v = np.array([1, -1, 2]).reshape(-1,1)
u = np.array([0, 3, 1]).reshape(-1,1)
w = np.array([2, 1, 5]).reshape(-1,1)
# Check the linear dependency with writing the equality
print('Does the equality w = 2v+u holds? Answer:', np.all(w == 2*v+u))
Run the above code and see if Numpy confirms that or not!
### The Relationship With Matrix Rank
I talked about the linear dependence of vectors so far. Assume we have a matrix A with m rows and n columns. Let's focus on the columns: the n columns form n vectors, and I denote the i-th column by a_i. So we have a set of n column vectors. The size of the largest subset of these columns that is linearly independent is called the column rank of the matrix. Considering the rows of the matrix, the size of the largest subset of rows that forms a linearly independent set is called the row rank of the matrix.
We have the following properties for matrix ranks:
1. For a matrix A with m rows and n columns, rank(A) <= min(m, n). If rank(A) = min(m, n), the matrix is full rank.
2. For matrices A (m x n) and B (n x p), rank(AB) <= min(rank(A), rank(B)).
3. For matrices A (m x n) and B (n x p), rank(A) + rank(B) - n <= rank(AB) (Sylvester's rank inequality).
Check first and second property above with the following code:
# Import Numpy library
import numpy as np
from numpy.linalg import matrix_rank
# Define random 3x4 matrix using np.array
# Ref: https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.randint.html
M = np.random.randint(10, size=(3, 4))
N = np.random.randint(10, size=(4, 3))
# np.all() test whether all array elements are True.
checkProperty = np.all(matrix_rank(M) <= min(M.shape[0],M.shape[1]))
if checkProperty: print('Property rank(M) <= min(M.shape[0],M.shape[1]) is confirmed!')
checkProperty = np.all(matrix_rank(np.matmul(M,N)) <= min(matrix_rank(M),matrix_rank(N)))
if checkProperty: print('Property rank(MN) <= min(rank(M),rank(N)) is confirmed!')
Practice: Modify the above code and check property (3).
### Linear Equations
I talked about linear dependency and matrix ranks. After that, I would like to discuss their application in finding the solution of linear equations, which is of great importance. Consider the following equality, which sets up a system of linear equations: Ax = b.
Above, we see the matrix A multiplied by the vector x forms another vector b. The equality creates a set of m linear equations. Let's write line i, for example: A_i1 x_1 + A_i2 x_2 + ... + A_in x_n = b_i.
So the question is how many solutions exist for the system of equations Ax = b. By a solution I mean the possible values for the variables x_1, ..., x_n. The answer is one of the following:
• There is NO solution.
• The system has one unique solution.
• We have infinite numbers of solutions.
As you observed, having more than one BUT less than infinity solutions is off the table!
Theorem: If x and y are two solutions of the equation Ax = b, then the specific linear combination of them below is a solution as well: z = a*x + (1 - a)*y.
PROOF: Look at the equations below to see how z is also a solution: Az = A(a*x + (1 - a)*y) = a*Ax + (1 - a)*Ay = a*b + (1 - a)*b = b.
So the above proof shows that if we have more than one solution, then we have an infinite number of solutions!
The following two conditions determine the number of solutions if there is at least one solution! Try to prove them based on what you learned so far:
• Ax = b has at least one solution if Rank(A) = Rank([A b]).
• Ax = b has exactly one solution if Rank(A) = Rank([A b]) = n.
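The two rank conditions above can be tested numerically by comparing Rank(A) with the rank of the augmented matrix [A b]. The matrices below are illustrative examples of mine, not from the article:

```python
import numpy as np
from numpy.linalg import matrix_rank

# A is rank-deficient: its second row is twice the first.
A = np.array([[1., 2.],
              [2., 4.]])
b_ok  = np.array([[3.], [6.]])  # lies in the column space of A
b_bad = np.array([[3.], [7.]])  # does not

def has_solution(A, b):
    # Ax = b is solvable iff appending b does not increase the rank.
    return matrix_rank(np.hstack([A, b])) == matrix_rank(A)

print(has_solution(A, b_ok))   # True  (here: infinitely many solutions)
print(has_solution(A, b_bad))  # False (no solution)
```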
### Conclusion
In this tutorial, I discussed the concept of linear independence of vectors and its association with systems of linear equations. This concept is crucial, especially in Machine Learning and optimization theory, in which we deal with all sorts of mathematical proofs necessary to justify why a method should work! For the majority of what you may do in Machine Learning, you may not need to use what I talked about here directly. BUT, you need to know it if you would like to stand out. Do you see any ambiguous concept here? Anything missing or wrong? Feel free to ask, as it will help me, yourself, and the others to learn better, and I can further improve this article.
| 1,240
| 5,454
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.5
| 4
|
CC-MAIN-2023-40
|
latest
|
en
| 0.930266
|
https://www.roseindia.net/answers/viewqa/Java-Beginners/30800-code-in-java.html
| 1,508,444,029,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2017-43/segments/1508187823462.26/warc/CC-MAIN-20171019194011-20171019214011-00733.warc.gz
| 999,151,917
| 17,718
|
# code in java
siddhant rastogi
code in java
0 Answer(s) 4 years and 2 months ago
Posted in : Java Beginners
In NASA, two researchers, Mathew and John, started their work on a new planet, but while practicing research they faced a mathematical difficulty. In order to save the time they divided their work.
So scientist Mathew worked on a piece and invented a number computed with the following formula:
T(n) = n(n+1)/2 These numbers are called Mathew numbers.
And scientist John invented another number which is built by adding the squares of its digits. Doing this perpetually, the numbers will end in 1 or 4. If a positive integer ends with 1, then it is called John number. Example of John numbers is:
13 = 1^2 + 3^2 = 1 + 9 = 10 (Step 1). 10 = 1^2 + 0^2 = 1 + 0 = 1 (Step 2); the iteration ends at Step 2 since the number ends with 1. Help Mathew and John combine their research work by finding the numbers in a given range that satisfy both properties.
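The question received no answers on the page; here is one possible sketch (the range 1..100 and the function names are my own assumptions, not part of the original post). A Mathew number is a triangular number T(n) = n(n+1)/2, and a John number is one whose repeated digit-square sum ends at 1:

```python
def is_mathew(x):
    # x is triangular iff x = n(n+1)/2 for some positive integer n.
    n = int(((8 * x + 1) ** 0.5 - 1) / 2)
    return x in (n * (n + 1) // 2, (n + 1) * (n + 2) // 2)

def is_john(x):
    # Repeatedly replace x by the sum of the squares of its digits;
    # the iteration is known to reach either 1 or 4.
    while x not in (1, 4):
        x = sum(int(d) ** 2 for d in str(x))
    return x == 1

def mathew_john_numbers(lo, hi):
    return [x for x in range(lo, hi + 1) if is_mathew(x) and is_john(x)]

print(mathew_john_numbers(1, 100))  # [1, 10, 28, 91]
```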
| 1,273
| 5,231
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.734375
| 3
|
CC-MAIN-2017-43
|
longest
|
en
| 0.869436
|
https://origin.geeksforgeeks.org/find-smallest-possible-number-from-a-given-large-number-with-same-count-of-digits/
| 1,675,780,017,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00114.warc.gz
| 449,311,119
| 32,368
|
Find smallest possible Number from a given large Number with same count of digits
• Last Updated : 20 Dec, 2022
Given a number K of length N, the task is to find the smallest possible number that can be formed from K of N digits by swapping the digits any number of times.
Examples:
Input: N = 15, K = 325343273113434
Output: 112233333344457
Explanation:
The smallest number possible after swapping the digits of the given number is 112233333344457
Input: N = 7, K = 3416781
Output: 1134678
Approach: The idea is to use Hashing. To implement the hash, an array arr[] of size 10 is created. The given number is iterated and the count of occurrence of every digit is stored in the hash at the corresponding index. Then iterate the hash array and print the ith digit according to its frequency. The output will be the smallest required number of N digits.
Below is the implementation of the above approach:
C++
// C++ implementation of the above approach
#include <bits/stdc++.h>
using namespace std;

// Function for finding the smallest
// possible number after swapping
// the digits any number of times
string smallestPoss(string s, int n)
{
    // Variable to store the final answer
    string ans = "";

    // Array to store the count of
    // occurrence of each digit
    int arr[10] = { 0 };

    // Loop to calculate the number
    // of occurrences of every digit
    for (int i = 0; i < n; i++) {
        arr[s[i] - 48]++;
    }

    // Loop to get smallest number
    for (int i = 0; i < 10; i++) {
        for (int j = 0; j < arr[i]; j++)
            ans = ans + to_string(i);
    }

    // Returning the answer
    return ans;
}

// Driver code
int main()
{
    int N = 15;
    string K = "325343273113434";
    cout << smallestPoss(K, N);
    return 0;
}
Java
// Java implementation of the above approach
import java.util.*;
import java.io.*;

class GFG {

    // Function for finding the smallest
    // possible number after swapping
    // the digits any number of times
    static String smallestPoss(String s, int n)
    {
        // Variable to store the final answer
        String ans = "";

        // Array to store the count of
        // occurrence of each digit
        int arr[] = new int[10];

        // Loop to calculate the number
        // of occurrences of every digit
        for (int i = 0; i < n; i++) {
            arr[s.charAt(i) - 48]++;
        }

        // Loop to get smallest number
        for (int i = 0; i < 10; i++) {
            for (int j = 0; j < arr[i]; j++)
                ans = ans + String.valueOf(i);
        }

        // Returning the answer
        return ans;
    }

    // Driver code
    public static void main(String[] args)
    {
        int N = 15;
        String K = "325343273113434";
        System.out.print(smallestPoss(K, N));
    }
}
// This code is contributed by PrinciRaj1992
Python3
# Python3 implementation of the above approach

# Function for finding the smallest
# possible number after swapping
# the digits any number of times
def smallestPoss(s, n):

    # Variable to store the final answer
    ans = "";

    # Array to store the count of
    # occurrence of each digit
    arr = [0] * 10;

    # Loop to calculate the number
    # of occurrences of every digit
    for i in range(n):
        arr[ord(s[i]) - 48] += 1;

    # Loop to get smallest number
    for i in range(10):
        for j in range(arr[i]):
            ans = ans + str(i);

    # Returning the answer
    return ans;

# Driver code
if __name__ == '__main__':
    N = 15;
    K = "325343273113434";
    print(smallestPoss(K, N));

# This code is contributed by 29AjayKumar
C#
// C# implementation of the above approach
using System;

class GFG {

    // Function for finding the smallest
    // possible number after swapping
    // the digits any number of times
    static String smallestPoss(String s, int n)
    {
        // Variable to store the readonly answer
        String ans = "";

        // Array to store the count of
        // occurrence of each digit
        int[] arr = new int[10];

        // Loop to calculate the number
        // of occurrences of every digit
        for (int i = 0; i < n; i++) {
            arr[s[i] - 48]++;
        }

        // Loop to get smallest number
        for (int i = 0; i < 10; i++) {
            for (int j = 0; j < arr[i]; j++)
                ans = ans + String.Join("", i);
        }

        // Returning the answer
        return ans;
    }

    // Driver code
    public static void Main(String[] args)
    {
        int N = 15;
        String K = "325343273113434";
        Console.Write(smallestPoss(K, N));
    }
}
// This code is contributed by PrinciRaj1992
Javascript
Output:
112233333344457
Time Complexity: O(N)
Auxiliary Space: O(N), for the output string of N digits plus a constant-size count array of 10 entries.
| 1,313
| 4,673
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.421875
| 3
|
CC-MAIN-2023-06
|
latest
|
en
| 0.517218
|
http://webphysics.davidson.edu/physlet_resources/bu_semester2/c06_spheres.html
| 1,386,819,465,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2013-48/segments/1386164447901/warc/CC-MAIN-20131204134047-00015-ip-10-33-133-15.ec2.internal.warc.gz
| 256,479,159
| 3,248
|
#### Potential and Charged Spheres
Note that in the simulation above, the insulating sphere has a net charge +Q and the conducting sphere has a net charge -5Q. Drag the red circle left or right to plot field or potential as a function of r.
For a charged sphere of radius R, what is the potential due to the sphere? If the charge is symmetrically distributed, the potential outside a sphere of charge Q is the same as that from a point charge Q placed at the center of the sphere. This is what happened for electric field, too.
V = kQ/r
What happens inside the sphere? If the sphere is a conductor the potential is the same everywhere throughout the conductor, and is equal to the value of the potential at the surface:
V = kQ/R.
If the sphere is an insulator with a uniform charge density, the potential is not constant because there is a field inside the insulator. We showed previously that the field inside a uniformly charged insulator is:
E = kQr/R^3
Starting from some point a distance r from the center and moving out to the edge of the sphere, the potential changes by an amount:
ΔV = V(R) - V(r) = -∫E·ds = -∫E dr = -(kQ/R^3) ∫ r dr
where the limits of the integral are from r to R.
Integrating gives:
V(R) - V(r) = -[kQ/2R^3] (R^2 - r^2)
V(R) is simply kQ/R, so:
For r < R, V(r) = [kQ/2R] (3 - r^2/R^2)
#### Concentric Spheres
Now we'll put the two cases together. The insulating sphere at the center has a charge +Q uniformly distributed over it, and has a radius R. The concentric conducting shell has inner radius 1.5R and outer radius 2R. It has a net charge of -5Q.
What is the electric field as a function of r?
What is the electric potential as a function of r?
Let's try the field first:
For r > 2R, E = 4kQ/r^2 and points toward the center
For 1.5R < r < 2R, E = 0
For R < r < 1.5R, E = kQ/r^2, directed away from the center
For r < R, E = kQr/R^3, directed away from the center
The potential is a little trickier:
For r > 2R, V(r) = -4kQ/r
The potential looks like the potential from a -4Q charge.
For 1.5R < r < 2R, V(r) = -4kQ/2R = -2kQ/R
The potential inside the conducting shell is constant, and is equal to the value it has at the outside of the shell.
For R < r < 1.5R, V(r) = kQ/r - kQ/1.5R - 2kQ/R
From the previous region, we know that V(1.5R) = -2kQ/R. We also know how the potential changes as we move closer to a point charge:
VA - VB = kq/rA - kq/rB
Here VA = V(r), VB = V(1.5R) = -2kQ/R, q=+Q, rA = r, and rB = 1.5R.
For r < R, V(r) = [-kQ/6R](7 + 3r^2/R^2)
From the previous region, we have V(R) = -5kQ/3R.
We derived previously that the potential difference between a point on the edge of an insulating sphere and a point inside is:
V(R) - V(r) = -[kQ/2R^3] (R^2 - r^2)
Solving for V(r) gives:
V(r) = -5kQ/3R + kQ/2R - kQr^2/2R^3
V(r) = -10kQ/6R + 3kQ/6R - 3kQr^2/6R^3
For r < R, V(r) = [-kQ/6R](7 + 3r^2/R^2)
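As a sanity check on the piecewise formulas above, the sketch below (mine, in units where k = Q = R = 1) evaluates each branch near the region boundaries and confirms the potential is continuous at r = R, 1.5R, and 2R:

```python
def V(r):
    # Piecewise potential for the concentric-spheres example (k = Q = R = 1).
    if r > 2.0:
        return -4.0 / r                      # outside: looks like a -4Q point charge
    if r >= 1.5:
        return -2.0                          # inside the conducting shell: constant
    if r >= 1.0:
        return 1.0 / r - 1.0 / 1.5 - 2.0     # between insulator and shell
    return -(7.0 + 3.0 * r * r) / 6.0        # inside the insulator

eps = 1e-9
for boundary in (1.0, 1.5, 2.0):
    # The potential must be continuous across each boundary.
    assert abs(V(boundary - eps) - V(boundary + eps)) < 1e-6

print(V(1.0))  # approximately -5/3, matching V(R) = -5kQ/3R in these units
```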
| 923
| 2,848
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.15625
| 4
|
CC-MAIN-2013-48
|
longest
|
en
| 0.941904
|
https://mycbseguide.com/blog/ncert-solutions-class-12-maths-exercise-9-6/
| 1,558,626,508,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-22/segments/1558232257259.71/warc/CC-MAIN-20190523143923-20190523165923-00231.warc.gz
| 552,958,415
| 52,076
|
# NCERT Solutions class 12 Maths Exercise 9.6
Download NCERT solutions for Differential Equations as PDF.
## NCERT Solutions class 12 Maths Differential Equations
For each of the following differential equations given in Questions 1 to 4, find the general solution:
1.
Ans. Given: Differential equation
Comparing with , we have P = 3 and Q = .
I.F. =
Solution is (I.F.) =
……….(i)
Applying product rule, I =
Again applying product rule, I =
I =
I =
I =
Putting the value of I in eq. (i),
2.
Ans. Given: Differential equation
Comparing with , we have P = 2 and Q = .
I.F. =
Solution is (I.F.) =
3.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
4.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
For each of the following differential equations given in Question 5 to 8, find the general solution:
5.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
…….(i)
Putting and differentiating
Applying product rule,
Putting this value in eq. (i),
6.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
7.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
Applying Product rule of Integration,
8.
Ans. Given: Differential equation
[to make unity]
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
For each of the following differential equations given in Question 9 to 12, find the general solution:
9.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
=
I.F. =
Solution is (I.F.) =
Applying product rule of Integration,
=
### 10.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
Applying product rule of Integration,
= =
### 11.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
### 12.
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F. =
Solution is (I.F.) =
### For each of the differential equations given in Questions 13 to 15, find a particular solution satisfying the given condition:
13. when
Ans. Given: Differential equation when
Comparing with , we have P = and Q = .
I.F.=
Solution is (I.F.) =
### 14. when
Ans. Given: Differential equation when
Comparing with , we have P = and Q = .
I.F.=
Solution is (I.F.) =
……….(i)
Now putting
Putting the value of in eq. (i),
### 15. when
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F.=
Solution is (I.F.) =
……….(i)
Now putting in eq. (i),
Putting in eq. (i),
### 16. Find the equation of the curve passing through the origin, given that the slope of the tangent to the curve at any point is equal to the sum of coordinates of that point.
Ans. Slope of the tangent to the curve at any point = Sum of coordinates of the point
Comparing with , we have P = and Q = .
I.F.=
Solution is (I.F.) =
Applying Product rule of Integration,
……….(i)
Now, since curve (i) passes through the origin (0, 0), therefore putting in eq. (i)
Putting in eq. (i),
### 17. Find the equation of the curve passing through the point (0, 2) given that the sum of the coordinates of any point on the curve exceeds the magnitude of the slope of the tangents to the curve at that point by 5.
Ans. According to the question, Sum of the coordinates of any point say on the curve
= Magnitude of the slope of the tangent to the curve + 5
Comparing with , we have P = and Q = .
I.F.=
Solution is (I.F.) =
Applying Product rule of Integration,
……….(i)
Now, since curve (i) passes through the point (0, 2), therefore putting in eq. (i)
Putting in eq. (i),
### 18. Choose the correct answer:
The integrating factor of the differential equation is:
(A)
(B)
(C)
(D)
Ans. Given: Differential equation
Comparing with , we have P = and Q = .
I.F.=
Therefore, option (C) is correct.
### 19. Choose the correct answer:
The integrating factor of the differential equation
(A)
(B)
(C)
(D)
Ans. Given: Differential equation
Comparing with , we have P = and Q =
=
I.F. =
Therefore, option (A) is correct.
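All of the solutions above use the same integrating-factor recipe for a linear differential equation dy/dx + P(x) y = Q(x): compute I.F. = e^(∫P dx), then y·(I.F.) = ∫ Q·(I.F.) dx + C. The specific equations on this page were lost in extraction, so the sketch below checks the recipe numerically on an assumed example, dy/dx + (1/x) y = x^2, whose integrating factor is x and whose general solution works out to y = x^3/4 + C/x:

```python
# Verify numerically that y = x**3/4 + C/x solves dy/dx + (1/x)*y = x**2.
# (Example equation assumed for illustration; not one of the NCERT problems.)

C = 5.0

def y(x):
    return x**3 / 4.0 + C / x

def dydx(x, h=1e-6):
    # Central finite-difference approximation of the derivative.
    return (y(x + h) - y(x - h)) / (2.0 * h)

for x in (0.5, 1.0, 2.0, 3.0):
    residual = dydx(x) + y(x) / x - x**2
    assert abs(residual) < 1e-4

print("general solution verified at sample points")
```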
## NCERT Solutions class 12 Maths Exercise 9.6
NCERT Solutions Class 12 Maths PDF (Download) Free from myCBSEguide app and myCBSEguide website. Ncert solution class 12 Maths includes text book solutions from both part 1 and part 2. NCERT Solutions for CBSE Class 12 Maths have total 20 chapters. 12 Maths NCERT Solutions in PDF for free Download on our website. Ncert Maths class 12 solutions PDF and Maths ncert class 12 PDF solutions with latest modifications and as per the latest CBSE syllabus are only available in myCBSEguide
## CBSE App for Students
To download NCERT Solutions for class 12 Physics, Chemistry, Biology, History, Political Science, Economics, Geography, Computer Science, Home Science, Accountancy, Business Studies and Home Science; do check myCBSEguide app or website. myCBSEguide provides sample papers with solution, test papers for chapter-wise practice, NCERT solutions, NCERT Exemplar solutions, quick revision notes for ready reference, CBSE guess papers and CBSE important question papers. Sample Paper all are made available through the best app for CBSE students and myCBSEguide website.
### 5 thoughts on “NCERT Solutions class 12 Maths Exercise 9.6”
1. Good job
2. In answer No. 19 in the differential equation there is dy/dx given but they treated it as dx/dy … How is it possible
3. best
4. Thank you sir so much
5. Thanku for provide easy methods of answer
| 1,614
| 6,015
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.46875
| 4
|
CC-MAIN-2019-22
|
longest
|
en
| 0.904035
|
https://drorbn.net/index.php?title=06-240/Classnotes_For_Tuesday,_September_12
| 1,718,889,022,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-26/segments/1718198861940.83/warc/CC-MAIN-20240620105805-20240620135805-00320.warc.gz
| 185,586,232
| 8,268
|
# 06-240/Classnotes For Tuesday, September 12
• PDF notes by User:Harbansb: September 12 Notes.
• If I have made an error in my notes, or you would like the editable OpenOffice file, feel free to e-mail me at harbansb@msn.com.
• PDF notes by User:Alla: Week 1 Lecture 1 notes
• Below are a couple of lemmata critical to the derivation we did in class - the Professor left this little work to the students:
# Notes
## The Real Numbers
The Real Numbers are a set (denoted by ${\displaystyle \mathbb {R} }$) along with two binary operations: + (plus) and · (times) and two special elements: 0 (zero) and 1 (one), such that the following laws hold true:
${\displaystyle \mathbb {R} 1}$: ${\displaystyle \forall a,b\in \mathbb {R} }$ we have ${\displaystyle a+b=b+a}$ and ${\displaystyle a\cdot b=b\cdot a}$ (The Commutative Laws)
${\displaystyle \mathbb {R} 2}$: ${\displaystyle \forall a,b,c\in \mathbb {R} }$ we have ${\displaystyle (a+b)+c=a+(b+c)}$ and ${\displaystyle (a\cdot b)\cdot c=a\cdot (b\cdot c)}$ (The Associative Laws)
${\displaystyle \mathbb {R} 3}$: ${\displaystyle 0}$ is an additive unit and ${\displaystyle 1}$ is a multiplicative unit (The Existence of Units/Identities)
${\displaystyle \mathbb {R} 4}$: ${\displaystyle \forall a\in \mathbb {R} \ \exists b\in \mathbb {R} {\mbox{ s.t.}}\ a+b=0}$ (The Existence of Additive Inverses)
This is incomplete.
| 434
| 1,341
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.921875
| 4
|
CC-MAIN-2024-26
|
latest
|
en
| 0.818002
|
http://www.drumtom.com/q/saturated-solution-of-x-at-20-c-using-31-0-g-of-water-how-much-more-solute-can-be-dissolved-if-the-temperature-is-increased-to-3
| 1,481,099,515,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-50/segments/1480698542009.32/warc/CC-MAIN-20161202170902-00310-ip-10-31-129-80.ec2.internal.warc.gz
| 416,102,220
| 8,366
|
# Saturated solution of X at 20∘C using 31.0 g of water. How much more solute can be dissolved if the temperature is increased to 30∘C?
Find right answers right now! Saturated solution of X at 20∘C using 31.0 g of water. How much more solute can be dissolved if the temperature is increased ...
Positive: 59 %
... You have prepared a saturated solution of X at 20C using 39.0g of water. How much more solute can be dissolved ... dissolved if the temperature is ...
Positive: 56 %
### More resources
... of solute in a saturated solution containing 100 mL or 100 g of water at a certain temperature ... of KCl can be dissolved in 100 mL of water?
Positive: 59 %
... heating the solution can increase the amount of solute ... 30(C to 60(C? Using your graph, how much ... 30(C ? 100 g of water with a temperature ...
Positive: 54 %
Calculating At 20°C, how much baking soda can dissolve in ... to 100 g of water at 20°C. ... saturated solution,the extra solute can rapidly deposit out ...
| 299
| 1,124
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.328125
| 3
|
CC-MAIN-2016-50
|
longest
|
en
| 0.903287
|
https://gitlab.inria.fr/why3/why3/blame/e3231d7fd447719f2830c4a856f21e7f1b9637ff/examples/verifythis_2016_matrix_multiplication/naive.mlw
| 1,586,515,567,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2020-16/segments/1585371893683.94/warc/CC-MAIN-20200410075105-20200410105605-00458.warc.gz
| 484,699,223
| 11,736
|
naive.mlw 1.59 KB
module MatrixMultiplication

  use import int.Int
  use import int.Sum
  use import map.Map
  use import matrix.Matrix

  function mul_atom (a b: matrix int) (i j: int) : int -> int =
    fun k -> a.elts[i][k] * b.elts[k][j]

  predicate matrix_product (m a b: matrix int) =
    forall i j. 0 <= i < m.rows -> 0 <= j < m.columns ->
      m.elts[i][j] = sum (mul_atom a b i j) 0 a.columns

  let mult_naive (a b: matrix int) : matrix int
    requires { a.columns = b.rows }
    ensures { result.rows = a.rows /\ result.columns = b.columns }
    ensures { matrix_product result a b }
  = let rs = make (rows a) (columns b) 0 in
    for i = 0 to a.rows - 1 do
      invariant { forall i0 j0. i <= i0 < rows a /\ 0 <= j0 < columns b ->
        rs.elts[i0][j0] = 0 }
      invariant { forall i0 j0. 0 <= i0 < i /\ 0 <= j0 < columns b ->
        rs.elts[i0][j0] = sum (mul_atom a b i0 j0) 0 a.columns }
      label M in
      for k = 0 to rows b - 1 do
        invariant { forall i0 j0. 0 <= i0 < rows a /\ 0 <= j0 < columns b ->
          i0 <> i -> rs.elts[i0][j0] = (rs at M).elts[i0][j0] }
        invariant { forall j0. 0 <= j0 < columns b ->
          rs.elts[i][j0] = sum (mul_atom a b i j0) 0 k }
        label I in
        for j = 0 to columns b - 1 do
          invariant { forall i0 j0. 0 <= i0 < rows a /\ 0 <= j0 < columns b ->
            (i0 <> i \/ j0 >= j) -> rs.elts[i0][j0] = (rs at I).elts[i0][j0] }
          invariant { forall j0. 0 <= j0 < j ->
            rs.elts[i][j0] = sum (mul_atom a b i j0) 0 (k+1) }
          set rs i j (get rs i j + get a i k * get b k j)
        done;
      done;
    done;
    rs
end
| 942
| 2,477
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.65625
| 3
|
CC-MAIN-2020-16
|
latest
|
en
| 0.592492
|
https://scoodle.co.uk/questions/in/maths/what-s-the-nth-term-for-7101316-2
| 1,643,407,337,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-05/segments/1642320306346.64/warc/CC-MAIN-20220128212503-20220129002503-00243.warc.gz
| 552,673,500
| 28,969
|
MATHS
Asked by Youcef
# What's the nth term for 7,10,13,16?
1st step: Find the difference. The pattern is to add three each time: 7+3=10, 10+3=13 etc. So we know our starting point is 3n. 2nd step: How much do we need to add or take away to get the first term? 7 is the first term in this sequence, and our starting point for the nth term is 3n. To get 7 from 3 I need to do 3+4. Therefore the nth term for this sequence is 3n + 4. Hope this helped!
Jayne Danielle
·
54 students helped
The difference between each number is 3, so this becomes 3n. If you then put the 3 times table under the sequence you will see that you need to add 4 to match your sequence, for example 3+4=7, 6+4=10, 9+4=13. This becomes the final part of the nth term, therefore the nth term is 3n+4.
Khadijah Ahmed
·
37 students helped
We can see that to get each term in the sequence we add 3. Also, if we think about what number would come before 7, we can see 7-3=4. This tells us that the nth term is given by: 3n+4
Luke Brooke
·
225 students helped
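The rule all three tutors derive (common difference 3, so the nth term is 3n + 4) can be checked with a short script; the function name is ours, for illustration:

```python
def nth_term(n: int) -> int:
    """nth term of the arithmetic sequence 7, 10, 13, 16, ...

    The common difference 3 gives the 3n part; 3*1 = 3 needs +4
    to reach the first term 7, hence 3n + 4.
    """
    return 3 * n + 4

# The first four terms match the given sequence.
print([nth_term(n) for n in range(1, 5)])  # [7, 10, 13, 16]
```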
| 678
| 2,353
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.15625
| 4
|
CC-MAIN-2022-05
|
latest
|
en
| 0.939572
|
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-8-test-page-597/14
| 1,537,580,385,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2018-39/segments/1537267158001.44/warc/CC-MAIN-20180922005340-20180922025740-00118.warc.gz
| 746,963,724
| 13,068
|
## Prealgebra (7th Edition)
Eighties-plus is projected to be 3.7%. $0.037\times326,000,000=12,062,000\approx12,000,000$
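The arithmetic in this answer can be reproduced directly (3.7% of the 326,000,000 projection, rounded to the nearest million):

```python
population = 326_000_000
share = 0.037  # 3.7% projected to be eighties-plus

eighties_plus = share * population
rounded = round(eighties_plus, -6)  # round to the nearest million

print(int(eighties_plus), int(rounded))  # 12062000 12000000
```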
| 47
| 120
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.171875
| 3
|
CC-MAIN-2018-39
|
longest
|
en
| 0.833238
|
https://academic-answers.net/instructions-this-assignment-will-give-you-an-opportunity-to-apply-the-concepts-taught-in-this-unit-you-will-complete-this-assignment-in-two-parts-part-1-requires-a-short-written-response-part-2/
| 1,686,342,733,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-23/segments/1685224656833.99/warc/CC-MAIN-20230609201549-20230609231549-00189.warc.gz
| 101,165,413
| 12,697
|
# Instructions This assignment will give you an opportunity to apply the concepts taught in this unit. You will complete this assignment in two parts. Part 1 requires a short written response. Part 2
Instructions
This assignment will give you an opportunity to apply the concepts taught in this unit. You will complete this assignment in two parts.
Part 1 requires a short written response.
Part 2 involves working problems based on provided background information. Examples of how to complete these problems can be found in the Unit I Introduction and associated videos presented in the unit.
Both parts of the assignment will be completed using a worksheet on which you will show your work and provide your answers to the questions listed. A link to the worksheet is provided below the instructions.
Part 1
Tyson and Ella work at the Ruby Red Movie Theater in town. After work, they decide to watch a movie. After purchasing their tickets, they stop by the concession stand and purchase popcorn, drinks, and candy. Use the circular flow diagram to describe the purchases that Tyson and Ella made and the services and goods that were provided to them.
Your response must be at least 75 words in length.
Part 2
Background information:
As mentioned in Part 1 of this assignment, Tyson and Ella work at the Ruby Red Movie Theater. Tyson can produce 100 bags of popcorn or 50 hot dogs in one hour. His coworker, Ella, can produce 100 bags of popcorn or 30 hot dogs in an hour. Answer the following questions based on this information. Use the worksheet to show your work and provide your answers.
Part A:
If Tyson and Ella attempted to produce both popcorn and hot dogs, how many bags of popcorn and hot dogs could each produce individually per hour? What would be the total number of bags of popcorn and hot dogs produced by the two workers combined? (Show your work.)
Part B:
Calculate the opportunity cost of producing bags of popcorn for each worker. (Show your work.)
Part C:
Calculate the opportunity cost of producing hot dogs for each person. (Show your work.)
Part D:
Determine how many bags of popcorn should be produced by each worker per hour. (Show your work).
If each worker should specialize in producing popcorn or hot dogs, explain why; use economic terminology that you have learned in this unit in your explanation.
Finally, how many total bags of popcorn and hot dogs will be produced per hour by the two workers combined after specialization?
Part E:
What potential ethical issues could arise from making the decision to have both employees specialize in producing popcorn or hot dogs? Name and explain at least two issues.
When you are ready to begin your assignment, access the Unit I Assignment Worksheet in Blackboard.
Once you have completed all sections of Parts 1 and 2 of the assignment, you will save and upload the worksheet into Blackboard. Name your file “Unit I Assignment Worksheet_YourName” (replace “YourName” with your own name). Make sure you include your name and class section at the top of the worksheet. Any sources used, including the textbook, must be referenced; paraphrased and quoted material must have accompanying citations. All references and citations used must be in APA Style.Hide Files: UTF-8”UnitI_AssignmentWorksheet%281%29.docx
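For Part 2, the opportunity-cost comparison set up in the background information can be sketched numerically. The per-hour outputs come from the prompt; the function and variable names are ours, and the specialization conclusion follows the standard comparative-advantage rule:

```python
from fractions import Fraction

def opportunity_costs(popcorn_per_hour: int, hot_dogs_per_hour: int):
    """Return (cost of 1 bag of popcorn in hot dogs forgone,
               cost of 1 hot dog in bags of popcorn forgone)."""
    return (Fraction(hot_dogs_per_hour, popcorn_per_hour),
            Fraction(popcorn_per_hour, hot_dogs_per_hour))

tyson = opportunity_costs(100, 50)  # (1/2 hot dog per bag, 2 bags per hot dog)
ella = opportunity_costs(100, 30)   # (3/10 hot dog per bag, 10/3 bags per hot dog)

# Ella gives up less to make popcorn (3/10 < 1/2 hot dog per bag) and
# Tyson gives up less to make hot dogs (2 < 10/3 bags per hot dog), so
# Ella should specialize in popcorn and Tyson in hot dogs.
print(tyson, ella)
```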
| 4,494
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.28125
| 3
|
CC-MAIN-2023-23
|
latest
|
en
| 0.938013
|
https://as1air.com/how-many-calories-in-a-snickers-bar/
| 1,624,317,434,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00572.warc.gz
| 105,947,908
| 14,829
|
# How many calories in a snickers bar
how many calories in a snickers bar is one of the most frequently asked questions.
## Why should I know how many calories in a snickers bar?
He who owns the information, owns the world – said W. Churchill. Today the information lies all around, so this phrase would now sound like this: he who knows where to find information, owns the world. Therefore, to answer the question how many calories in a snickers bar you need to know where to find the answer to it.
## How do I know how many calories in a snickers bar?
Today, there are many calculators for converting one value to another and vice versa. At the touch of a button, you can find out how many calories in a snickers bar. To do this, you need to write in the search box (for example, Google) how many calories in a snickers bar and add to it an additional word: converter or calculator. Choose the calculator you like, and with its help find out how many calories in a snickers bar.
| 221
| 977
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.640625
| 3
|
CC-MAIN-2021-25
|
latest
|
en
| 0.941401
|
https://whatisconvert.com/218-knots-in-meters-second
| 1,606,688,025,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2020-50/segments/1606141203418.47/warc/CC-MAIN-20201129214615-20201130004615-00236.warc.gz
| 544,042,003
| 7,708
|
# What is 218 Knots in Meters/Second?
## Convert 218 Knots to Meters/Second
To calculate 218 Knots to the corresponding value in Meters/Second, multiply the quantity in Knots by 0.514444444444 (conversion factor). In this case we should multiply 218 Knots by 0.514444444444 to get the equivalent result in Meters/Second:
218 Knots x 0.514444444444 = 112.14888888879 Meters/Second
218 Knots is equivalent to 112.14888888879 Meters/Second.
## How to convert from Knots to Meters/Second
The conversion factor from Knots to Meters/Second is 0.514444444444. To find out how many Knots in Meters/Second, multiply by the conversion factor or use the Velocity converter above. Two hundred eighteen Knots is equivalent to one hundred twelve point one four nine Meters/Second.
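The conversion factor 0.514444444444 quoted above comes from the definition of the knot: one nautical mile (1852 m) per hour (3600 s). A short check, with an illustrative function name:

```python
def knots_to_mps(knots: float) -> float:
    """Convert knots to metres per second.

    1 knot = 1852 m / 3600 s, so the exact factor is
    1852/3600 = 0.51444... (the page rounds it to 0.514444444444).
    """
    return knots * 1852 / 3600

print(knots_to_mps(218))  # ~112.1489 m/s
```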
## Definition of Knot
The knot is a unit of speed equal to one nautical mile (1.852 km) per hour, approximately 1.151 mph. The ISO Standard symbol for the knot is kn. The same symbol is preferred by the IEEE; kt is also common. The knot is a non-SI unit that is "accepted for use with the SI". Worldwide, the knot is used in meteorology, and in maritime and air navigation—for example, a vessel travelling at 1 knot along a meridian travels approximately one minute of geographic latitude in one hour. Etymologically, the term derives from counting the number of knots in the line that unspooled from the reel of a chip log in a specific time.
## Definition of Meter/Second
Metre per second (American English: meter per second) is an SI derived unit of both speed (scalar) and velocity (vector quantity which specifies both magnitude and a specific direction), defined by distance in metres divided by time in seconds. The SI unit symbols are m·s−1, m s−1 or m/s sometimes (unofficially) abbreviated as "mps". Where metres per second are several orders of magnitude too slow to be convenient, such as in astronomical measurements, velocities may be given in kilometres per second, where 1 km/s is 1000 metres per second, sometimes unofficially abbreviated as "kps".
## Using the Knots to Meters/Second converter you can get answers to questions like the following:
• How many Meters/Second are in 218 Knots?
• 218 Knots is equal to how many Meters/Second?
• How to convert 218 Knots to Meters/Second?
• How many is 218 Knots in Meters/Second?
• What is 218 Knots in Meters/Second?
• How much is 218 Knots in Meters/Second?
• How many m/s are in 218 kt?
• 218 kt is equal to how many m/s?
• How to convert 218 kt to m/s?
• How many is 218 kt in m/s?
• What is 218 kt in m/s?
• How much is 218 kt in m/s?
| 665
| 2,581
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.65625
| 4
|
CC-MAIN-2020-50
|
longest
|
en
| 0.899401
|
https://www.coursehero.com/file/6410621/last-lab/
| 1,513,520,880,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2017-51/segments/1512948596051.82/warc/CC-MAIN-20171217132751-20171217154751-00623.warc.gz
| 740,982,247
| 68,639
|
last lab
# last lab - Date Created December 2nd 2009 Name Tyler Campos...
Date Created: December 2nd, 2009. Name: Tyler Campos. Date Submitted: December 4, 2009. Partner: Conor McKenna. Instructor: Bing Yan.
Properties of Gases
Objectives: To study the dependence of the volume of a gas on temperature and pressure.
Part A: Charles' Law
Temperature (°C) | h = length of airspace (mm)
25 | 50
35 | 45
50 | 50
65 | 55
75 | 55
95 | 60
Part B: Boyle's Law
Syringe Vol (mL) | Sensor Pressure (mmHg)
1 | 657
2 | 352
3 | 144
4 | 1
5 | -102
6 | -177
7 | -243
8 | -294
9 | -335
10 | -366
11 | -391
12 | -417
13 | -437
14 | -453
15 | -470
16 | -484
17 | -497
18 | -507
19 | -517
20 | -526
Sample Calculations: Charles' Law: V=a 2h. Boyle's Law: to calculate the air pressure for each volume in the syringe, Pair = Psensor + Patm. To calculate PV, we multiplied the pressure by the volume.
Results and Conclusions: Part A:
Temperature (°C) | Height of air (cm) | Volume (cubic mm)
25 | 5 | 245.44
35 | 4.5 | 220.89
50 | 5 | 245.44
65 | 5.5 | 269.98
75 | 5.5 | 269.98
95 | 6 | 294.52
Discussion Part A: As we increased the temperature of the contents inside the tube, the column of air inside
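The PV step described in the sample calculations (absolute pressure = sensor reading + atmospheric pressure, then pressure × volume) can be sketched as below. The atmospheric pressure value is an assumption on our part, since the report excerpt does not state it:

```python
P_ATM = 760  # mmHg, assumed standard atmosphere (not given in the report)

def pv_product(volume_ml: float, sensor_mmhg: float, p_atm: float = P_ATM) -> float:
    """Boyle's-law product: absolute air pressure times syringe volume."""
    return (sensor_mmhg + p_atm) * volume_ml

# A few of the recorded (volume, sensor pressure) pairs from Part B.
print(pv_product(1, 657))    # 1417
print(pv_product(2, 352))    # 2224
print(pv_product(10, -366))  # 3940
```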
| 496
| 1,612
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.890625
| 3
|
CC-MAIN-2017-51
|
latest
|
en
| 0.688379
|
https://ncertmcq.com/rs-aggarwal-class-6-solutions-chapter-21-concept-of-perimeter-and-area-ex-21b/
| 1,721,092,233,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-30/segments/1720763514724.0/warc/CC-MAIN-20240715224905-20240716014905-00774.warc.gz
| 369,910,361
| 10,180
|
## RS Aggarwal Class 6 Solutions Chapter 21 Concept of Perimeter and Area Ex 21B
These Solutions are part of RS Aggarwal Solutions Class 6. Here we have given RS Aggarwal Solutions Class 6 Chapter 21 Concept of Perimeter and Area Ex 21B
Other Exercises
Question 1.
Solution:
(i) Radius of the circle (r) = 28 cm
Circumference = 2 πr
= 2 x $$\frac{22}{7}$$ x 28 cm
= 176 cm Ans.
Question 2.
Solution:
(i) Diameter of the circle (d) = 14 cm
Circumference = πd
= $$\frac{22}{7}$$ x 14
= 44 cm
Question 3.
Solution:
Circumference of the circle = 176 cm
Let r be the radius, then
Question 4.
Solution:
Circumference of a wheel = 264 cm
Let d be its diameter, then
πd = 264
=> $$\frac{22}{7}d$$ = 264
Question 5.
Solution:
Diameter of the wheel (d) = 77 cm
Circumference = πd
Question 6.
Solution:
Diameter of the wheel = 70 cm
circumference = πd
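All the worked answers above use the π ≈ 22/7 approximation, so they can be verified exactly with rational arithmetic; the helper names here are ours:

```python
from fractions import Fraction

PI = Fraction(22, 7)  # approximation used throughout the exercise

def circumference_from_radius(r):
    # C = 2πr
    return 2 * PI * r

def circumference_from_diameter(d):
    # C = πd
    return PI * d

print(circumference_from_radius(28))    # 176, as in Question 1(i)
print(circumference_from_diameter(14))  # 44, as in Question 2(i)
print(Fraction(176) / (2 * PI))         # 28, the radius when C = 176 (Question 3)
print(Fraction(264) / PI)               # 84, the diameter when C = 264 (Question 4)
```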
Hope given RS Aggarwal Solutions Class 6 Chapter 21 Concept of Perimeter and Area Ex 21B are helpful to complete your math homework.
If you have any doubts, please comment below. Learn Insta tries to provide online math tutoring for you.
| 360
| 1,111
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.5
| 4
|
CC-MAIN-2024-30
|
latest
|
en
| 0.755636
|
http://gustavogargiulo.com/lesson-6-homework-41-answer-key/
| 1,632,419,581,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-39/segments/1631780057427.71/warc/CC-MAIN-20210923165408-20210923195408-00548.warc.gz
| 32,436,451
| 11,177
|
# Lesson 6 Homework 4.1 Answer Key
How many ounces does her sister weigh. Towards his house 5.
### Make a simple math drawing with labels.
Lesson 6 homework 4.1 answer key. Draw place value disks to represent each number in the place value chart. 4 8 16 2. 12 Homework Helper G1-M2-Lesson 1 Read the math story.
Addition and Subtraction within 1000. 100000 less than six hundred thirty thousand five hundred seventeen is Þ Ü Ù Þ Ú à. Practice and homework lesson 41 answer key.
Eureka Math Grade 4 Module 6 Lesson 7 Problem Set Answer Key. Write a decimal number sentence to identify the total value of the number disks. Label the units in the place value chart.
Nine hundred five thousand two hundred three c. Lesson 3 Homework Practice – Displaying top 8 worksheets found for this concept. 900000 5000 200 3 3.
Represent and Interpret Data. Lesson 1 Homework 4 7 Lesson 1. Ninety thousand five hundred twenty-three c.
Draw disks in the place value chart to show how you got your answer using arrows to show any regrouping. Use place value disks to find the sum or difference. Use or to compare the two numbers.
Lesson 7 Homework Date 4-6 1. Eureka Math Homework Helper 20152016 Grade 4 Module 1. Exploring Measurement with Multiplication Lesson 2 Answer Key 4 7 Exit Ticket 1.
Place Value Rounding and Algorithms for Addition and Subtraction 1 Lesson 1 Answer Key 4 1 Lesson 1 Sprint Side A 1. 40003 3 tens 300 4 hundreds 4 tenths 2 hundredths 3 hundredths 00B Use the place value chart to answer the following questions. 2015-16 Lesson 1.
NYS COMMON CORE MATHEMATICS CURRICULUM 4Lesson 4 Answer Key 1 Lesson 4 Problem Set 1. All papers Evaluate Homework And Practice Module 4 Lesson 1 Answer Key are carried out by competent and proven writers whose credentials and portfolios we will be glad to introduce on your demand. Kristin Siglers Class – Home.
NYS COMMON CORE MATHEMATICS CURRICULUM 4Lesson 8 Answer Key Lesson 8. In the above-given question given that 2 tens 5 tenths 3 hundredths. House fence house 2.
90000 500 20 3 2. 4 quarter turns 7. 1 counter-clockwise or 3 clockwise quarter turns 8.
Division Facts and Strategies. Some of the worksheets for this concept are Harcourt practice grade 2 lesson 22 answers Homework and practice workbook 10 3 Correctionkeya lesson do not edit changes must be Lesson 3 homework 4 7 Name date period lesson 4 homework practice Practice and homework name lesson customary capacity Grade 3. Grade 3 HMH Go Math Answer Keys.
Express the value of the digit in unit form. Eureka Math Answer Key for Grades Pre K 12 Engage NY Math Book Answers for Grades Pre K K 1 2 3 4 5 6 7 8 9 10 11 12. Fence tree barn 2.
Youll save your time well write your thesis in a professional manner. Help for fourth graders with Eureka Math Module 1 Lesson 6. Write the answer in standard form on the line.
Find 1 10 and 100 thousand more and less than a given number. Hundreds The digit 8 The digit The digit. Label the place value chart.
Compare numbers based on meanings of the digits using or to record the comparison. 90523 written in chart b. 905203 written in chart b.
Eureka Math Grade 4 Module 1 Lesson 1 Homework Answer Key. Label the place value charts. Fill in the blanks to make the following equations true.
Solve word problems with three addends two of which make ten. 41 G4-1-Lesson 6 1. Circle 10 and solve.
Go Math Answer Key for Grade 3 4 5 6 7 and 8. Multiplication Facts and Strategies. Picture shows a 270 turn.
41 Homework G4-M1-Lesson 5 1. Eureka Math Lesson 4 Homework 21 Answer Key. Write a decimal number sentence to identify the total value of the place value disks.
Results related to Cc3 Lesson 41 1 Answer Key Practice And Homework 41 Answers – 122020 41 Homework Practice Problems Question 1 015 out of 015 points Decide whether or not the ordered pair is a solution of the system.
| 1,240
| 5,028
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.984375
| 4
|
CC-MAIN-2021-39
|
longest
|
en
| 0.774504
|
https://lists.boost.org/Archives/boost/2005/10/95274.php
| 1,620,406,275,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-21/segments/1620243988796.88/warc/CC-MAIN-20210507150814-20210507180814-00622.warc.gz
| 390,353,137
| 4,301
|
# Boost :
From: Matt Calabrese (rivorus_at_[hidden])
Date: 2005-10-11 19:54:25
On 10/11/05, Deane Yang <deane_yang_at_[hidden]> wrote:
>
> What I'm more interested in learning is how you handle "composite
> quantities", which are obtained by multiplying and dividing existing
> units (like "meters/second"), as well as raising to a rational power
> (like the standard unit of volatility in finance, "1/square_root(years)".
>
>
Rational powers are handled with power functions and metafunctions. Regarding
"volatility in finance": up until now, I have seen absolutely no
cases where non-derived unit classifications raised to non-integer powers
make sense, and I have even talked about such situations with mathematicians.
Looking back to the archives, I see people talking about fractional-powered
base units being possible and speak of examples from other threads, but I
can't seem to find such examples. An exact link would be very helpful. Right
now I support fractional powers, but not when the operation yields
fractional-powered base units. For instance, I allow the expression
power< 1, 2 >( your_meters_quantity * your_meters_quantity )
// where power< 1,2 > denotes a power of 1/2
However, I have chosen to disallow:
power< 1, 2 >( your_meters )
since it does not seem to ever make sense -- for any base classification
type, not just length. In an attempt to rationalize why this was the case, I
noticed that a base classification raised to a power could be looked at as a
hyper-volume in N-dimensional space, where N is the value of the exponent.
Continuing with "length" as an example, your_meters^2 represents a
hyper-volume in 2 dimensional space (area), and your_meters^3 represents a
hyper-volume in 3 dimensional space (volume), and your_meters^-3 could be
looked at as units per volume, etc. This model makes sense for all integer
powers, yet not for rational powers for base units, as it would imply a
concept of fractions of a dimension, which intuitively I do not believe
exist, though I am admittedly not a mathematician and my model could be too
specific.
Keep in mind that rational powered derived-classifications are still
perfectly fine, just so long as the resultant unit type does not have
fractional powered base units in its make-up. Considering you apparently
have an example where fractional-powered years is used (years being a base
unit of time), I suppose my logic could be flawed, though I haven't heard of
your example and googling around doesn't appear to be helping either. If you
can, would you link to information regarding such fractional-powered base
classifications? It's easy to go back and allow them in my library, as my
restriction is mostly just superficial, but I won't do so until I see a
place in practice where such operations actually make sense.
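The restriction Matt describes — allow a rational power only when every base-unit exponent stays integral, so power<1,2>(m²) is fine but power<1,2>(m) is rejected — is easy to model by tracking exponents as rationals. This is our own Python sketch of the rule, not Boost code:

```python
from fractions import Fraction

def raise_dimension(exponents: dict, num: int, den: int) -> dict:
    """Raise a dimension (map from base unit to integer exponent)
    to the rational power num/den.

    Mirrors the rule in the post: the operation is allowed only if
    every resulting base-unit exponent is still an integer.
    """
    result = {}
    for base, exp in exponents.items():
        new_exp = Fraction(exp * num, den)
        if new_exp.denominator != 1:
            raise ValueError(
                f"power {num}/{den} would give {base}^{new_exp}: "
                "fractional-powered base units are disallowed")
        result[base] = int(new_exp)
    return result

area = {"length": 2}
print(raise_dimension(area, 1, 2))  # {'length': 1}: sqrt(m^2) = m

try:
    raise_dimension({"length": 1}, 1, 2)  # sqrt(m) -> rejected
except ValueError as e:
    print(e)
```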
| 661
| 2,819
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.890625
| 3
|
CC-MAIN-2021-21
|
latest
|
en
| 0.935409
|
https://www.transtutors.com/questions/use-the-depth-first-search-dfs-algorithm-starting-at-vertex-1-to-perform-topological-2773741.htm
| 1,606,845,378,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2020-50/segments/1606141681209.60/warc/CC-MAIN-20201201170219-20201201200219-00076.warc.gz
| 891,127,125
| 15,302
|
Use the depth first search (dfs) algorithm starting at vertex 1 to perform topological sorting of... 1 answer below »
Use the depth first search (dfs) algorithm starting at vertex 1 to perform topological sorting of the directed acyclic graph shown in Figure 1. Explain each step clearly by drawing the dfs trees generated and the output array at each step. Write an algorithm that finds the sum of the in-degrees of all the vertices in a directed graph. Assume that the directed graph is represented by an adjacency list. What is the time complexity of your algorithm in part (b) in terms of the number of vertices and edges? Justify your answer.
malla v
& storing of graph in Answer! hogical representation of the given aeljourney est representation 9 We apply Dis on the above gooph starting from verden af ODES () sus/F64) DFS (6) S 1 Stock Bo3f12) W DFS (2) DIY (3 DFS (4) DFS (5) 5089/EU d 6 stack s(9) Flo content of stack Topological order! - We popis 2,4,5,3, 4,6 Hrite an algorithm to find...
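The two algorithmic pieces the question asks for can be sketched as below: a DFS-based topological sort (vertices are output in reverse order of DFS finishing times) and a sum of in-degrees over an adjacency list, which touches each vertex and each edge once, hence O(V + E). The example graph is illustrative, since Figure 1 is not reproduced here:

```python
def topological_sort(adj: dict) -> list:
    """DFS topological sort of a DAG given as {vertex: [neighbours]}.

    Each vertex is appended when its DFS call finishes; reversing
    that order gives a valid topological ordering.
    """
    visited, order = set(), []

    def dfs(u):
        visited.add(u)
        for v in adj.get(u, []):
            if v not in visited:
                dfs(v)
        order.append(u)

    for u in adj:
        if u not in visited:
            dfs(u)
    return order[::-1]

def sum_in_degrees(adj: dict) -> int:
    """Sum of in-degrees = total number of edges: one pass over the
    adjacency list, so O(V + E) time."""
    return sum(len(neighbours) for neighbours in adj.values())

dag = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(topological_sort(dag))  # a valid order, e.g. [1, 3, 2, 4]
print(sum_in_degrees(dag))    # 4 edges
```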
| 296
| 1,198
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.6875
| 3
|
CC-MAIN-2020-50
|
latest
|
en
| 0.867695
|
https://www.apt-initiatives.com/products/apt4maths-set-of-10-powerpoint-presentations-on-symmetry-transformations-and-vectors-for-gcse-and-key-stage-3-mathematics/
| 1,721,212,354,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-30/segments/1720763514759.39/warc/CC-MAIN-20240717090242-20240717120242-00849.warc.gz
| 565,883,442
| 18,632
|
APT
Initiatives Ltd
Est 1999
# apt4Maths: Set of 10 PowerPoint Presentations (Relating to Geometry and Measures) on Symmetry Transformations and Vectors for GCSE (and Key Stage 3) Mathematics
Product Code: PPGKS3M-4GM-STV
From £16.50 ex VAT
If you are a school/college and would like to place an order and be INVOICED later, please email sales@apt-initiatives.com.
This set of 10 PowerPoint Presentations, written by a highly experienced teacher (of 25+ years), senior examiner and reviser for Maths and Stats examinations, are designed for use by:
• any teacher – not necessarily a maths specialist – as part of their own delivery of lessons.
• students working independently.
They can be used by:
• cover teachers.
• students who are unable to attend their lesson in person.
Each PowerPoint Presentation includes:
• Lesson objectives
• Step-by-step explanations of the subject matter
• Examples to aid understanding
• Questions to check understanding
• Answers to questions, with explanations
• Suggestions regarding which topic(s) should be moved on to next.
These PowerPoint Presentations are one of several sets of PowerPoint Presentations, which essentially relate to the ‘Geometry and Measures’ section of the Maths specifications. These other sets concern:
• Measures, Perimeter, Area & Volume
• Symmetry, Transformations & Vectors
#### Product Information
This set of 10 PowerPoint Presentations (149 slides, excluding Title Pages) covers the following topics relating to ‘Symmetry, Transformations and Vectors’:
• 01 Line Symmetry (17 slides): Reviews what is meant by ‘symmetry’; Explains how to draw reflections and find mirror lines and to determine how many lines of symmetry something has.
• 02 Rotational Symmetry (15 slides): Explains how to find the order of rotational symmetry and how to draw rotationally symmetric shapes.
• 03 3D Symmetry (9 slides): Explains how to find planes of symmetry for 3D Shapes.
• 04 Transformations – Reflections (13 slides): Reviews what is meant by ‘reflect’; Explains how to draw reflections on a grid in a given line, and how to find the line of reflection on a grid.
• 05 Transformations – Rotations (16 slides): Reviews what is meant by ‘rotate’; Explains how to draw rotations on a grid using a centre and angle of rotation, and how to find the angle and the centre of rotation.
• 06 Transformations – Translations (13 slides): Reviews what is meant by ‘translate’; Explains how to draw translations on a grid given a column vector, and how to find the column vector of a translation.
• 07 Transformations – Enlargements (18 slides): Reviews what is meant by ‘enlarge’; Explains how to draw enlargements on a grid given a centre and factor of enlargement, and how to find the centre and the factor of enlargement.
• 08 Similarity (17 slides): Reviews what is meant by ‘similar’; Explains how to find missing lengths on similar shapes, as well as how to find missing areas or volumes of similar 2D or 3D shapes.
• 09 Proofs of Congruency & Similarity (12 slides): Explains what you need to do to prove that two shapes are congruent or that two shapes are similar.
• 10 Vectors (19 slides): Explains what a vector is, how vectors can be presented, and how to calculate with vectors.
apt4Maths PowerPoint on Symmetry Transformations and Vectors - Proofs of Congruency and Similarity
## Key Information
The purchase of this resource comes with a licence to make the resource available in digital and / or in print form (including photocopying) to the staff and students attending the purchasing institution, ie the individual school / college on a single site.
The resource may be distributed via a secure virtual learning environment. It must not be made available on any public or insecure website or other platform.
The resource must not be distributed to other institutions that are members of the same academy chain or similar organisation; each individual institution must purchase their own copy of the resource direct from APT Initiatives Ltd.
The resource (or any part of the resource) must not be distributed to any other individual or organisation in any form, or by any means. Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages.
www.colloquium-journal.org
# Bad Debts Percentage Of Credit Sales Method
In Exhibit 1, the aging schedule shows that the older the receivable, the less likely the company is to collect it. The first equation multiplies 365 days by your accounts receivable balance divided by total net sales. Your average A/R collection period is an important key performance metric. It’s smart to know how to calculate your collection period, understand what it means, and how to assess the data so you can improve accounts receivable efficiency.
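The first equation above can be sketched directly; the receivable and sales figures below are illustrative, not from any exhibit in the text:

```python
# The average collection period (days sales outstanding) formula from
# the paragraph above: 365 days * accounts receivable / total net sales.
# The balances are illustrative figures.

def days_sales_outstanding(accounts_receivable, net_credit_sales, days=365):
    return days * accounts_receivable / net_credit_sales

dso = days_sales_outstanding(accounts_receivable=25_000, net_credit_sales=100_000)
```

A lower result means customers are, on average, paying you faster.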
To understand how this ratio works, imagine the hypothetical company H.F. It sells to supermarkets and convenience stores across the country, and it offers its customers 30-day terms. That means the customers have 30 days to pay for the beverages they’ve ordered. Further, these goods must be returned within a few days immediately after they are sold. In this article, we are going to discuss what is net sales, how to calculate net sales and the net sales formula. Marquis Codjia is a New York-based freelance writer, investor and banker.
## What Affects Sale Price?
The accounts receivable turnover ratio measures a company’s effectiveness in collecting its receivables or money owed by clients. Obviously, the use of cash versus credit sales and the duration of the latter depend on the nature of a company’s business. With consumer goods and services, the credit card has turned most retailers’ sales into cash sales. However, outside the consumer field, virtually all sales by business involve, at a minimum, some payment terms, and, therefore, credit sales.
For instance, it can mean that you have a high amount of bad debt or uncollected receivables. This is because failing to collect credit sales, or to convert them into cash within a short period of time, will adversely affect the company in at least two ways. The accounts receivable outstanding at the end of December 2015 are 20,000 USD and at the end of December 2016 are 25,000 USD. So far, we have used one uncollectibility rate for all accounts receivable, regardless of their age. However, some companies use a different percentage for each age category of accounts receivable. When accountants decide to use a different rate for each age category of receivables, they prepare an aging schedule. An aging schedule classifies accounts receivable according to how long they have been outstanding and uses a different uncollectibility percentage rate for each age category.
These categories include Net Sales, Cost of Goods Sold, Gross Margin, Selling and Administrative Expenses, and Net Profit. Let’s assume a manufacturing company has a major customer who purchases a significant amount of product every year. When the customer started doing business with the manufacturer, they requested credit terms so they could purchase product on credit and pay for it at a later date. This customer is the only customer with credit terms from the manufacturer; all other customers pay for product at the time of sale. The second step involves determining the amount of sales paid in cash. Likewise, it is important to account for all products returned for whatever reason. Any allowances made to customers or discounts issued should also be accounted for and deducted from gross sales.
• To get your DSO calculation, first find your average A/R for the time period.
• Financial performance measures how well a firm uses assets from operations and generates revenues.
• This ratio is very important for management to assess the collection performance as well as credit sales assessments.
• A well-managed company gets its customers to stick to the schedule.
• If your business’s cash flow could use some attention, don’t despair!
If you notice, there’s usually an ebb and flow of business related to the year. You may want to consider this factor when choosing the times to do your calculations. GoCardless is authorised by the Financial Conduct Authority under the Payment Services Regulations 2017, registration number , for the provision of payment services. Learn more about how you can improve payment processing at your business today. Credit Purchases are calculated by Preparing Total Accounts Payable / Creditors T Account.It is to be noted that Net Credit Purchases Formula may also be used to calculate Net Credit Purchases.
On the income statement, Bad Debt Expense would still be 1% of total net sales, or $5,000. The past experience with the customer and the anticipated credit policy play a role in determining the percentage. Once the percentage is determined, it is multiplied by the total credit sales of the business to determine bad debt expense. For example, for an accounting period, a business reported net credit sales of $50,000. Net Sales refers to your company’s total sales during an accounting period less any allowances, sales returns, and trade discounts. Furthermore, Net Sales are primarily indicated in the income statement of your business.
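Continuing that example: if the business settles on, say, a 1% uncollectible rate (an assumed figure) against its $50,000 of net credit sales, the computation is:

```python
# Percentage-of-credit-sales method from the paragraphs above: bad debt
# expense = rate x net credit sales. The 1% rate is an assumed figure;
# the $50,000 of net credit sales is the example in the text.

def bad_debt_expense(net_credit_sales, uncollectible_rate):
    return net_credit_sales * uncollectible_rate

expense = bad_debt_expense(net_credit_sales=50_000, uncollectible_rate=0.01)
```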
However, a very high ratio may mean that the business is using overly-strict collection policies. The allowance can be set by a variety of different methods, including a pure guess, since it is reconciled at the end of an accounting period. This would give a net credit sales value of $175,000 + $5,000, or $180,000.
## What Are Net Credit Sales? Definition, Meaning, Example
If you started your small business fewer than three years ago, add up the credit sales you generated since its inception. For example, assume your small business generated $10,000, $15,000 and $17,000 in each of the past three years. Add these together to get $42,000 in total credit sales in the past three years. Credit Sales – It refers to sales in which the customer makes payment at a later date.
The secret to accounts receivable management is knowing how to track and measure performance. Add together the amount of credit sales you failed to collect in each of the past three years.
Late payments could be a sign of trouble, both in terms of management style and financial footing. Credit sales are found on the income statement, not the balance sheet. You’ll have to have both the income statement and balance sheet in front of you to calculate this equation. Thus, using the accrual method of accounting you can recognize revenue from sales the moment you send invoices to your customers. You do not have to wait for the cash payment to recognize sales in your books of accounts. A sales return results in an increase in the sales returns and allowances account and a decrease in cash or accounts receivable. In other words, your sales return account gets debited and the cash or accounts receivable account gets credited.
With the cash accounting method, gross sales are only the sales for which you have received payment. If your company uses the accrual accounting method, gross sales include all your cash and credit sales. The receivables turnover ratio formula tells you how quickly a company is able to convert its accounts receivable into cash. To find the receivables turnover ratio, divide the amount of credit sales by the average accounts receivable. The resulting figure will tell you how often the company collects its outstanding payments from its customers.
The amount allowed for trade discounts indicates the disparity between the standard price and the actual price that consumers pay you. Remember, the trade discount allowance reduces your total sales to represent the actual price that your consumers pay. Accumulation of too much net credit sales may lead to additional debt, consequently creating problems. It is, therefore, important to regulate sales made on credit, as a way of curbing the accumulation of bad debts. Consider company ABC, which generated $200,000 worth of gross sales. Later on, the company issued a refund for $20,000 and allowed $10,000 as an allowance for a defective product. Gross profit margin is calculated by subtracting cost of goods sold from total revenue and dividing that number by total revenue.
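The company ABC figures above work through as:

```python
# Working through the company ABC example above: net sales are gross
# sales less the refund and the allowance (no trade discounts here).
gross_sales = 200_000
refunds = 20_000
allowances = 10_000

net_sales = gross_sales - refunds - allowances
```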
## Bad Debts Percentage Of Credit Sales Method Example
In the fiscal year ended December 31, 2017, there were $100,000 gross credit sales and returns of $10,000. Starting and ending accounts receivable for the year were $10,000 and $15,000, respectively. John wants to know how many times his company collects its average accounts receivable over the year.
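John's numbers work through as follows (net credit sales are gross credit sales less returns; average receivables are the mean of the starting and ending balances):

```python
# Completing the worked example above.
gross_credit_sales = 100_000
returns = 10_000
ar_start, ar_end = 10_000, 15_000

net_credit_sales = gross_credit_sales - returns          # 90,000
average_receivables = (ar_start + ar_end) / 2            # 12,500
turnover = net_credit_sales / average_receivables        # times per year
```

So the company turns over its average receivables 7.2 times a year.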
Glossary of terms and definitions for common financial analysis ratios terms. Key performance indicators are quantifiable measures that gauge a company’s performance against a set of targets, objectives, or industry peers. Accounts receivable will show on the balance sheet under Current Assets.
Percentage of sales method is an income statement approach for estimating bad debts expense. Under this method, bad debts expense is calculated as a percentage of credit sales of the period. Let’s say your company had $100,000 in net credit sales for the year, with average accounts receivable of $25,000. To determine your accounts receivable turnover ratio, you would divide the net credit sales, $100,000, by the average accounts receivable, $25,000, and get four. Typically, the average accounts receivable collection period is calculated in days to collect.
## Timely Payments
In modern times, credit sales are the norm and dominate virtually all business-to-business transactions. This ratio is simply another expression of accounts receivable turnover. Ideally, the average collection period should be reduced over time by improving collection efficiency. It’s never a bad thing for a company to know where its sales are coming from, and this includes calculating cash and credit sales.
Accounts receivable turnover is described as a ratio of net credit sales for a period divided by the average accounts receivable for that same period. This ratio gives the business a solid idea of how efficiently it collects on debts owed toward credit it extended, with a higher number showing higher efficiency. Once you deduct sales returns, discounts, and allowances from gross sales, the remaining figure is your net sales. Typically, a firm records gross sales followed by allowances and discounts. It would be much easier to calculate net credit sales by recording cash sales separately. Similarly, sales returns and sales allowances should be recorded separately. These types of sales are similar to net sales reported on the income statement, as they represent a gross amount of sales minus returns, allowances and discounts.
The accounting effect of this would be an increase in the sales returns account and a decrease in the accounts receivable account. Such a discount term means that you offer a 2% discount to your customers, but only if they make payment within 15 days of a 30-day invoice period.
For instance, if 800 dollars of your 1,000 were cash sales, your credit sales would be 200 dollars. For example, if a business had $200,000 in total sales over a period of time and $140,000 of those were credit sales, their percentage of credit sales would be 70 percent.
## Accounts Receivables Turnover Formula, Example, Analysis
The company’s profits before tax constituted 31.26 percent of its net sales. Dividing 365 by the accounts receivable turnover ratio yields the accounts receivable turnover in days, which gives the average number of days it takes customers to pay their debts. The accounts receivable turnover ratio is an efficiency ratio that measures the number of times over a year that a company collects its average accounts receivable. On the other hand, a low accounts receivable turnover ratio suggests that the company’s collection process is poor. This can be due to the company extending credit terms to non-creditworthy customers who are experiencing financial difficulties. Finally, if a company had a large accounts receivable balance, it might be worth considering offering discounts to customers who pay off their accounts in 30 days or less. For example, the company could offer a 2 percent discount if the balance is settled in 20 days.
As mentioned earlier, net sales are nothing but gross sales less sales returns, allowances, and discounts. This figure is important for various stakeholders such as investors and owners. Therefore, the discount would reduce your gross revenue and credit the assets account.
http://www.circletrack.com/enginetech/ctrp_1304_understanding_volumetric_efficiency_and_torque_relation/dry_flow_system.html
Interestingly, since single-plane 4V intake manifolds are in common use, we should re-mention that they are inherently provided with two basic (and of different length) runner designs. While much has been said and explored regarding how to optimize these two sets of runners (camshafts ground for boosting torque at two different engine speeds, exhaust systems designed to complement both runner lengths, different rocker ratios, and so on), the fact remains that exploiting such methods is a viable way to "integrate" these types of manifolds with companion parts.
A similar approach can be taken with respect to matching exhaust systems to a particular engine displacement and intended rpm range. Although we are dealing with a "dry flow" system (as compared to an intake system), differences in working fluid temperatures (air/fuel charges vs. exhaust gas) and piston position when peak flow rates are generated (roughly b.d.c. exhaust cycle and mid-stroke intake cycle), we can apply the same approach used for intake manifolds. If you missed it during earlier discussions, the calculation equation is as follows:
Peak torque rpm = (pipe or passage cross-section area x 88,200) / cylinder volume.
You can certainly perform some algebraic operations on this equation to solve for a required pipe or passage cross section area or cylinder volume (one cylinder).
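For example, a minimal sketch of the relation and its rearrangement (the area and cylinder volume below are made-up figures, with units matching the article's 88,200 constant: square inches and cubic inches):

```python
# Sketch of the article's peak-torque relation and its algebraic
# rearrangement. Units follow the 88,200 constant: area in square
# inches, one-cylinder volume in cubic inches. Example figures assumed.

def peak_torque_rpm(area_sq_in, cyl_volume_cu_in):
    return area_sq_in * 88_200.0 / cyl_volume_cu_in

def area_for_target_rpm(target_rpm, cyl_volume_cu_in):
    return target_rpm * cyl_volume_cu_in / 88_200.0

# One cylinder of a 355 ci V8 is about 44.4 cubic inches:
rpm = peak_torque_rpm(area_sq_in=2.5, cyl_volume_cu_in=44.4)
```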
Again, as with intake manifold runner length, header pipe length changes (longer or shorter) tend to rotate a given torque curve about the peak torque rpm point. Shorter pipes increase torque above the peak while decreasing it below this point, longer pipes increase lower rpm torque and decrease it above the peak, all else being equal.
Before we leave this little math model, there's one more instance where it can be applied. Let's say you have some engine dyno data for a particular engine that includes a full torque vs. rpm data stream and that you've measured the cross-sectional areas of the intake manifold and exhaust header passages (primary pipe only). By comparing the dyno data's peak torque rpm points with what you've computed and averaged for the intake and header system used, you can determine if the engine is "over-ported" or "under-ported." For example, if the actual peak torque rpm point is higher than what you computed, chances are it's an over-ported combination of parts. And, of course, the opposite is true as well.
Moving on to camshafts and armed with a specific range of targeted engine speed, you'll find that most camshaft manufacturers have a wealth of experience and technical information to help in the selection process. Just be realistic in your expectations about the rpm range for which you plan the most on-track use.
Cylinder heads? Not so much. Because intake and exhaust port path lengths are comparatively short (as measured against intake runner and exhaust pipe lengths), cylinder head ports are believed to be important chiefly for making transitional and directional flow changes into and out of the combustion space. There are indications that providing efficient and effective transitional flow into and out of the cylinders is more important than port tuning.
https://www.physics.unlv.edu/~jeffery/astro/orbit/ellipse.html
# Ellipses
An ellipse is a closed geometrical curve of which the circle is a special case. The Cartesian plane formula for a circle is
` x^2 + y^2 = r^2 , `
where r is the radius. The ellipse formula is
` (x/a)^2 + (y/b)^2 = 1 , `
where a and b are, respectively, the semi-major and semi-minor axes (a > b assumed without loss of generality). If a = b, then the ellipse is a circle of radius a. The figure to the right shows an ellipse with its foci and accompanying formulae. If a string is fixed at two points and held taut with a pen, the pen can be used to trace an ellipse, with the two points becoming the foci: the semi-major axis is half the string's length.
For astronomical orbital purposes, it turns out that the physically important distance is from one focus to the curve, and not from the geometric center to the curve.
The eccentricity e of an ellipse (which is defined mathematically on the figure above) is, loosely speaking, a measure of the DEVIATION of the ellipse from circularity. If e = 0, the ellipse is a circle. If e = 4/5, the ellipse is quite elliptical: the semi-minor to semi-major axis ratio is 3/5. If the semi-minor to semi-major axis ratio is 1/10, then e = 0.995 approximately. If e = 1, then the ellipse has flattened into a line segment, if one sends the semi-minor axis b to zero and holds the semi-major axis a constant. (You get a different answer for e = 1 when you allow a and b to go to infinity in just the right way.)
The figure below illustrates how eccentricity affects ellipse shape.
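The worked values quoted above follow the standard relation e = sqrt(1 - (b/a)^2), consistent with the figure's definition; a quick numerical check:

```python
import math

# e = sqrt(1 - (b/a)^2): the standard eccentricity relation, checked
# against the axis ratios quoted in the text (3/5 and 1/10).

def eccentricity(a, b):
    return math.sqrt(1.0 - (b / a) ** 2)

e1 = eccentricity(a=5.0, b=3.0)    # axis ratio 3/5 -> e = 4/5
e2 = eccentricity(a=10.0, b=1.0)   # axis ratio 1/10 -> e ~ 0.995
```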
Beyond the scope of intro astro, there is more at the site Ellipse Arcana.
# Ellipses and Orbits
One of Isaac Newton's (1643--1727) epochal discoveries was that two bodies that can be treated as point masses, isolated from all other bodies in space, will orbit their mutual center of mass (called the barycenter for orbits) in ellipses, with the center of mass at one of the foci of each of the ellipses: the other focus in each ellipse is just an empty point in space---the center of mass can be just an empty point too, of course.
The situation is illustrated in the figure below.
Of course, one means "orbit the center of mass" in a physical sense: i.e., the center of mass defines an inertial frame, which in the modern understanding means the center of mass is in free fall in a sufficiently uniform external gravitational field.
The BODIES ARE ACCELERATED since they are NOT in uniform straight line motion relative to the inertial frame of the center of mass. There is an internal force causing them to move in orbit.
That internal force is, of course, GRAVITY. Elliptical orbits are a consequence of the INVERSE-SQUARE LAW of Newton's law of universal gravitation which was itself another epochal discovery of Newton.
Another example of "orbiting" is the circular motion of a SWIRLED SLING: see the figure below. Here the center of force is the relatively unmoving (unaccelerated) hand, and the swung object is accelerated by the TENSION force of the string into circular motion. If the tension force vanished, the object would fly off in a straight line, if not acted on by gravity. Of course, with a sling the flying off is the whole point.
How is it that the Sun and each planet individually can be approximated to 1st order as a gravitational two-body system? The figure below gives the explication.
# Periapsis and Apoapsis
Part of astro-jargon are special names for the points of closest and farthest approach for 2 bodies in a 2-body system: periapsis and apoapsis. Note that peri means something like around, apo something like off: both fragments are derived from Greek.
Periapsis and apoapsis are the general terms. There are special case ones for special 2-body systems:
1. In general: periapsis and apoapsis.
2. For the Earth: perigee and apogee. Note gee means Earth and is derived from Greek.
3. For the Sun: perihelion and aphelion.
4. For Jupiter: perijove and apojove.
5. For a star: periastron and apastron.
6. etc.
The SPEED OF A BODY in orbit varies. It is faster when nearer to the center of force and slower when farther from the center of force. The highest speed is at periapsis and the lowest at apoapsis.
# Planets in the Solar System
The figure below illustrates orbits with terminology for a planet in the Solar System.
The Table: Solar-System Planets below gives, among other things, planet mean distances (from the Sun), eccentricities, and ecliptic angles. Don't try to memorize these numbers: look at them and think about what they mean.
As the Table: Solar-System Planets shows, the PLANET ORBITS are close to CIRCULAR: i.e., the eccentricities are small. For example, consider the Earth's eccentricity of 0.0167. This means that the Earth is only ever 1.67% farther from the Sun than its mean distance and only 1.67% closer to the Sun than its mean distance. Venus has the smallest eccentricity.
Thinking of the planet orbits as CIRCULAR is a fine first order approximation. But for detailed predictions one must go to ELLIPTICAL ORBITS and even further to PERTURBED ELLIPTICAL ORBITS. Detailed prediction of angular position on the sky has always been one of the goals of astronomy since ancient times. In fact ASTRONOMICAL ACCURACY is a byword.
Alas, ASTROPHYSICAL ACCURACY is a byword too: sympathetically it means order-of-magnitude accuracy; unsympathetically it means "we don't know what we're talking about."
The ECLIPTIC ANGLE is the inclination angle of the plane of the orbit from the ecliptic plane (i.e., the plane of the Earth's orbit). We see that the planets all orbit nearly in the same plane. Pluto and Mercury have the two largest inclinations by far, as illustrated in the figure below. The asteroids' orbits are close to the ecliptic too. However, long-period comet orbits can be at any angle relative to the ecliptic.
http://www.thestudentroom.co.uk/showthread.php?t=4063847
# flow in pipe (fluid mechanics)
1. In this network of pipes, the author attempts to solve it using Q1 = Q2 + Q3, so delta Q = Q1 - (Q2 + Q3). The junction head he gets is 25.2, with delta Q = 3x10^-3 (not shown in the working).
However, when I tried to solve it using Q1 + Q2 = Q3, which means delta Q = Q1 + Q2 - Q3, the answer I get is hj = 28, with delta Q = 0.07.
P.S.: we have to drive delta Q to 0 for the answer, so that water into the junction equals water out of it.
So does the question have two answers depending on the assumed flow directions, or am I wrong?
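A common way to side-step the direction assumption (a minimal sketch with made-up reservoir heads and pipe coefficients, not the numbers from the attached problem): let the sign of each pipe flow follow from the head difference, and bisect on the junction head hj until the continuity error delta Q vanishes.

```python
import math

# Hypothetical three-reservoir junction (assumed heads and coefficients).
# Flow in each pipe is modelled as Q = K * sqrt(|h_reservoir - hj|),
# directed from the higher head to the lower one.
HEADS = [40.0, 30.0, 10.0]    # reservoir surface heads (m), assumed
K = [0.010, 0.008, 0.012]     # pipe conductances, assumed

def net_inflow(hj):
    """Continuity error at the junction: sum of signed pipe flows."""
    total = 0.0
    for h, k in zip(HEADS, K):
        q = k * math.sqrt(abs(h - hj))
        total += q if h > hj else -q   # inflow if reservoir head is higher
    return total

def junction_head(lo=min(HEADS), hi=max(HEADS)):
    """Bisect on the junction head until delta Q vanishes."""
    while hi - lo > 1e-10:
        mid = 0.5 * (lo + hi)
        if net_inflow(mid) > 0.0:      # still net inflow: head must rise
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

hj = junction_head()
```

With this sign convention the net inflow decreases monotonically with hj, so there is a single junction head at which delta Q = 0, and the flow direction in each pipe comes out of the solution rather than being assumed in advance.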
Updated: May 8, 2016
https://mapleprimes.com/questions/148715-Define-A-Function-For-The-Derivative
# Question: define a function for the derivative variable of an ODE
I have an ODE whose derivatives are with respect to time. I want to use the output as a function of time so I can use it in a loop. What should I do?
A:=a*diff(x(t),t,t)+b*diff(x(t),t)+c*x(t);
B:=solve(A,diff(x(t),t,t));
At first: how can I define B as a function of t here?
How can I define initial conditions for this?
For example:
a0 := diff(x(t),t,t) at time 0 (i.e. using x(0)), but I do not know how to define this.
I also cannot use B in a loop over time, t.
for example i want to do :
for t from 1 to 10 do
B
od:
What should I do here?
Please help me, I really need help with this.
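For comparison, here is the same workflow sketched in Python rather than Maple (in Maple itself, `dsolve` with the `numeric` option returns a procedure you can evaluate inside a loop; see the Maple documentation). The idea: convert a*x'' + b*x' + c*x = 0 to a first-order system, integrate it numerically, and then loop over the stored time samples. The coefficients and initial conditions below are made-up examples.

```python
# Pure-Python sketch: turn a*x''(t) + b*x'(t) + c*x(t) = 0 into a
# first-order system, integrate with classical RK4, and keep samples
# that a later loop over t can use.

def solve(a, b, c, x0, v0, t_end, dt=1e-3):
    """Return (ts, xs): time samples and x(t) values."""
    def deriv(x, v):
        return v, -(b * v + c * x) / a    # x' = v, v' = -(b v + c x)/a
    t, x, v = 0.0, x0, v0
    ts, xs = [t], [x]
    while t < t_end:
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        ts.append(t)
        xs.append(x)
    return ts, xs

# x'' + x = 0 with x(0)=1, x'(0)=0 has the exact solution cos(t):
ts, xs = solve(a=1.0, b=0.0, c=1.0, x0=1.0, v0=0.0, t_end=3.141592653589793)
```

The initial conditions x0 and v0 play the role of x(0) and D(x)(0), and the returned lists can be indexed inside any `for t ...` loop.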
| 210
| 713
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.84375
| 3
|
CC-MAIN-2020-24
|
latest
|
en
| 0.895244
|
https://mathoverflow.net/questions/353277/conformal-maps-between-simply-connected-domains-with-piecewise-real-algebraic-bo
# Conformal maps between simply connected domains with piecewise real algebraic boundary
Between polygons in $$\mathbb C\cup\{\infty\}$$ (including the "single side polygons", hemispheres, disks) the Schwarz-Christoffel mappings give arguably explicit conformal maps. For polygons with few angles those are well-known special functions: hypergeometric for triangles to half-planes and elliptic integrals for rectangles. We also have domains with piecewise quadratic boundaries to which we can write explicit transformations. For instance, sectors of disks map to disks via powers (removing the origin to make this conformal) and to half-planes via Möbius transformations.
My question is whether we can write explicit transformations between domains piecewise bounded by higher degree real algebraic curves, like cubic or quartic, and a disk. There are numerical methods but can we use special functions?
This is related to calculating periods in the sense of Kontsevich-Zagier, though my motivation is rather low-level.
EDIT: I may consider as "explicit" integrals of algebraic functions. If you can give formulas in the form of "generalized Schwarz-Christoffel integrals" but for arbitrary polynomials defining piecewise the boundary that would already be satisfying.
• I disagree that the Schwarz-Christoffel formula is "reasonably explicit", except in the case of the triangle and rectangle, and very few other cases. It is even less explicit for polygons whose sides are arcs of circles. In that setting, the case of circular quadrilaterals has been intensively studied. In no way can you call this "explicit". The mapping is a ratio of two solutions of the Heun equation, and not much is known about solutions of this equation. Feb 22, 2020 at 4:42
I disagree that the Schwarz-Christoffel formula is "reasonably explicit", except in the case of the triangle and rectangle, and very few other cases. The reason is that the Schwarz-Christoffel formula for $$n\geq 3$$ contains unknown "accessory" parameters. Determining these parameters requires inverting some rather complicated integrals.
It is even less explicit for polygons whose sides are arcs of circles. For this case, only circular triangles are reasonably well understood. (See Klein's book Vorlesungen über die hypergeometrische Funktion. In English, Carathéodory, Function theory, vol. II.)
The case of circular quadrilaterals has been intensively studied since the second half of the 19th century. In no way can you call the conformal map onto a generic circular quadrilateral "explicit". The mapping is a ratio of two solutions of the Heun equation, and not much is known about solutions of this equation. Some people would call them "special functions", but they are not included in Whittaker and Watson, except for the special case of the Lamé equation. So one can say that there is no explicit answer in any sense even for a generic circular quadrilateral. Some special quadrilaterals have been the subject of much research.
The literature about them is enormous, it goes under the names "Heun equation", (a special case is the Lame equation), "accessory parameters", "Painleve VI", and there is a lot of "physics" literature, old and modern, with keywords like "conformal blocks", etc.
Of course there are some very special cases when regions are bounded by other algebraic curves, like ellipse, parabola, some cycloids or lemniscates. But these are very special cases.
A reasonable account of what one can do explicitly is: Werner von Koppenfels, and Friedmann Stallmann, Praxis der konformen Abbildung, Berlin, Springer-Verlag. (1959). It is somewhat out of date but not much.
You can glance at my own recent papers (all available on the arxiv) dedicated to some special cases of circular quadrilaterals, and even one on pentagons, arXiv:1611.01356.
• Thanks. Can you give me good references for mappings between interior of quartics, similar to the lemniscate, and a halfplane? At least a good reference for lemniscates?
– plm
Feb 22, 2020 at 16:51
• Ok, I've had a look at your paper. I figured out that disks are mapped to lemniscate-type quartics via $z\mapsto z^2$. I still have to think about how far I can adapt Schwarz-Christoffel mappings. I guess I have enough intuition for my needs. Don't hesitate to add any comment. Thank you.
– plm
Feb 23, 2020 at 0:14
• Another type of domains with algebraic boundary for which a conformal map is known, sort of explicitly is called "quadrature domains". Feb 23, 2020 at 1:40
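The $z\mapsto z^2$ remark in the comments can be checked numerically: points $w$ with $|w^2-1|=1$ form a Bernoulli-type lemniscate, and squaring carries them onto the circle $|z-1|=1$, so a branch of the square root maps that circle to one lobe of the lemniscate. A quick Python sanity check of this (purely illustrative, not from the thread):

```python
import cmath, math

# Sample the circle |z - 1| = 1; the principal square roots of these points
# should land on the lemniscate |w^2 - 1| = 1, since squaring undoes sqrt.
circle = [1 + cmath.exp(2j * math.pi * k / 360) for k in range(360)]
lobe = [cmath.sqrt(z) for z in circle]
deviation = max(abs(abs(w * w - 1) - 1) for w in lobe)  # ~0 up to rounding
```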
https://communities.sas.com/t5/General-SAS-Programming/Compute-Variable/td-p/149942?nobounce
## Compute Variable = .
Frequent Contributor
Posts: 76
# Compute Variable = .
I have
A=.
B = 10
C=5
I want to obtain
D= A + (B*C)
but I get D=.
it should be D=50
how do I overcome this issue?
Thanks
Contributor
Posts: 62
## Re: Compute Variable = .
Please use the SUM function, as the '+' operator will return a missing value whenever an operand is missing. Hope it helps.
Thank You
Occasional Contributor
Posts: 17
## Re: Compute Variable = .
D=sum(A, B*C);
or use if-else logic to customize the missing cases. e.g.
if missing(A)=1 then A1=0; /* not necessarily so */
else A1=A;
if missing(B)=1 then B1=0; /* not necessarily so */
else B1=B;
if missing(C)=1 then C1=1; /* not necessarily so */
else C1=C;
D=A1+B1*C1;
Super Contributor
Posts: 276
data _null_;
a=.;
b=5;
c=10;
d=Sum(a,B*c);
Put d;
run;
Thanks,
Sanjeev.K
Contributor
Posts: 62
## Re: Compute Variable = .
This will work for a missing value of A, but you cannot have missing values in the multiplication itself. Hope that answers the question.
Thank You
Posts: 2,655
## Re: Compute Variable = .
Apple,
There have been many good examples on how to calculate the answer that you want. I'd like to address this in a different way (that might make sense if you consider only + and * as operators). With a missing value for A in your example, you say the answer should be 50. My question is "How do you know"? A is missing and not necessarily equal to zero. It might be anything--you just don't have information. Consequently, any function that includes A as an operand is lacking in information. SAS thus rightly treats the result as missing. If you know that A is truly equal to zero when missing, then I would write something like bill0101's if-then-else logic.
Steve Denham
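For readers outside SAS, the '+' versus SUM distinction discussed above can be sketched in Python, with NaN standing in for a SAS missing value. This is an illustration of the semantics only, not SAS itself:

```python
import math

a, b, c = math.nan, 10, 5       # A is missing, as in the question

# Like SAS '+': any missing operand makes the whole result missing.
plain = a + b * c               # NaN

# Like SAS SUM(): missing arguments are skipped, treating A as absent.
skipping = sum(v for v in (a, b * c) if not math.isnan(v))  # 50
```

As Steve's reply points out, the second result is only "right" if a missing A really does mean zero.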
https://www.r-bloggers.com/correlation-resources-spss-r-causality-interpretation-and-apa-style-reporting/
# Correlation Resources: SPSS, R, Causality, Interpretation, and APA Style Reporting
July 17, 2011
By
[This article was first published on Jeromy Anglim's Blog: Psychology and Statistics, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)
This post provides links to a range of resources related to the use and interpretation of correlations. I wanted to provide a page with links to a number of additional resources that would be useful both for those of my students who might be keen to learn more and for anyone else who might be interested. Specifically, this post provides links to: (a) introductory book-style chapters on correlation, (b) resources related to assorted issues in correlation (i.e., discussion of causal inference, correlation with various variable types, range restriction, statistical power, correlation interpretation, and significance testing), (c) tutorials on computing correlations using SPSS and R, and (d) tips for reporting correlations in APA Style.
### Introductions to correlation
The following provide general textbook style overviews of correlation:
### Assorted Issues
#### Correlation and Causation
Knowing how to reason about causality in the behavioural and social sciences is a really important skill.
#### Types of variables
The prototypical correlation example is based on two continuous, normally distributed variables. However, in practice there are many other types of variables that you might wish to correlate. The following provide pages provide links to suggestions for how to analyse some other common scenarios:
#### Statistical Power
Statistical power within the context of correlation is the probability of obtaining a statistically significant correlation in a study given that a true correlation exists.
• This earlier post provides (a) some simple rules of thumb for power analysis for correlations, (b) how to calculate statistical power using free software called G-Power, and (c) links to additional reading on the important topic of statistical power.
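As a rough illustration of such a power calculation (my own sketch, not from the post), the Fisher z approximation gives power ≈ Phi(atanh(r) * sqrt(n - 3) - z_crit) for a two-sided test of zero correlation:

```python
import math
from statistics import NormalDist

def correlation_power(r, n, alpha=0.05):
    """Approximate power for a two-sided test of rho = 0, Fisher z method."""
    nd = NormalDist()
    z_r = math.atanh(r)                      # Fisher z-transform of r
    z_crit = nd.inv_cdf(1 - alpha / 2)       # critical value, two-sided
    return nd.cdf(z_r * math.sqrt(n - 3) - z_crit)

# e.g. r = .30 with n = 84 gives power close to the conventional .80
```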
#### Interpretation
When I first learnt about the correlation coefficient, I found it challenging to truly grok what a particular value meant. Learning the standard interpretation was easy. The challenging part was understanding the practical and theoretical implications for a correlation of a given size.
• The following are some of the standard interpretations of a correlation:
• Pearson’s correlation is an index of the direction and strength of linear association between two variables.
• The square of the correlation between X and Y is the percentage of variance shared between X and Y (e.g., if `r = .50`, then the two variables share `.50 * .50 = 25%` of variance).
• If X and Y were standardised (i.e., made so that the mean of both variables was zero and the standard deviation was one) then, the correlation would be the same as the regression coefficient of X predicting Y or Y predicting X. Thus, for example, if `r = .25` you could say that “a value one standard deviation greater on X predicts a .25 standard deviation greater value on Y”.
• Strategies for building an intuition of what a correlation means:
• Play with the Regression by Eye simulation. The simulation generates a scatterplot, and you are asked to indicate which of a set of correlations corresponds to the scatterplot. It helps to build a mapping between the graphical intuitiveness of a scatterplot and the numeric summary of the linear association in the scatterplot (i.e., the correlation coefficient).
• Memorise some of the rules of thumbs for describing correlation effect sizes (see this discussion by Andy Field), but don’t take the rules of thumb too seriously.
• Try to build up a frame of reference for correlations in different contexts by reading results sections. Meta analyses can also be particularly useful in this regard.
• Read the article ‘Meyer, G. J., et al (2001). Psychological Testing and Psychological Assessment: A Review of Evidence and Issues. American Psychologist, 56(2), 128-165.’ (PDF) which provides large tables of meta-analytic correlations for a wide range of medical and psychological domains sorted by the size of the correlation. Studying these tables can help build an intuition and a context for interpretation of correlations.
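The "standardised slope equals r" interpretation listed above is easy to verify numerically. A small Python check on simulated data (illustrative, not from the post):

```python
import math, random

def pearson(xs, ys):
    """Pearson's r from its definition: covariance over the two spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(2000)]
ys = [0.5 * x + random.gauss(0, 1) for x in xs]

r = pearson(xs, ys)

# Standardise both variables; the OLS slope of zy on zx then equals r,
# and r**2 is the proportion of variance the two variables share.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / len(xs))
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / len(ys))
zx = [(x - mx) / sx for x in xs]
zy = [(y - my) / sy for y in ys]
slope = sum(a * b for a, b in zip(zx, zy)) / sum(a * a for a in zx)
```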
#### Graphical approaches
As with most statistical techniques, there are various ways of representing the data. The correlation coefficient provides a very brief summary of the association between two variables. However, graphical representations of association are much richer.
The following are some general heuristics that I find useful when plotting data that might also be represented as a correlation:
• Use scatterplots to explore features of the association (e.g., presence of outliers, linearity, distributional properties, spread of data around any trend line, etc.);
• If one of the variables is positively skewed, consider plotting the corresponding axis on a log scale;
• If there are a lot of data points (e.g., `n > 1000`), adopt a different strategy such as using some form of partial transparency (e.g., see use of the alpha property in ggplot2), or sampling the data;
• If one of the variables takes on a limited number of discrete categories, consider using a jitter or a sunflower plot;
• If there are three or more variables, consider using a scatterplot matrix;
• Fitting some form of trend line is often useful;
• Adjust the size of the plotting character to the sample size (for bigger n, use a smaller plotting character).
#### Significance tests on correlations
There are a wide range of possible significance tests that can be performed on correlations. The following links provide some suggestions and links for different scenarios.
### Statistical Software
Calculating a correlation coefficient and its associated statistical significance is a standard task that almost any statistical package can perform. Many psychology students are taught to use SPSS. It is a proprietary (i.e., you can’t run it at home without a paid licence) data analysis system with a strong empahsis on a GUI and making it easy to perform various standardised analyses common in the social sciences.
My preferred tool for performing data analysis is R. It is open source (thus, you can run it at home for free) and is often described as the lingua franca of statistics. It generally requires a more sophisticated understanding of statistics and computing to use effectively. Thus, for the interested psychology student or researcher I have this introduction to R for researchers in psychology.
Below I list resources for performing correlation analysis in SPSS and R.
#### R
R makes it easy to perform correlations on datasets. Specifically, the following links provide example syntax:
### Reporting Correlations in APA Style
http://flint.cs.yale.edu/flint/publications/ddifc-coq.html
# Library ddifc-coq
Require Import Omega.
Require Import Arith.
Require Import ZArith.
Require Import List.
Require Import Classical.
Require Import ProofIrrelevance.
Require Import FunctionalExtensionality.
Require Import Coq.Bool.Bool.
Ltac inv H := inversion H; try subst; try clear H.
Ltac dup H := generalize H; intro.
Ltac intuit := try solve [intuition].
Ltac decomp H := decompose [and or] H; try clear H.
Notation "[ ]" := nil (at level 1).
Notation "[ a ; .. ; b ]" := (a :: .. (b :: []) ..) (at level 1).
Proposition app_assoc {A} : forall l1 l2 l3 : list A, (l1 ++ l2) ++ l3 = l1 ++ l2 ++ l3.
Proof.
induction l1; simpl; intros; auto.
rewrite IHl1; auto.
Qed.
Proposition in_app_iff {A} : forall (l1 l2 : list A) x, In x (l1++l2) <-> In x l1 \/ In x l2.
Proof.
intros; split; intros.
apply in_app_or; auto.
apply in_or_app; auto.
Qed.
Proposition app_nil_r {A} : forall l : list A, l ++ [] = l.
Proof.
induction l; auto.
simpl; rewrite IHl; auto.
Qed.
Proposition list_finite {A} : forall (l : list A) x, l <> x :: l.
Proof.
induction l; intros; intro.
inv H.
inv H.
Qed.
Proposition list_finite' {A} : forall l l' : list A, l' <> [] -> l <> l' ++ l.
Proof.
induction l; intros; intro.
rewrite app_nil_r in H0; subst.
destruct l'.
inv H0.
subst a.
intro.
destruct l'; inv H0.
rewrite app_assoc; auto.
Qed.
Proposition app_cancel_l {A} : forall l l1 l2 : list A, l ++ l1 = l ++ l2 -> l1 = l2.
Proof.
induction l; intros; auto.
inv H; intuit.
Qed.
Proposition app_cancel_r_help {A} : forall (l1 l2 : list A) x, l1 ++ [x] = l2 ++ [x] -> l1 = l2.
Proof.
induction l1; intros.
destruct l2; auto; inv H.
destruct l2; inv H2.
destruct l2; inv H.
destruct l1; inv H2.
apply IHl1 in H2; subst; auto.
Qed.
Proposition app_cancel_r {A} : forall l l1 l2 : list A, l1 ++ l = l2 ++ l -> l1 = l2.
Proof.
induction l; intros.
repeat rewrite app_nil_r in H; auto.
change (l1++([a]++l) = l2++([a]++l)) in H.
repeat rewrite <- app_assoc in H; apply IHl in H.
apply app_cancel_r_help in H; auto.
Qed.
Definition var := nat.
Definition lvar1 := nat.
Definition lvar2 := nat.
Definition fname := nat.
Open Scope Z_scope.
Definition nat_of_Z (v : Z) (pf : v >= 0) : nat.
intros.
destruct v.
apply O.
apply (nat_of_P p).
assert (~ Zneg p >= 0).
clear pf; induction p.
intro H; contradiction H; simpl; auto.
Defined.
Proposition Zneg_dec : forall v : Z, {v >= 0} + {v < 0}.
Proof.
intros.
destruct v.
left; omega.
left.
induction p; auto.
omega.
right.
induction p; auto.
omega.
Qed.
Record poset {A : Set} : Type :=
{leq : A -> A -> bool;
leq_refl : forall x : A, leq x x = true;
leq_antisym : forall x y : A, leq x y = true -> leq y x = true -> x = y;
leq_trans : forall x y z : A, leq x y = true -> leq y z = true -> leq x z = true}.
Record join_semi {A : Set} : Type :=
{po : poset (A:=A);
lub : A -> A -> A;
lub_l : forall x y : A, leq po x (lub x y) = true;
lub_r : forall x y : A, leq po y (lub x y) = true;
lub_least : forall x y z : A, leq po x z = true -> leq po y z = true -> leq po (lub x y) z = true}.
Record join_semi' {A : Set} (js : join_semi (A:=A)) : Type :=
{lub_idem : forall x : A, lub js x x = x;
lub_comm : forall x y : A, lub js x y = lub js y x;
lub_assoc : forall x y z : A, lub js (lub js x y) z = lub js x (lub js y z);
lub_leq : forall x y z : A, leq (po js) (lub js x y) z = true <-> leq (po js) x z = true /\ leq (po js) y z = true}.
Definition join_semi_extend {A : Set} (js : join_semi (A:=A)) : join_semi' (A:=A) js.
intros; split; intros.
apply (leq_antisym (po js)).
apply lub_least; apply leq_refl.
apply lub_l.
apply (leq_antisym (po js)); solve [apply lub_least; [apply lub_r | apply lub_l]].
apply (leq_antisym (po js)).
apply lub_least.
apply lub_least.
apply lub_l.
apply (leq_trans _ _ (lub js y z) _); [apply lub_l | apply lub_r].
apply (leq_trans _ _ (lub js y z) _); [apply lub_r | apply lub_r].
apply lub_least.
apply (leq_trans _ _ (lub js x y) _); [apply lub_l | apply lub_l].
apply lub_least.
apply (leq_trans _ _ (lub js x y) _); [apply lub_r | apply lub_l].
apply lub_r.
split; intros; try split.
apply (leq_trans _ _ (lub js x y) _); [apply lub_l | auto].
apply (leq_trans _ _ (lub js x y) _); [apply lub_r | auto].
apply lub_least; intuit.
Qed.
Record bounded_join_semi {A : Set} : Type :=
{js : join_semi (A:=A);
bot : A;
leq_bot : forall x : A, leq (po js) bot x = true}.
Record bounded_join_semi' {A : Set} (bjs : bounded_join_semi (A:=A)) : Type :=
{bot_unit : forall x : A, lub (js bjs) x (bot bjs) = x}.
Definition bounded_join_semi_extend {A : Set} (bjs : bounded_join_semi (A:=A)) : bounded_join_semi' (A:=A) bjs.
intros; split; intros.
apply (leq_antisym (po (js bjs))).
apply lub_least; [apply leq_refl | apply leq_bot].
apply lub_l.
Qed.
Coercion po : join_semi >-> poset.
Coercion js : bounded_join_semi >-> join_semi.
Parameter lbl : Set.
Parameter lbl_lattice : bounded_join_semi (A:=lbl).
Definition lbl_lattice' := join_semi_extend lbl_lattice.
Definition lbl_lattice'' := bounded_join_semi_extend lbl_lattice.
Definition bottom := bot lbl_lattice.
Definition llub := lub lbl_lattice.
Definition lleq := leq lbl_lattice.
Ltac llub_simpl H := apply (lub_leq lbl_lattice lbl_lattice') in H; destruct H.
Inductive glbl := Lo | Hi.
Definition grp (L l : lbl) := if lleq l L then Lo else Hi.
Definition glbl_poset : poset (A:=glbl).
apply Build_poset with (leq := fun l1 l2 : glbl => if l1 then true else (if l2 then false else true)); intros.
destruct x; auto.
destruct x; destruct y; simpl in *; auto; inv H; inv H0.
destruct x; destruct y; destruct z; simpl in *; auto.
Defined.
Definition glbl_join_semi : join_semi (A:=glbl).
apply Build_join_semi with (po := glbl_poset) (lub := fun l1 l2 : glbl => if l1 then l2 else Hi); intros.
destruct x; destruct y; auto.
destruct x; destruct y; auto.
destruct x; destruct y; auto.
Defined.
Definition glbl_lattice : bounded_join_semi (A:=glbl).
apply Build_bounded_join_semi with (js := glbl_join_semi) (bot := Lo); auto.
Defined.
Definition glbl_lattice' := join_semi_extend glbl_lattice.
Definition glbl_lattice'' := bounded_join_semi_extend glbl_lattice.
Definition gleq := leq glbl_lattice.
Definition glub := lub glbl_lattice.
Delimit Scope glbl_scope with glbl.
Bind Scope glbl_scope with glbl.
Delimit Scope lbl_scope with lbl.
Bind Scope lbl_scope with lbl.
Notation "x <<= y" := (gleq x y = true) (at level 70) : glbl_scope.
Notation "x \_/ y" := (glub x y) (at level 50) : glbl_scope.
Notation "x <<= y" := (lleq x y = true) (at level 70) : lbl_scope.
Notation "x \_/ y" := (llub x y) (at level 50) : lbl_scope.
Open Scope lbl_scope.
Proposition glub_homo : forall l l1 l2, grp l (llub l1 l2) = glub (grp l l1) (grp l l2).
Proof.
intros; case_eq (lleq l1 l); intros.
case_eq (lleq l2 l); intros; unfold grp; rewrite H; rewrite H0; simpl.
assert (l1 \_/ l2 <<= l).
rewrite (lub_leq lbl_lattice lbl_lattice'); split; auto.
rewrite H1; auto.
assert (~ l1 \_/ l2 <<= l).
rewrite (lub_leq lbl_lattice lbl_lattice'); intro.
destruct H1.
unfold lleq in H0; rewrite H2 in H0; inv H0.
destruct (lleq (l1 \_/ l2)%lbl l); auto.
unfold grp; rewrite H; simpl.
assert (~ l1 \_/ l2 <<= l).
rewrite (lub_leq lbl_lattice lbl_lattice'); intro.
destruct H0.
unfold lleq in H; rewrite H0 in H; inv H.
destruct (lleq (l1 \_/ l2) l); auto.
Qed.
Close Scope lbl_scope.
Proposition glub_leq : forall l l1 l2, glub (grp l l1) (grp l l2) = Lo <-> grp l l1 = Lo /\ grp l l2 = Lo.
Proof.
intros; unfold grp; destruct (lleq l1 l); destruct (lleq l2 l); simpl; intuit.
Qed.
Proposition glub_lo : forall l1 l2, glub l1 l2 = Lo <-> l1 = Lo /\ l2 = Lo.
Proof.
destruct l1; destruct l2; intuit.
Qed.
Ltac glub_simpl H := apply glub_lo in H; destruct H.
Ltac glub_simpl_grp H := try (rewrite glub_homo in H); apply glub_leq in H; destruct H.
Inductive binop := Plus | Minus | Mult | Div | Mod.
Inductive bbinop := And | Or | Impl.
Inductive exp :=
| Var : var -> exp
| LVar : lvar1 -> exp
| Num : Z -> exp
| BinOp : binop -> exp -> exp -> exp.
Fixpoint expvars (e : exp) (x : var) : bool :=
match e with
| Var y => if eq_nat_dec y x then true else false
| BinOp _ e1 e2 => if expvars e1 x then true else expvars e2 x
| _ => false
end.
Fixpoint no_lvars_exp (e : exp) :=
match e with
| LVar _ => False
| BinOp _ e1 e2 => no_lvars_exp e1 /\ no_lvars_exp e2
| _ => True
end.
Proposition exp_eq_dec : forall e1 e2 : exp, {e1 = e2} + {e1 <> e2}.
Proof.
induction e1; destruct e2; try solve [right; discriminate].
destruct (eq_nat_dec v v0); subst.
left; auto.
right; intro H; inv H; contradiction n; auto.
destruct (eq_nat_dec l l0); subst.
left; auto.
right; intro H; inv H; contradiction n; auto.
destruct (Z_eq_dec z z0); subst.
left; auto.
right; intro H; inv H; contradiction n; auto.
assert ({b = b0} + {BinOp b e1_1 e1_2 <> BinOp b0 e2_1 e2_2}).
destruct b; destruct b0; auto; try solve [right; intro H; inv H].
destruct H; auto; subst.
destruct (IHe1_1 e2_1); subst.
destruct (IHe1_2 e2_2); subst; auto.
right; intro H; inv H; contradiction n; auto.
right; intro H; inv H; contradiction n; auto.
Qed.
Inductive bexp :=
| FF : bexp
| TT : bexp
| Eq : exp -> exp -> bexp
| Not : bexp -> bexp
| BBinOp : bbinop -> bexp -> bexp -> bexp.
Fixpoint bexpvars (b : bexp) (x : var) : bool :=
match b with
| Eq e1 e2 => if expvars e1 x then true else expvars e2 x
| Not b => bexpvars b x
| BBinOp _ b1 b2 => if bexpvars b1 x then true else bexpvars b2 x
| _ => false
end.
Fixpoint no_lvars_bexp (b : bexp) :=
match b with
| Eq e1 e2 => no_lvars_exp e1 /\ no_lvars_exp e2
| Not b => no_lvars_bexp b
| BBinOp _ b1 b2 => no_lvars_bexp b1 /\ no_lvars_bexp b2
| _ => True
end.
Inductive cmd :=
| Skip : cmd
| Output : exp -> cmd
| Assign : var -> exp -> cmd
| Read : var -> exp -> cmd
| Write : exp -> exp -> cmd
| Seq : cmd -> cmd -> cmd
| If : bexp -> cmd -> cmd -> cmd
| While : bexp -> cmd -> cmd.
Fixpoint mods (C : cmd) : list var :=
match C with
| Assign x _ => [x]
| Read x _ => [x]
| Seq C1 C2 => mods C1 ++ mods C2
| If _ C1 C2 => mods C1 ++ mods C2
| While _ C => mods C
| _ => []
end.
Fixpoint modifies (K : list cmd) : list var :=
match K with
| [] => []
| C::K => mods C ++ modifies K
end.
Fixpoint no_lvars_cmd (C : cmd) :=
match C with
| Skip => True
| Output e => no_lvars_exp e
| Assign _ e => no_lvars_exp e
| Read _ e => no_lvars_exp e
| Write e1 e2 => no_lvars_exp e1 /\ no_lvars_exp e2
| Seq C1 C2 => no_lvars_cmd C1 /\ no_lvars_cmd C2
| If b C1 C2 => no_lvars_bexp b /\ no_lvars_cmd C1 /\ no_lvars_cmd C2
| While b C => no_lvars_bexp b /\ no_lvars_cmd C
end.
Fixpoint no_lvars (K : list cmd) :=
match K with
| [] => True
| C::K => no_lvars_cmd C /\ no_lvars K
end.
Definition val := prod Z glbl.
Definition lmap := prod (lvar1 -> Z) (lvar2 -> glbl).
Definition store := var -> option val.
Definition addr := nat. (* address type, inferred from the nat-indexed heap operations (upd, mydot) below *)
Definition heap := addr -> option val.
Inductive state := St : lmap -> store -> heap -> state.
Definition getLmap (st : state) := let (i,_,_) := st in i.
Coercion getLmap : state >-> lmap.
Definition getStore (st : state) := let (_,s,_) := st in s.
Coercion getStore : state >-> store.
Definition getHeap (st : state) := let (_,_,h) := st in h.
Coercion getHeap : state >-> heap.
Proposition val_eq_dec : forall v1 v2 : val, {v1 = v2} + {v1 <> v2}.
Proof.
destruct v1; destruct v2.
destruct g; destruct g0; try solve [right; intro H; inv H].
destruct (Z_eq_dec z z0); subst.
left; auto.
right; intro H; inv H; contradiction n; auto.
destruct (Z_eq_dec z z0); subst.
left; auto.
right; intro H; inv H; contradiction n; auto.
Qed.
Proposition opt_eq_dec {A} : (forall a1 a2 : A, {a1 = a2} + {a1 <> a2}) -> forall o1 o2 : option A, {o1 = o2} + {o1 <> o2}.
Proof.
intros.
destruct o1; destruct o2.
destruct (X a a0); subst; auto.
right; intro H; inv H; contradiction n; auto.
right; discriminate.
right; discriminate.
left; auto.
Qed.
Definition upd {A} (x : nat -> option A) y z : nat -> option A := fun w => if eq_nat_dec w y then Some z else x w.
Record SepAlg : Type := mkSepAlg {
sepstate : Set;
unit : sepstate -> Prop;
dot : sepstate -> sepstate -> sepstate -> Prop;
dot_func : forall x y z1 z2, dot x y z1 -> dot x y z2 -> z1 = z2;
dot_comm : forall x y z, dot x y z -> dot y x z;
dot_assoc : forall x y z a b, dot x y a -> dot a z b -> exists c, dot y z c /\ dot x c b;
dot_unit : forall x, exists u, unit u /\ dot u x x;
dot_unit_min : forall u x y, unit u -> dot u x y -> x = y}.
Definition mycombine {A} (s1 s2 : nat -> option A) (n : nat) : option A :=
match s1 n, s2 n with
| Some a, _ => Some a
| None, Some a => Some a
| None, None => None
end.
Definition mydot {A} (s1 s2 s : nat -> option A) : Prop := forall n,
match s n with
| None => s1 n = None /\ s2 n = None
| Some a => (s1 n = Some a /\ s2 n = None) \/ (s1 n = None /\ s2 n = Some a)
end.
Definition mysep : SepAlg.
apply (mkSepAlg state (fun st => match st with St _ _ h => h = (fun _ => None) end)
(fun st1 st2 st3 =>
match st1, st2, st3 with St i1 s1 h1, St i2 s2 h2, St i3 s3 h3 =>
i1 = i2 /\ i1 = i3 /\ s1 = s2 /\ s1 = s3 /\ mydot h1 h2 h3
end)); intros.
destruct x as [i1 s1 h1]; destruct y as [i2 s2 h2]; destruct z1 as [i3 s3 h3]; destruct z2 as [i4 s4 h4].
decomp H; decomp H0; repeat subst.
apply f_equal; apply functional_extensionality; intro n.
specialize (H6 n); specialize (H10 n).
destruct (h3 n); destruct (h4 n); auto.
decomp H6; decomp H10.
rewrite H1 in H3; auto.
rewrite H1 in H3; inv H3.
rewrite H1 in H3; inv H3.
rewrite H2 in H4; auto.
decomp H6; decomp H10.
rewrite H1 in H0; inv H0.
rewrite H2 in H3; inv H3.
decomp H6; decomp H10.
rewrite H0 in H3; inv H3.
rewrite H1 in H4; inv H4.
destruct x as [i1 s1 h1]; destruct y as [i2 s2 h2]; destruct z as [i3 s3 h3].
decomp H; repeat split; repeat subst; auto.
intro n; specialize (H5 n).
destruct (h3 n); intuit.
destruct x as [i1 s1 h1]; destruct y as [i2 s2 h2]; destruct z as [i3 s3 h3]; destruct a as [i4 s4 h4]; destruct b as [i5 s5 h5].
decomp H; decomp H0; repeat subst.
exists (St i5 s5 (mycombine h2 h3)).
repeat split; auto.
intro n; unfold mycombine; specialize (H6 n); specialize (H10 n).
destruct (h2 n); destruct (h3 n); auto.
destruct (h4 n); destruct (h5 n); intuit.
decomp H6.
inv H1.
decomp H10.
inv H3.
inv H2.
destruct H6.
inv H0.
intro n; unfold mycombine; specialize (H6 n); specialize (H10 n).
destruct (h4 n); destruct (h5 n).
decomp H6.
decomp H10.
inv H2; rewrite H0; left; split; auto.
rewrite H1; rewrite H3; auto.
inv H2.
right; split; auto.
rewrite H1.
decomp H10; auto.
inv H2.
destruct H10.
inv H.
decomp H10.
inv H0.
destruct H6; right; split; auto.
rewrite H2; rewrite H1; auto.
destruct H6; destruct H10.
rewrite H0; rewrite H2; auto.
destruct x as [i s h].
exists (St i s (fun _ => None)); repeat split.
intro n.
destruct (h n); auto.
destruct u as [i s h]; subst.
destruct x as [i1 s1 h1]; destruct y as [i2 s2 h2].
decomp H0; repeat subst.
apply f_equal; apply functional_extensionality; intro n; specialize (H5 n).
destruct (h1 n); destruct (h2 n); intuit.
decomp H5; auto.
inv H1.
Defined.
Proposition mydot_upd {A} : forall (x y z : nat -> option A) n v,
mydot x y z -> y n = None -> mydot (upd x n v) y (upd z n v).
Proof.
unfold mydot; unfold upd; intros.
destruct (eq_nat_dec n0 n); subst; intuit.
apply (H n0).
Qed.
Definition option_map2 {A B C} (op : A -> B -> C) x y : option C :=
match x, y with
| Some x, Some y => Some (op x y)
| _, _ => None
end.
Open Scope Z_scope.
Open Scope glbl_scope.
Definition opden (bop : binop) : Z -> Z -> Z :=
match bop with
| Plus => Zplus
| Minus => Zminus
| Mult => Zmult
| Div => Zdiv
| Mod => Zmod
end.
Fixpoint eden (e : exp) (i : lmap) (s : store) : option val :=
match e with
| Var x => s x
| LVar X => Some (fst i X, Lo)
| Num c => Some (c,Lo)
| BinOp bop e1 e2 => option_map2 (fun v1 v2 => (opden bop (fst v1) (fst v2), snd v1 \_/ snd v2)) (eden e1 i s) (eden e2 i s)
end.
Fixpoint edenZ (e : exp) (i : lmap) (s : store) : option Z :=
match e with
| Var x => option_map (fun v => fst v) (s x)
| LVar X => Some (fst i X)
| Num c => Some c
| BinOp bop e1 e2 => option_map2 (fun v1 v2 => opden bop v1 v2) (edenZ e1 i s) (edenZ e2 i s)
end.
Proposition edenZ_some : forall e i s v, edenZ e i s = Some v <-> exists l, eden e i s = Some (v,l).
Proof.
induction e; simpl; intros; split; intros.
destruct (s v) as [[v1 l1]|]; inv H.
exists l1; auto.
destruct H as [l]; rewrite H; auto.
inv H; exists Lo; auto.
destruct H as [l0]; inv H; auto.
inv H; exists Lo; auto.
destruct H as [l]; inv H; auto.
case_eq (edenZ e1 i s); intros.
case_eq (edenZ e2 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite IHe1 in H0; rewrite IHe2 in H1.
destruct H0 as [l1]; destruct H1 as [l2].
rewrite H; rewrite H0; exists (l1 \_/ l2); auto.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite H0 in H; inv H.
destruct H as [l].
case_eq (eden e1 i s); intros.
case_eq (eden e2 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
destruct v0 as [v0 l0]; destruct v1 as [v1 l1].
assert (exists l, eden e1 i s = Some (v0,l)).
exists l0; auto.
assert (exists l, eden e2 i s = Some (v1,l)).
exists l1; auto.
rewrite <- IHe1 in H; rewrite <- IHe2 in H2.
rewrite H; rewrite H2; auto.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite H0 in H; inv H.
Qed.
(* [edenZ] fails exactly when [eden] fails. *)
Proposition edenZ_none : forall e i s, edenZ e i s = None <-> eden e i s = None.
Proof.
induction e; simpl; intros; split; intros.
destruct (s v); inv H; auto.
rewrite H; auto.
inv H.
inv H.
inv H.
inv H.
case_eq (edenZ e1 i s); intros.
case_eq (edenZ e2 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite IHe2 in H1; rewrite H1.
destruct (eden e1 i s); auto.
rewrite IHe1 in H0; rewrite H0; auto.
case_eq (eden e1 i s); intros.
case_eq (eden e2 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite <- IHe2 in H1; rewrite H1.
destruct (edenZ e1 i s); auto.
rewrite <- IHe1 in H0; rewrite H0; auto.
Qed.
(* Denotation of boolean binary operators; [Impl] is boolean implication. *)
Definition bopden (bop : bbinop) : bool -> bool -> bool :=
match bop with
| And => andb
| Or => orb
| Impl => fun v1 v2 => if v1 then v2 else true
end.
(* Denotation of boolean expressions as labeled booleans, joining operand labels. *)
Fixpoint bden (b : bexp) (i : lmap) (s : store) : option (bool * glbl) :=
match b with
| FF => Some (false,Lo)
| TT => Some (true,Lo)
| Eq e1 e2 => option_map2 (fun v1 v2 => (if Z_eq_dec (fst v1) (fst v2) then true else false, snd v1 \_/ snd v2)) (eden e1 i s) (eden e2 i s)
| Not b => option_map (fun v => (negb (fst v), snd v)) (bden b i s)
| BBinOp bop b1 b2 => option_map2 (fun v1 v2 => (bopden bop (fst v1) (fst v2), snd v1 \_/ snd v2)) (bden b1 i s) (bden b2 i s)
end.
(* Label-erased denotation of boolean expressions. *)
Fixpoint bdenZ (b : bexp) (i : lmap) (s : store) : option bool :=
match b with
| FF => Some false
| TT => Some true
| Eq e1 e2 => option_map2 (fun v1 v2 => if Z_eq_dec v1 v2 then true else false) (edenZ e1 i s) (edenZ e2 i s)
| Not b => option_map (fun v => negb v) (bdenZ b i s)
| BBinOp bop b1 b2 => option_map2 (fun v1 v2 => bopden bop v1 v2) (bdenZ b1 i s) (bdenZ b2 i s)
end.
(* [bdenZ] returns [v] exactly when [bden] returns [v] under some label. *)
Proposition bdenZ_some : forall b i s v, bdenZ b i s = Some v <-> exists l, bden b i s = Some (v,l).
Proof.
induction b; simpl; intros; split; intros.
inv H; exists Lo; auto.
destruct H as [l]; inv H; auto.
inv H; exists Lo; auto.
destruct H as [l]; inv H; auto.
case_eq (edenZ e i s); intros.
case_eq (edenZ e0 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite edenZ_some in H0; rewrite edenZ_some in H1.
destruct H0 as [l]; destruct H1 as [l0]; rewrite H; rewrite H0.
exists (l \_/ l0); auto.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite H0 in H; inv H.
destruct H as [l].
case_eq (eden e i s); intros.
case_eq (eden e0 i s); intros.
destruct v0 as [v0 l0]; destruct v1 as [v1 l1].
rewrite H0 in H; rewrite H1 in H; inv H.
assert (exists l, eden e i s = Some (v0,l)).
exists l0; auto.
assert (exists l, eden e0 i s = Some (v1,l)).
exists l1; auto.
rewrite <- edenZ_some in H; rewrite <- edenZ_some in H2.
rewrite H; rewrite H2; auto.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite H0 in H; inv H.
case_eq (bdenZ b i s); intros.
rewrite H0 in H; inv H.
rewrite IHb in H0; destruct H0 as [l]; exists l.
rewrite H; auto.
rewrite H0 in H; inv H.
destruct H as [l].
case_eq (bden b i s); intros.
destruct p as [v1 l1].
assert (exists l, bden b i s = Some (v1,l)).
exists l1; auto.
rewrite H0 in H; inv H.
rewrite <- IHb in H1; rewrite H1; auto.
rewrite H0 in H; inv H.
case_eq (bdenZ b2 i s); intros.
case_eq (bdenZ b3 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite IHb1 in H0; rewrite IHb2 in H1.
destruct H0 as [l1]; destruct H1 as [l2].
rewrite H; rewrite H0; exists (l1 \_/ l2); auto.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite H0 in H; inv H.
destruct H as [l].
case_eq (bden b2 i s); intros.
case_eq (bden b3 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
destruct p as [v0 l0]; destruct p0 as [v1 l1].
assert (exists l, bden b2 i s = Some (v0,l)).
exists l0; auto.
assert (exists l, bden b3 i s = Some (v1,l)).
exists l1; auto.
rewrite <- IHb1 in H; rewrite <- IHb2 in H2.
rewrite H; rewrite H2; auto.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite H0 in H; inv H.
Qed.
(* [bdenZ] fails exactly when [bden] fails. *)
Proposition bdenZ_none : forall b i s, bdenZ b i s = None <-> bden b i s = None.
Proof.
induction b; simpl; intros; split; intros.
inv H.
inv H.
inv H.
inv H.
case_eq (edenZ e i s); intros.
case_eq (edenZ e0 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite edenZ_none in H1; rewrite H1.
destruct (eden e i s); auto.
rewrite edenZ_none in H0; rewrite H0; auto.
case_eq (eden e i s); intros.
case_eq (eden e0 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite <- edenZ_none in H1; rewrite H1.
destruct (edenZ e i s); auto.
rewrite <- edenZ_none in H0; rewrite H0; auto.
case_eq (bdenZ b i s); intros.
rewrite H0 in H; inv H.
rewrite IHb in H0; rewrite H0; auto.
case_eq (bden b i s); intros.
rewrite H0 in H; inv H.
rewrite <- IHb in H0; rewrite H0; auto.
case_eq (bdenZ b2 i s); intros.
case_eq (bdenZ b3 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite IHb2 in H1; rewrite H1.
destruct (bden b2 i s); auto.
rewrite IHb1 in H0; rewrite H0; auto.
case_eq (bden b2 i s); intros.
case_eq (bden b3 i s); intros.
rewrite H0 in H; rewrite H1 in H; inv H.
rewrite <- IHb2 in H1; rewrite H1.
destruct (bdenZ b2 i s); auto.
rewrite <- IHb1 in H0; rewrite H0; auto.
Qed.
(* Expression denotation reads only the lmap and store, so it is preserved
   when the state is extended by a disjoint piece. *)
Proposition eden_local : forall e i1 s1 h1 i2 s2 h2 i3 s3 h3 v,
dot mysep (St i1 s1 h1) (St i2 s2 h2) (St i3 s3 h3) -> eden e i1 s1 = Some v -> eden e i3 s3 = Some v.
Proof.
intros.
simpl in H; decomp H; repeat subst; auto.
Qed.
Proposition bden_local : forall b i1 s1 h1 i2 s2 h2 i3 s3 h3 v,
dot mysep (St i1 s1 h1) (St i2 s2 h2) (St i3 s3 h3) -> bden b i1 s1 = Some v -> bden b i3 s3 = Some v.
Proof.
intros.
simpl in H; decomp H; repeat subst; auto.
Qed.
(* Expressions without logical variables do not depend on the lmap. *)
Proposition eden_no_lvars : forall e i i' s, no_lvars_exp e -> eden e i s = eden e i' s.
Proof.
induction e; simpl; intros; intuit.
rewrite (IHe1 _ i'); intuit; rewrite (IHe2 _ i'); intuit.
Qed.
Proposition bden_no_lvars : forall b i i' s, no_lvars_bexp b -> bden b i s = bden b i' s.
Proof.
induction b; simpl; intros; intuit.
rewrite (eden_no_lvars e _ i'); intuit; rewrite (eden_no_lvars e0 _ i'); intuit.
rewrite (IHb _ i'); intuit.
rewrite (IHb1 _ i'); intuit; rewrite (IHb2 _ i'); intuit.
Qed.
Definition context := glbl.
Inductive config := Cf : state -> cmd -> list cmd -> config.
Definition getStoreFromConfig (cf : config) := match cf with Cf (St _ s _) _ _ => s end.
Coercion getStoreFromConfig : config >-> store.
(* Raise every variable possibly modified by the continuation [K] to [Hi];
   unassigned variables default to value 0. *)
Definition taint_vars (K : list cmd) (s : store) : store :=
fun x => if In_dec eq_nat_dec x (modifies K) then
match s x with Some (v,_) => Some (v,Hi) | None => Some (0,Hi) end
else s x.
Definition taint_vars_cf (cf : config) : config :=
match cf with Cf (St i s h) C K => Cf (St i (taint_vars (C::K) s) h) C K end.
(* The tainted ("high") small-step semantics: every store or heap update is
   labeled [Hi], and branching is insensitive to the condition's label. *)
Inductive hstep : config -> config -> Prop :=
| HStep_skip : forall st C K, hstep (Cf st Skip (C::K)) (Cf st C K)
| HStep_assign : forall i s h K x e v l,
eden e i s = Some (v,l) ->
hstep (Cf (St i s h) (Assign x e) K) (Cf (St i (upd s x (v, Hi)) h) Skip K)
| HStep_read : forall i s h K x e v1 l1 v2 l2 (pf : v1 >= 0),
eden e i s = Some (v1,l1) -> h (nat_of_Z v1 pf) = Some (v2,l2) ->
hstep (Cf (St i s h) (Read x e) K) (Cf (St i (upd s x (v2, Hi)) h) Skip K)
| HStep_write : forall i s h K e1 e2 v1 l1 v2 l2 (pf : v1 >= 0),
eden e1 i s = Some (v1,l1) -> eden e2 i s = Some (v2,l2) -> h (nat_of_Z v1 pf) <> None ->
hstep (Cf (St i s h) (Write e1 e2) K) (Cf (St i s (upd h (nat_of_Z v1 pf) (v2, Hi))) Skip K)
| HStep_seq : forall st C1 C2 K, hstep (Cf st (Seq C1 C2) K) (Cf st C1 (C2::K))
| HStep_if_true : forall i s h C1 C2 K b l,
bden b i s = Some (true,l) -> hstep (Cf (St i s h) (If b C1 C2) K) (Cf (St i s h) C1 K)
| HStep_if_false : forall i s h C1 C2 K b l,
bden b i s = Some (false,l) -> hstep (Cf (St i s h) (If b C1 C2) K) (Cf (St i s h) C2 K)
| HStep_while_true : forall i s h C K b l,
bden b i s = Some (true,l) -> hstep (Cf (St i s h) (While b C) K) (Cf (St i s h) C (While b C :: K))
| HStep_while_false : forall i s h C K b l,
bden b i s = Some (false,l) -> hstep (Cf (St i s h) (While b C) K) (Cf (St i s h) Skip K).
(* [n]-step iteration of [hstep]. *)
Inductive hstepn : nat -> config -> config -> Prop :=
| HStep_zero : forall cf, hstepn 0 cf cf
| HStep_succ : forall n cf cf' cf'', hstep cf cf' -> hstepn n cf' cf'' -> hstepn (S n) cf cf''.
Definition halt_config cf := match cf with Cf _ Skip [] => true | _ => false end.
Inductive can_hstep : config -> Prop := Can_hstep : forall cf cf', hstep cf cf' -> can_hstep cf.
(* A configuration is [hsafe] when every reachable non-halted configuration
   can take another tainted step. *)
Definition hsafe cf := forall n cf', hstepn n cf cf' -> halt_config cf' = false -> can_hstep cf'.
(* The labeled semantics, producing a list of output values.  [Output]
   requires a [Lo] expression and ordinary branching requires a [Lo]
   condition; on a [Hi] condition the whole conditional (or loop) is run to
   completion under the tainted semantics with its modified variables
   tainted, or stutters in place if that tainted run diverges. *)
Inductive lstep : config -> config -> list Z -> Prop :=
| LStep_skip : forall st C K, lstep (Cf st Skip (C::K)) (Cf st C K) []
| LStep_output : forall i s h K e v,
eden e i s = Some (v,Lo) ->
lstep (Cf (St i s h) (Output e) K) (Cf (St i s h) Skip K) [v]
| LStep_assign : forall i s h K x e v l,
eden e i s = Some (v,l) ->
lstep (Cf (St i s h) (Assign x e) K) (Cf (St i (upd s x (v, l)) h) Skip K) []
| LStep_read : forall i s h K x e v1 l1 v2 l2 (pf : v1 >= 0),
eden e i s = Some (v1,l1) -> h (nat_of_Z v1 pf) = Some (v2,l2) ->
lstep (Cf (St i s h) (Read x e) K) (Cf (St i (upd s x (v2, l1 \_/ l2)) h) Skip K) []
| LStep_write : forall i s h K e1 e2 v1 l1 v2 l2 (pf : v1 >= 0),
eden e1 i s = Some (v1,l1) -> eden e2 i s = Some (v2,l2) -> h (nat_of_Z v1 pf) <> None ->
lstep (Cf (St i s h) (Write e1 e2) K) (Cf (St i s (upd h (nat_of_Z v1 pf) (v2, l1 \_/ l2))) Skip K) []
| LStep_seq : forall st C1 C2 K, lstep (Cf st (Seq C1 C2) K) (Cf st C1 (C2::K)) []
| LStep_if_true : forall i s h C1 C2 K b,
bden b i s = Some (true,Lo) -> lstep (Cf (St i s h) (If b C1 C2) K) (Cf (St i s h) C1 K) []
| LStep_if_false : forall i s h C1 C2 K b,
bden b i s = Some (false,Lo) -> lstep (Cf (St i s h) (If b C1 C2) K) (Cf (St i s h) C2 K) []
| LStep_while_true : forall i s h C K b,
bden b i s = Some (true,Lo) -> lstep (Cf (St i s h) (While b C) K) (Cf (St i s h) C (While b C :: K)) []
| LStep_while_false : forall i s h C K b,
bden b i s = Some (false,Lo) -> lstep (Cf (St i s h) (While b C) K) (Cf (St i s h) Skip K) []
| LStep_if_hi : forall i s h st' C1 C2 K b v n,
bden b i s = Some (v,Hi) -> hsafe (taint_vars_cf (Cf (St i s h) (If b C1 C2) [])) ->
hstepn n (taint_vars_cf (Cf (St i s h) (If b C1 C2) [])) (Cf st' Skip []) ->
lstep (Cf (St i s h) (If b C1 C2) K) (Cf st' Skip K) []
| LStep_if_hi_dvg : forall i s h C1 C2 K b v,
bden b i s = Some (v,Hi) -> hsafe (taint_vars_cf (Cf (St i s h) (If b C1 C2) [])) ->
(forall n st', ~ hstepn n (taint_vars_cf (Cf (St i s h) (If b C1 C2) [])) (Cf st' Skip [])) ->
lstep (Cf (St i s h) (If b C1 C2) K) (Cf (St i s h) (If b C1 C2) K) []
| LStep_while_hi : forall i s h st' C K b v n,
bden b i s = Some (v,Hi) -> hsafe (taint_vars_cf (Cf (St i s h) (While b C) [])) ->
hstepn n (taint_vars_cf (Cf (St i s h) (While b C) [])) (Cf st' Skip []) ->
lstep (Cf (St i s h) (While b C) K) (Cf st' Skip K) []
| LStep_while_hi_dvg : forall i s h C K b v,
bden b i s = Some (v,Hi) -> hsafe (taint_vars_cf (Cf (St i s h) (While b C) [])) ->
(forall n st', ~ hstepn n (taint_vars_cf (Cf (St i s h) (While b C) [])) (Cf st' Skip [])) ->
lstep (Cf (St i s h) (While b C) K) (Cf (St i s h) (While b C) K) [].
Inductive lstepn : nat -> config -> config -> list Z -> Prop :=
| LStep_zero : forall cf, lstepn 0 cf cf []
| LStep_succ : forall n cf cf' cf'' o o', lstep cf cf' o -> lstepn n cf' cf'' o' -> lstepn (S n) cf cf'' (o++o').
Inductive can_lstep : config -> Prop := Can_lstep : forall cf cf' o, lstep cf cf' o -> can_lstep cf.
Definition lsafe cf := forall n cf' o, lstepn n cf cf' o -> halt_config cf' = false -> can_lstep cf'.
(* For a [Read], both states must successfully read, and the two cells read
   must carry equal labels; trivially true for every other command. *)
Definition side_condition C (st1 st2 : state) :=
match C, st1, st2 with
| Read _ e, St i1 s1 h1, St i2 s2 h2 =>
match (eden e i1 s1), (eden e i2 s2) with
| Some (v1,_), Some (v2,_) =>
match Zneg_dec v1, Zneg_dec v2 with
| left pf1, left pf2 =>
match h1 (nat_of_Z v1 pf1), h2 (nat_of_Z v2 pf2) with
| Some (_,l1), Some (_,l2) => l1 = l2
| _, _ => False
end
| _, _ => False
end
| _, _ => False
end
| _, _, _ => True
end.
Close Scope Z_scope.
(* Classical case split: a configuration either never reaches a halted state
   or terminates in some number of steps. *)
Proposition dvg_ex_mid : forall cf,
(forall n st, ~ hstepn n cf (Cf st Skip [])) \/ exists n, exists st, hstepn n cf (Cf st Skip []).
Proof.
intros.
dup (classic (exists n, exists st, hstepn n cf (Cf st Skip []))).
destruct H; [right | left]; auto.
exists n; exists st; auto.
Qed.
(* Tainted step sequences compose. *)
Lemma hstep_trans : forall n1 n2 cf1 cf2 cf3, hstepn n1 cf1 cf2 -> hstepn n2 cf2 cf3 -> hstepn (n1+n2) cf1 cf3.
Proof.
induction n1 using (well_founded_induction lt_wf); intros.
inv H0; simpl; auto.
apply HStep_succ with (cf' := cf'); auto.
apply H with (cf2 := cf2); auto.
Qed.
(* Labeled step sequences compose, concatenating their outputs. *)
Lemma lstep_trans : forall n1 n2 cf1 cf2 cf3 o1 o2, lstepn n1 cf1 cf2 o1 -> lstepn n2 cf2 cf3 o2 -> lstepn (n1+n2) cf1 cf3 (o1++o2).
Proof.
induction n1 using (well_founded_induction lt_wf); intros.
inv H0; simpl; auto.
rewrite app_assoc; apply LStep_succ with (cf' := cf'); auto.
apply H with (cf2 := cf2); auto.
Qed.
(* A tainted step is preserved when the continuation is extended. *)
Lemma hstep_extend : forall st C K st' C' K' K0,
hstep (Cf st C K) (Cf st' C' K') -> hstep (Cf st C (K++K0)) (Cf st' C' (K'++K0)).
Proof.
intros.
inv H.
apply HStep_skip.
apply HStep_assign with (l := l); auto.
apply HStep_read with (v1 := v1) (pf := pf) (l1 := l1) (l2 := l2); auto.
apply HStep_write with (l1 := l1) (l2 := l2); auto.
apply HStep_seq.
apply HStep_if_true with (l := l); auto.
apply HStep_if_false with (l := l); auto.
apply HStep_while_true with (l := l); auto.
apply HStep_while_false with (l := l); auto.
Qed.
Lemma hstepn_extend : forall n st C K st' C' K' K0,
hstepn n (Cf st C K) (Cf st' C' K') -> hstepn n (Cf st C (K++K0)) (Cf st' C' (K'++K0)).
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0.
apply HStep_zero.
destruct cf' as [st'' C'' K''].
apply HStep_succ with (cf' := Cf st'' C'' (K''++K0)).
apply hstep_extend; auto.
apply H; auto.
Qed.
Lemma lstep_extend : forall st C K st' C' K' K0 o,
lstep (Cf st C K) (Cf st' C' K') o -> lstep (Cf st C (K++K0)) (Cf st' C' (K'++K0)) o.
Proof.
intros.
inv H.
apply LStep_skip.
apply LStep_output; auto.
apply LStep_assign with (l := l); auto.
apply LStep_read with (v1 := v1) (pf := pf) (l1 := l1) (l2 := l2); auto.
apply LStep_write with (l1 := l1) (l2 := l2); auto.
apply LStep_seq.
apply LStep_if_true; auto.
apply LStep_if_false; auto.
apply LStep_while_true; auto.
apply LStep_while_false; auto.
apply LStep_if_hi with (b := b) (v := v) (n := n); auto.
apply LStep_if_hi_dvg with (b := b) (v := v); auto.
apply LStep_while_hi with (b := b) (v := v) (n := n); auto.
apply LStep_while_hi_dvg with (b := b) (v := v); auto.
Qed.
Lemma lstepn_extend : forall n st C K st' C' K' K0 o,
lstepn n (Cf st C K) (Cf st' C' K') o -> lstepn n (Cf st C (K++K0)) (Cf st' C' (K'++K0)) o.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0.
apply LStep_zero.
destruct cf' as [st'' C'' K''].
apply LStep_succ with (cf' := Cf st'' C'' (K''++K0)).
apply lstep_extend; auto.
apply H; auto.
Qed.
(* A tainted run from continuation [K0 ++ K] either stays within [K0], or
   first finishes [K0] (reaching [Skip []]) and then continues under [K]. *)
Lemma hstep_trans_inv : forall n st st' C C' K0 K K',
hstepn n (Cf st C (K0++K)) (Cf st' C' K') ->
(exists K'', hstepn n (Cf st C K0) (Cf st' C' K'') /\ K' = K''++K) \/
exists st'', exists n1, exists n2,
hstepn n1 (Cf st C K0) (Cf st'' Skip []) /\ hstepn n2 (Cf st'' Skip K) (Cf st' C' K') /\
n = n1 + n2.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0.
left; exists K0.
split; auto; apply HStep_zero.
inv H1.
destruct K0.
simpl in H5; subst.
right; exists st; exists 0; exists (S n0); repeat (split; auto).
apply HStep_zero.
apply HStep_succ with (cf' := Cf st C0 K1); auto.
apply HStep_skip.
inv H5.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf st c K0); auto.
apply HStep_skip.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf st c K0); auto.
apply HStep_skip.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf (St i (upd s x (v,Hi)) h) Skip K0); auto.
apply HStep_assign with (l := l); auto.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf (St i (upd s x (v,Hi)) h) Skip K0); auto.
apply HStep_assign with (l := l); auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf (St i (upd s x (v2,Hi)) h) Skip K0); auto.
apply HStep_read with (v1 := v1) (l1 := l1) (l2 := l2) (pf := pf); auto.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf (St i (upd s x (v2, Hi)) h) Skip K0); auto.
apply HStep_read with (v1 := v1) (l1 := l1) (l2 := l2) (pf := pf); auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf (St i s (upd h (nat_of_Z v1 pf) (v2,Hi))) Skip K0); auto.
apply HStep_write with (l1 := l1) (l2 := l2); auto.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf (St i s (upd h (nat_of_Z v1 pf) (v2,Hi))) Skip K0); auto.
apply HStep_write with (l1 := l1) (l2 := l2); auto.
change (hstepn n0 (Cf st C1 ((C2 :: K0) ++ K)) (Cf st' C' K')) in H2.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf st C1 (C2::K0)); auto.
apply HStep_seq.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf st C1 (C2::K0)); auto.
apply HStep_seq.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf (St i s h) C1 K0); auto.
apply HStep_if_true with (l := l); auto.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf (St i s h) C1 K0); auto.
apply HStep_if_true with (l := l); auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf (St i s h) C2 K0); auto.
apply HStep_if_false with (l := l); auto.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf (St i s h) C2 K0); auto.
apply HStep_if_false with (l := l); auto.
change (hstepn n0 (Cf (St i s h) C0 ((While b C0 :: K0) ++ K)) (Cf st' C' K')) in H2.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf (St i s h) C0 (While b C0 :: K0)); auto.
apply HStep_while_true with (l := l); auto.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf (St i s h) C0 (While b C0 :: K0)); auto.
apply HStep_while_true with (l := l); auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply HStep_succ with (cf' := Cf (St i s h) Skip K0); auto.
apply HStep_while_false with (l := l); auto.
destruct H0 as [st'' [n1 [n2 [H0 [H1]]]]]; subst.
right; exists st''; exists (S n1); exists n2; repeat (split; auto).
apply HStep_succ with (cf' := Cf (St i s h) Skip K0); auto.
apply HStep_while_false with (l := l); auto.
Qed.
(* Analogue of [hstep_trans_inv] for the labeled semantics, additionally
   splitting the output trace. *)
Lemma lstep_trans_inv : forall n st st' C C' K0 K K' o,
lstepn n (Cf st C (K0++K)) (Cf st' C' K') o ->
(exists K'', lstepn n (Cf st C K0) (Cf st' C' K'') o /\ K' = K''++K) \/
exists st'', exists n1, exists n2, exists o1, exists o2,
lstepn n1 (Cf st C K0) (Cf st'' Skip []) o1 /\ lstepn n2 (Cf st'' Skip K) (Cf st' C' K') o2 /\
n = n1 + n2 /\ o = o1 ++ o2.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0.
left; exists K0.
split; auto; apply LStep_zero.
inv H1.
destruct K0.
simpl in H5; subst.
right; exists st; exists 0; exists (S n0); exists []; exists ([]++o'); repeat (split; auto).
apply LStep_zero.
apply LStep_succ with (cf' := Cf st C0 K1); auto.
apply LStep_skip.
inv H5.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf st c K0); auto.
apply LStep_skip.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf st c K0); auto.
apply LStep_skip.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i s h) Skip K0); auto.
apply LStep_output; auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([v]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i s h) Skip K0); auto.
apply LStep_output; auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i (upd s x (v,l)) h) Skip K0); auto.
apply LStep_assign; auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i (upd s x (v, l)) h) Skip K0); auto.
apply LStep_assign; auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i (upd s x (v2, l1 \_/ l2)) h) Skip K0); auto.
apply LStep_read with (v1 := v1) (pf := pf); auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i (upd s x (v2, l1 \_/ l2)) h) Skip K0); auto.
apply LStep_read with (v1 := v1) (pf := pf); auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i s (upd h (nat_of_Z v1 pf) (v2, l1 \_/ l2))) Skip K0); auto.
apply LStep_write; auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i s (upd h (nat_of_Z v1 pf) (v2, l1 \_/ l2))) Skip K0); auto.
apply LStep_write; auto.
change (lstepn n0 (Cf st C1 ((C2 :: K0) ++ K)) (Cf st' C' K') o') in H2.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf st C1 (C2::K0)); auto.
apply LStep_seq.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf st C1 (C2::K0)); auto.
apply LStep_seq.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i s h) C1 K0); auto.
apply LStep_if_true; auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i s h) C1 K0); auto.
apply LStep_if_true; auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i s h) C2 K0); auto.
apply LStep_if_false; auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i s h) C2 K0); auto.
apply LStep_if_false; auto.
change (lstepn n0 (Cf (St i s h) C0 ((While b C0 :: K0) ++ K)) (Cf st' C' K') o') in H2.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i s h) C0 (While b C0 :: K0)); auto.
apply LStep_while_true; auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i s h) C0 (While b C0 :: K0)); auto.
apply LStep_while_true; auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i s h) Skip K0); auto.
apply LStep_while_false; auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i s h) Skip K0); auto.
apply LStep_while_false; auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf st'0 Skip K0); auto.
apply LStep_if_hi with (b := b) (v := v) (n := n); auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf st'0 Skip K0); auto.
apply LStep_if_hi with (b := b) (v := v) (n := n); auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i s h) (If b C1 C2) K0); auto.
apply LStep_if_hi_dvg with (b := b) (v := v); auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i s h) (If b C1 C2) K0); auto.
apply LStep_if_hi_dvg with (b := b) (v := v); auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf st'0 Skip K0); auto.
apply LStep_while_hi with (b := b) (v := v) (n := n); auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf st'0 Skip K0); auto.
apply LStep_while_hi with (b := b) (v := v) (n := n); auto.
apply H in H2; auto.
destruct H2.
destruct H0 as [K'' [H0]]; subst.
left; exists K''; split; auto.
apply LStep_succ with (cf' := Cf (St i s h) (While b C0) K0); auto.
apply LStep_while_hi_dvg with (b := b) (v := v); auto.
destruct H0 as [st'' [n1 [n2 [o1 [o2 [H0 [H1 [H2]]]]]]]]; subst.
right; exists st''; exists (S n1); exists n2; exists ([]++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := Cf (St i s h) (While b C0) K0); auto.
apply LStep_while_hi_dvg with (b := b) (v := v); auto.
Qed.
(* A run of [a + b] tainted steps splits into a run of [a] steps followed by
   a run of [b] steps. *)
Lemma hstep_trans_inv' : forall a b cf cf',
hstepn (a+b) cf cf' -> exists cf'', hstepn a cf cf'' /\ hstepn b cf'' cf'.
Proof.
induction a using (well_founded_induction lt_wf); intros.
inv H0.
assert (a = 0); try omega.
assert (b = 0); try omega; subst.
exists cf'; split; apply HStep_zero.
destruct a; simpl in H1; subst.
exists cf; split.
apply HStep_zero.
apply HStep_succ with (cf' := cf'0); auto.
inv H1.
apply H in H3; auto.
destruct H3 as [cf'' [H3]]; exists cf''; split; auto.
apply HStep_succ with (cf' := cf'0); auto.
Qed.
Lemma lstep_trans_inv' : forall a b cf cf' o,
lstepn (a+b) cf cf' o -> exists cf'', exists o1, exists o2,
lstepn a cf cf'' o1 /\ lstepn b cf'' cf' o2 /\ o = o1 ++ o2.
Proof.
induction a using (well_founded_induction lt_wf); intros.
inv H0.
assert (a = 0); try omega.
assert (b = 0); try omega; subst.
exists cf'; exists []; exists []; repeat (split; auto); apply LStep_zero.
destruct a; simpl in H1; subst.
exists cf; exists []; exists (o0++o'); repeat (split; auto).
apply LStep_zero.
apply LStep_succ with (cf' := cf'0); auto.
inv H1.
apply H in H3; auto.
destruct H3 as [cf'' [o1 [o2 [H3 [H4]]]]]; exists cf''; exists (o0++o1); exists o2; repeat (split; auto).
apply LStep_succ with (cf' := cf'0); auto.
subst; rewrite app_assoc; auto.
Qed.
(* The tainted semantics is deterministic. *)
Lemma hstep_det : forall cf cf1 cf2, hstep cf cf1 -> hstep cf cf2 -> cf1 = cf2.
Proof.
intros.
inv H; inv H0; auto.
rewrite H8 in H1; inv H1; auto.
rewrite H9 in H1; inv H1.
rewrite (proof_irrelevance _ pf0 pf) in H10; rewrite H10 in H2; inv H2; auto.
rewrite H10 in H1; inv H1; rewrite H11 in H2; inv H2.
rewrite (proof_irrelevance _ pf0 pf); auto.
rewrite H9 in H1; inv H1.
rewrite H9 in H1; inv H1.
rewrite H8 in H1; inv H1.
rewrite H8 in H1; inv H1.
Qed.
Lemma hstepn_det : forall n cf cf1 cf2, hstepn n cf cf1 -> hstepn n cf cf2 -> cf1 = cf2.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0; inv H1; auto.
dup (hstep_det _ _ _ H2 H4); subst.
apply H with (y := n0) (cf := cf'0); auto.
Qed.
(* The number of tainted steps to termination is unique. *)
Lemma hstepn_det_term : forall n1 n2 cf st1 st2, hstepn n1 cf (Cf st1 Skip []) -> hstepn n2 cf (Cf st2 Skip []) -> n1 = n2.
Proof.
intros.
assert (n1 = n2 \/ n1 < n2 \/ n2 < n1); try omega.
decomp H1; auto.
assert (n1 + (n2-n1) = n2); try omega.
rewrite <- H1 in H0; apply hstep_trans_inv' in H0.
destruct H0 as [cf' [H0]].
dup (hstepn_det _ _ _ _ H H0); subst cf'.
inv H2; try omega.
inv H5.
assert (n2 + (n1-n2) = n1); try omega.
rewrite <- H1 in H; apply hstep_trans_inv' in H.
destruct H as [cf' [H]].
dup (hstepn_det _ _ _ _ H H0); subst cf'.
inv H2; try omega.
inv H5.
Qed.
(* The labeled semantics is deterministic in both the resulting configuration
   and the output. *)
Lemma lstep_det : forall cf cf1 cf2 o1 o2, lstep cf cf1 o1 -> lstep cf cf2 o2 -> cf1 = cf2 /\ o1 = o2.
Proof.
intros.
inv H.
inv H0; auto.
inv H0.
rewrite H8 in H1; inv H1; auto.
inv H0.
rewrite H9 in H1; inv H1; auto.
inv H0.
rewrite H10 in H1; inv H1.
rewrite (proof_irrelevance _ pf0 pf) in H11; rewrite H11 in H2; inv H2; auto.
inv H0.
rewrite H11 in H1; inv H1; rewrite H12 in H2; inv H2.
rewrite (proof_irrelevance _ pf0 pf); auto.
inv H0; auto.
inv H0; auto.
rewrite H10 in H1; inv H1.
rewrite H10 in H1; inv H1.
rewrite H10 in H1; inv H1.
inv H0; auto.
rewrite H10 in H1; inv H1.
rewrite H10 in H1; inv H1.
rewrite H10 in H1; inv H1.
inv H0; auto.
rewrite H9 in H1; inv H1.
rewrite H9 in H1; inv H1.
rewrite H9 in H1; inv H1.
inv H0; auto.
rewrite H9 in H1; inv H1.
rewrite H9 in H1; inv H1.
rewrite H9 in H1; inv H1.
inv H0; auto.
rewrite H12 in H1; inv H1.
rewrite H12 in H1; inv H1.
dup (hstepn_det_term _ _ _ _ _ H3 H14); subst.
dup (hstepn_det _ _ _ _ H3 H14).
inv H; auto.
inv H0; auto.
rewrite H12 in H1; inv H1.
rewrite H12 in H1; inv H1.
inv H0; auto.
rewrite H11 in H1; inv H1.
rewrite H11 in H1; inv H1.
dup (hstepn_det_term _ _ _ _ _ H3 H13); subst.
dup (hstepn_det _ _ _ _ H3 H13).
inv H; auto.
inv H0; auto.
rewrite H11 in H1; inv H1.
rewrite H11 in H1; inv H1.
Qed.
Lemma lstepn_det : forall n cf cf1 cf2 o1 o2, lstepn n cf cf1 o1 -> lstepn n cf cf2 o2 -> cf1 = cf2 /\ o1 = o2.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0; inv H1; auto.
destruct (lstep_det _ _ _ _ _ H2 H4); subst.
assert (n0 < S n0); try omega.
destruct (H _ H0 _ _ _ _ _ H3 H5); subst; auto.
Qed.
Lemma lstepn_det_term : forall n1 n2 cf st1 st2 o1 o2, lstepn n1 cf (Cf st1 Skip []) o1 -> lstepn n2 cf (Cf st2 Skip []) o2 -> n1 = n2.
Proof.
intros.
assert (n1 = n2 \/ n1 < n2 \/ n2 < n1); try omega.
decomp H1; auto.
assert (n1 + (n2-n1) = n2); try omega.
rewrite <- H1 in H0; clear H1; apply lstep_trans_inv' in H0.
destruct H0 as [cf' [o3 [o4 [H0 [H2]]]]]; subst.
destruct (lstepn_det _ _ _ _ _ _ H H0); subst.
inv H2; try omega.
inv H4.
assert (n2 + (n1-n2) = n1); try omega.
rewrite <- H1 in H; clear H1; apply lstep_trans_inv' in H.
destruct H as [cf' [o3 [o4 [H [H1]]]]]; subst.
destruct (lstepn_det _ _ _ _ _ _ H H0); subst.
inv H1; try omega.
inv H4.
Qed.
(* A configuration diverges if it can take any number of steps. *)
Definition diverge cf := forall n, exists cf', exists o, lstepn n cf cf' o.
Corollary diverge_halt : forall n cf st o, diverge cf -> lstepn n cf (Cf st Skip []) o -> False.
Proof.
intros.
destruct (H (n+1)) as [cf' [o']].
apply lstep_trans_inv' in H1.
destruct H1 as [cf'' [o1 [o2]]]; decomp H1; subst.
destruct (lstepn_det _ _ _ _ _ _ H0 H2); subst; inv H4.
inv H3.
Qed.
(* A configuration that steps back to itself diverges. *)
Proposition diverge_same_cf : forall cf o, lstep cf cf o -> diverge cf.
Proof.
intros.
assert (forall n, exists o, lstepn n cf cf o).
induction n; intros.
exists []; apply LStep_zero.
destruct IHn as [o']; exists (o++o'); apply LStep_succ with (cf' := cf); auto.
intro n; destruct (H0 n) as [o'].
exists cf; exists o'; auto.
Qed.
(* Divergence of C1 makes Seq C1 C2 diverge. *)
Lemma diverge_seq1 : forall C1 C2 K st, diverge (Cf st C1 []) -> diverge (Cf st (Seq C1 C2) K).
Proof.
intros; intro n.
destruct n.
exists (Cf st (Seq C1 C2) K); exists []; apply LStep_zero.
destruct (H n) as [[st' C' K'] [o]].
exists (Cf st' C' (K'++[C2]++K)); exists ([]++o).
apply LStep_succ with (cf' := Cf st C1 ([]++[C2]++K)).
apply LStep_seq.
apply lstepn_extend; auto.
Qed.
(* If C1 terminates and C2 then diverges, Seq C1 C2 diverges. *)
Lemma diverge_seq2 : forall C1 C2 K st st' n o,
lstepn n (Cf st C1 []) (Cf st' Skip []) o -> diverge (Cf st' C2 K) -> diverge (Cf st (Seq C1 C2) K).
Proof.
intros; intro n'.
assert (n' <= S n \/ n' > S n); try omega.
destruct H1.
destruct n'.
exists (Cf st (Seq C1 C2) K); exists []; apply LStep_zero.
assert (n = n'+(n-n')); try omega.
rewrite H2 in H; apply lstep_trans_inv' in H.
destruct H as [[st'' C'' K''] [o1'' [o2'']]]; decomp H.
exists (Cf st'' C'' (K''++[C2]++K)); exists ([]++o1'').
apply LStep_succ with (cf' := Cf st C1 ([]++[C2]++K)).
apply LStep_seq.
apply lstepn_extend; auto.
destruct (H0 (n' - S (S n))) as [cf [o']].
exists cf; exists ([]++o++[]++o').
assert (n' = S (n + S (n' - S (S n)))); try omega.
rewrite H3; apply LStep_succ with (cf' := Cf st C1 ([]++[C2]++K)).
apply LStep_seq.
apply lstep_trans with (cf2 := Cf st' Skip ([]++[C2]++K)).
apply lstepn_extend; auto.
apply LStep_succ with (cf' := Cf st' C2 K); auto.
apply LStep_skip.
Qed.
(* Forward frame property for the high semantics: a step taken on a subheap
   h1 of h3 = h1 * h2 can be replayed on h3, leaving the frame h2 untouched. *)
Lemma hstep_ff : forall C K C' K' i s h1 h2 h3 i' s' h1',
mydot h1 h2 h3 -> hstep (Cf (St i s h1) C K) (Cf (St i' s' h1') C' K') ->
exists h3', mydot h1' h2 h3' /\ hstep (Cf (St i s h3) C K) (Cf (St i' s' h3') C' K').
Proof.
intros.
inv H0.
exists h3; split; auto; apply HStep_skip.
exists h3; split; auto; apply HStep_assign with (l := l); auto.
exists h3; split; auto; apply HStep_read with (l1 := l1) (l2 := l2) (pf := pf); auto.
specialize (H (nat_of_Z v1 pf)); destruct (h3 (nat_of_Z v1 pf)); decomp H.
rewrite H1 in H12; inv H12; auto.
rewrite H1 in H12; inv H12.
rewrite H0 in H12; inv H12.
exists (upd h3 (nat_of_Z v1 pf) (v2,Hi)); split.
apply mydot_upd; auto.
specialize (H (nat_of_Z v1 pf)); destruct (h3 (nat_of_Z v1 pf)); decomp H; auto; try contradiction.
apply HStep_write with (l1 := l1) (l2 := l2); auto.
contradict H13; specialize (H (nat_of_Z v1 pf)).
rewrite H13 in H; intuit.
exists h3; split; auto; apply HStep_seq.
exists h3; split; auto; apply HStep_if_true with (l := l); auto.
exists h3; split; auto; apply HStep_if_false with (l := l); auto.
exists h3; split; auto; apply HStep_while_true with (l := l); auto.
exists h3; split; auto; apply HStep_while_false with (l := l); auto.
Qed.
(* The forward frame property, lifted to n-step executions. *)
Lemma hstepn_ff : forall n C K C' K' i s h1 h2 h3 i' s' h1',
mydot h1 h2 h3 -> hstepn n (Cf (St i s h1) C K) (Cf (St i' s' h1') C' K') ->
exists h3', mydot h1' h2 h3' /\ hstepn n (Cf (St i s h3) C K) (Cf (St i' s' h3') C' K').
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H1.
exists h3; split; auto; apply HStep_zero.
destruct cf' as [[i'' s'' h''] C'' K'']; apply hstep_ff with (h2 := h2) (h3 := h3) in H2; auto.
destruct H2 as [h3' [H2]].
assert (n0 < S n0); try omega.
destruct (H _ H4 _ _ _ _ _ _ _ _ _ _ _ _ H2 H3) as [h3'' [H5]]; exists h3''; split; auto.
apply HStep_succ with (cf' := Cf (St i'' s'' h3') C'' K''); auto.
Qed.
(* Backward frame property: a step on the combined heap h3 can be traced back
   to the subheap h1, provided the h1-configuration is safe. *)
Lemma hstep_bf : forall C K C' K' i s h1 h2 h3 i' s' h3',
mydot h1 h2 h3 -> hstep (Cf (St i s h3) C K) (Cf (St i' s' h3') C' K') -> hsafe (Cf (St i s h1) C K) ->
exists h1', mydot h1' h2 h3' /\ hstep (Cf (St i s h1) C K) (Cf (St i' s' h1') C' K').
Proof.
intros.
inv H0.
exists h1; split; auto; apply HStep_skip.
exists h1; split; auto; apply HStep_assign with (l := l); auto.
exists h1; split; auto; apply HStep_read with (l1 := l1) (l2 := l2) (pf := pf); auto.
specialize (H (nat_of_Z v1 pf)); rewrite H13 in H; decomp H; auto.
specialize (H1 0 (Cf (St i' s h1) (Read x e) K') (HStep_zero _) (refl_equal _)).
inv H1.
inv H.
rewrite H10 in H4; inv H4.
rewrite (proof_irrelevance _ pf0 pf) in H11; rewrite H11 in H2; inv H2.
specialize (H1 0 (Cf (St i' s' h1) (Write e1 e2) K') (HStep_zero _) (refl_equal _)).
inv H1.
inv H0.
rewrite H9 in H5; inv H5.
rewrite (proof_irrelevance _ pf0 pf) in H11.
exists (upd h1 (nat_of_Z v1 pf) (v2,Hi)); split.
apply mydot_upd; auto.
specialize (H (nat_of_Z v1 pf)); destruct (h3 (nat_of_Z v1 pf)); decomp H; auto; try contradiction.
apply HStep_write with (l1 := l1) (l2 := l2); auto.
exists h1; split; auto; apply HStep_seq.
exists h1; split; auto; apply HStep_if_true with (l := l); auto.
exists h1; split; auto; apply HStep_if_false with (l := l); auto.
exists h1; split; auto; apply HStep_while_true with (l := l); auto.
exists h1; split; auto; apply HStep_while_false with (l := l); auto.
Qed.
(* The backward frame property, lifted to n-step executions. *)
Lemma hstepn_bf : forall n C K C' K' i s h1 h2 h3 i' s' h3',
mydot h1 h2 h3 -> hstepn n (Cf (St i s h3) C K) (Cf (St i' s' h3') C' K') -> hsafe (Cf (St i s h1) C K) ->
exists h1', mydot h1' h2 h3' /\ hstepn n (Cf (St i s h1) C K) (Cf (St i' s' h1') C' K').
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H1.
exists h1; split; auto; apply HStep_zero.
destruct cf' as [[i'' s'' h''] C'' K'']; apply hstep_bf with (h1 := h1) (h2 := h2) in H3; auto.
destruct H3 as [h1' [H3]].
assert (n0 < S n0); try omega.
assert (hsafe (Cf (St i'' s'' h1') C'' K'')).
unfold hsafe; intros.
apply (H2 (S n)); auto.
apply HStep_succ with (cf' := Cf (St i'' s'' h1') C'' K''); auto.
destruct (H _ H5 _ _ _ _ _ _ _ _ _ _ _ _ H3 H4 H6) as [h1'' [H7]]; exists h1''; split; auto.
apply HStep_succ with (cf' := Cf (St i'' s'' h1') C'' K''); auto.
Qed.
(* Forward frame property for the labeled semantics. *)
Lemma lstep_ff : forall C K C' K' i s h1 h2 h3 i' s' h1' o,
mydot h1 h2 h3 -> lstep (Cf (St i s h1) C K) (Cf (St i' s' h1') C' K') o ->
exists h3', mydot h1' h2 h3' /\ lstep (Cf (St i s h3) C K) (Cf (St i' s' h3') C' K') o.
Proof.
intros.
inv H0.
exists h3; split; auto; apply LStep_skip.
exists h3; split; auto; apply LStep_output; auto.
exists h3; split; auto; apply LStep_assign; auto.
exists h3; split; auto; apply LStep_read with (v1 := v1) (pf := pf); auto.
specialize (H (nat_of_Z v1 pf)); rewrite H13 in H.
destruct (h3 (nat_of_Z v1 pf)); decomp H; auto.
inv H1.
exists (upd h3 (nat_of_Z v1 pf) (v2, l1 \_/ l2)); split.
apply mydot_upd; auto.
specialize (H (nat_of_Z v1 pf)).
destruct (h3 (nat_of_Z v1 pf)); decomp H0; auto; try contradiction.
apply LStep_write; auto.
contradict H14; specialize (H (nat_of_Z v1 pf)).
rewrite H14 in H; intuit.
exists h3; split; auto; apply LStep_seq.
exists h3; split; auto; apply LStep_if_true; auto.
exists h3; split; auto; apply LStep_if_false; auto.
exists h3; split; auto; apply LStep_while_true; auto.
exists h3; split; auto; apply LStep_while_false; auto.
apply hstepn_ff with (h2 := h2) (h3 := h3) in H12; auto.
destruct H12 as [h3' [H12]]; exists h3'; split; auto.
apply LStep_if_hi with (v := v) (n := n); auto.
unfold hsafe; intros.
destruct cf' as [[i'' s'' h''] C'' K'']; apply hstepn_bf with (h1 := h1) (h2 := h2) in H1; auto.
destruct H1 as [h1'' [H1]].
apply H11 in H3; apply H3 in H2.
inv H2.
destruct cf' as [[i''' s''' h'''] C''' K''']; apply hstep_ff with (h2 := h2) (h3 := h'') in H4; auto.
destruct H4 as [h3'' [H4]].
apply (Can_hstep _ _ H2).
exists h3; split; auto.
apply LStep_if_hi_dvg with (v := v); auto.
unfold hsafe; intros.
destruct cf' as [[i'' s'' h''] C'' K'']; apply hstepn_bf with (h1 := h1') (h2 := h2) in H0; auto.
destruct H0 as [h1'' [H0]].
apply H13 in H2; apply H2 in H1.
inv H1.
destruct cf' as [[i''' s''' h'''] C''' K''']; apply hstep_ff with (h2 := h2) (h3 := h'') in H3; auto.
destruct H3 as [h3' [H3]].
apply (Can_hstep _ _ H1).
intros; intro.
destruct st' as [i'' s'' h3']; apply hstepn_bf with (h1 := h1') (h2 := h2) in H0; auto.
destruct H0 as [h1'' [H0]].
contradiction (H14 n (St i'' s'' h1'')).
apply hstepn_ff with (h2 := h2) (h3 := h3) in H12; auto.
destruct H12 as [h3' [H12]]; exists h3'; split; auto.
apply LStep_while_hi with (v := v) (n := n); auto.
unfold hsafe; intros.
destruct cf' as [[i'' s'' h''] C'' K'']; apply hstepn_bf with (h1 := h1) (h2 := h2) in H1; auto.
destruct H1 as [h1'' [H1]].
apply H11 in H3; apply H3 in H2.
inv H2.
destruct cf' as [[i''' s''' h'''] C''' K''']; apply hstep_ff with (h2 := h2) (h3 := h'') in H4; auto.
destruct H4 as [h3'' [H4]].
apply (Can_hstep _ _ H2).
exists h3; split; auto.
apply LStep_while_hi_dvg with (v := v); auto.
unfold hsafe; intros.
destruct cf' as [[i'' s'' h''] C'' K'']; apply hstepn_bf with (h1 := h1') (h2 := h2) in H0; auto.
destruct H0 as [h1'' [H0]].
apply H13 in H2; apply H2 in H1.
inv H1.
destruct cf' as [[i''' s''' h'''] C''' K''']; apply hstep_ff with (h2 := h2) (h3 := h'') in H3; auto.
destruct H3 as [h3' [H3]].
apply (Can_hstep _ _ H1).
intros; intro.
destruct st' as [i'' s'' h3']; apply hstepn_bf with (h1 := h1') (h2 := h2) in H0; auto.
destruct H0 as [h1'' [H0]].
contradiction (H14 n (St i'' s'' h1'')).
Qed.
(* The labeled forward frame property, lifted to n-step executions. *)
Lemma lstepn_ff : forall n C K C' K' i s h1 h2 h3 i' s' h1' o,
mydot h1 h2 h3 -> lstepn n (Cf (St i s h1) C K) (Cf (St i' s' h1') C' K') o ->
exists h3', mydot h1' h2 h3' /\ lstepn n (Cf (St i s h3) C K) (Cf (St i' s' h3') C' K') o.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H1.
exists h3; split; auto; apply LStep_zero.
destruct cf' as [[i'' s'' h''] C'' K'']; apply lstep_ff with (h2 := h2) (h3 := h3) in H2; auto.
destruct H2 as [h3' [H2]].
assert (n0 < S n0); try omega.
destruct (H _ H4 _ _ _ _ _ _ _ _ _ _ _ _ _ H2 H3) as [h3'' [H5]]; exists h3''; split; auto.
apply LStep_succ with (cf' := Cf (St i'' s'' h3') C'' K''); auto.
Qed.
(* Execution never allocates: an address unallocated before a run is still
   unallocated after it. *)
Corollary lstepn_nonincreasing : forall n i s h i' s' h' C K C' K' o a,
lstepn n (Cf (St i s h) C K) (Cf (St i' s' h') C' K') o -> h a = None -> h' a = None.
Proof.
intros.
apply lstepn_ff with (h2 := fun n => if eq_nat_dec n a then Some (0%Z,Lo) else None) (h3 := upd h a (0%Z,Lo)) in H.
destruct H as [h3' [H]].
specialize (H a).
destruct (h3' a); decomp H; auto.
destruct (eq_nat_dec a a); inv H4.
intro a'.
unfold upd; destruct (eq_nat_dec a' a); subst; auto.
destruct (h a'); auto.
Qed.
(* Backward frame property for the labeled semantics, assuming the small
   configuration is lsafe. *)
Lemma lstep_bf : forall C K C' K' i s h1 h2 h3 i' s' h3' o,
mydot h1 h2 h3 -> lstep (Cf (St i s h3) C K) (Cf (St i' s' h3') C' K') o -> lsafe (Cf (St i s h1) C K) ->
exists h1', mydot h1' h2 h3' /\ lstep (Cf (St i s h1) C K) (Cf (St i' s' h1') C' K') o.
Proof.
intros.
inv H0.
exists h1; split; auto; apply LStep_skip.
exists h1; split; auto; apply LStep_output; auto.
exists h1; split; auto; apply LStep_assign with (l := l); auto.
exists h1; split; auto; apply LStep_read with (l1 := l1) (l2 := l2) (pf := pf); auto.
specialize (H (nat_of_Z v1 pf)); rewrite H14 in H; decomp H; auto.
specialize (H1 0 (Cf (St i' s h1) (Read x e) K') [] (LStep_zero _) (refl_equal _)).
inv H1.
inv H.
rewrite H10 in H13; inv H13.
rewrite (proof_irrelevance _ pf0 pf) in H11; rewrite H11 in H2; inv H2.
specialize (H1 0 (Cf (St i' s' h1) (Write e1 e2) K') [] (LStep_zero _) (refl_equal _)).
inv H1.
inv H0.
rewrite H9 in H13; inv H13.
rewrite (proof_irrelevance _ pf0 pf) in H11.
exists (upd h1 (nat_of_Z v1 pf) (v2, l1 \_/ l2)); split.
apply mydot_upd; auto.
specialize (H (nat_of_Z v1 pf)); destruct (h3 (nat_of_Z v1 pf)); decomp H; auto; try contradiction.
apply LStep_write with (l1 := l1) (l2 := l2); auto.
exists h1; split; auto; apply LStep_seq.
exists h1; split; auto; apply LStep_if_true; auto.
exists h1; split; auto; apply LStep_if_false; auto.
exists h1; split; auto; apply LStep_while_true; auto.
exists h1; split; auto; apply LStep_while_false; auto.
assert (hsafe (taint_vars_cf (Cf (St i s h1) (If b C1 C2) []))).
specialize (H1 0 (Cf (St i s h1) (If b C1 C2) K') [] (LStep_zero _) (refl_equal _)).
inv H1.
inv H0; auto.
rewrite H10 in H11; inv H11.
rewrite H10 in H11; inv H11.
apply hstepn_bf with (h1 := h1) (h2 := h2) in H13; auto.
destruct H13 as [h1' [H13]]; exists h1'; split; auto.
apply LStep_if_hi with (v := v) (n := n); auto.
exists h1; split; auto.
apply LStep_if_hi_dvg with (v := v); auto.
specialize (H1 0 (Cf (St i' s' h1) (If b C1 C2) K') [] (LStep_zero _) (refl_equal _)).
inv H1.
inv H0; auto.
rewrite H10 in H13; inv H13.
rewrite H10 in H13; inv H13.
intros; intro.
destruct st' as [i'' s'' h'']; apply hstepn_ff with (h2 := h2) (h3 := h3') in H0; auto.
destruct H0 as [h3'' [H0]].
contradiction (H15 n (St i'' s'' h3'')).
assert (hsafe (taint_vars_cf (Cf (St i s h1) (While b C0) []))).
specialize (H1 0 (Cf (St i s h1) (While b C0) K') [] (LStep_zero _) (refl_equal _)).
inv H1.
inv H0; auto.
rewrite H9 in H11; inv H11.
rewrite H9 in H11; inv H11.
apply hstepn_bf with (h1 := h1) (h2 := h2) in H13; auto.
destruct H13 as [h1' [H13]]; exists h1'; split; auto.
apply LStep_while_hi with (v := v) (n := n); auto.
exists h1; split; auto.
apply LStep_while_hi_dvg with (v := v); auto.
specialize (H1 0 (Cf (St i' s' h1) (While b C0) K') [] (LStep_zero _) (refl_equal _)).
inv H1.
inv H0; auto.
rewrite H9 in H13; inv H13.
rewrite H9 in H13; inv H13.
intros; intro.
destruct st' as [i'' s'' h'']; apply hstepn_ff with (h2 := h2) (h3 := h3') in H0; auto.
destruct H0 as [h3'' [H0]].
contradiction (H15 n (St i'' s'' h3'')).
Qed.
(* The labeled backward frame property, lifted to n-step executions. *)
Lemma lstepn_bf : forall n C K C' K' i s h1 h2 h3 i' s' h3' o,
mydot h1 h2 h3 -> lstepn n (Cf (St i s h3) C K) (Cf (St i' s' h3') C' K') o -> lsafe (Cf (St i s h1) C K) ->
exists h1', mydot h1' h2 h3' /\ lstepn n (Cf (St i s h1) C K) (Cf (St i' s' h1') C' K') o.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H1.
exists h1; split; auto; apply LStep_zero.
destruct cf' as [[i'' s'' h''] C'' K'']; apply lstep_bf with (h1 := h1) (h2 := h2) in H3; auto.
destruct H3 as [h1' [H3]].
assert (n0 < S n0); try omega.
assert (lsafe (Cf (St i'' s'' h1') C'' K'')).
unfold lsafe; intros.
apply (H2 (S n) _ (o0++o)); auto.
apply LStep_succ with (cf' := Cf (St i'' s'' h1') C'' K''); auto.
destruct (H _ H5 _ _ _ _ _ _ _ _ _ _ _ _ _ H3 H4 H6) as [h1'' [H7]]; exists h1''; split; auto.
apply LStep_succ with (cf' := Cf (St i'' s'' h1') C'' K''); auto.
Qed.
(* The modifies set of the residual program is contained in that of the
   original program. *)
Lemma hstep_modifies_monotonic : forall st st' C C' K K' x,
hstep (Cf st C K) (Cf st' C' K') -> In x (modifies (C'::K')) -> In x (modifies (C::K)).
Proof.
intros.
inv H; simpl in *; auto.
rewrite app_assoc; auto.
repeat rewrite in_app_iff in H0 |- *; intuit.
repeat rewrite in_app_iff in H0 |- *; intuit.
repeat rewrite in_app_iff in H0 |- *; intuit.
rewrite in_app_iff; intuit.
Qed.
Lemma hstepn_modifies_monotonic : forall n st st' C C' K K' x,
hstepn n (Cf st C K) (Cf st' C' K') -> In x (modifies (C'::K')) -> In x (modifies (C::K)).
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0; auto.
destruct cf' as [st'' C'' K'']; apply H with (x := x) in H3; auto.
apply hstep_modifies_monotonic with (x := x) in H2; auto.
Qed.
Lemma lstep_modifies_monotonic : forall st st' C C' K K' x o,
lstep (Cf st C K) (Cf st' C' K') o -> In x (modifies (C'::K')) -> In x (modifies (C::K)).
Proof.
intros.
inv H; simpl in *; auto.
rewrite app_assoc; auto.
repeat rewrite in_app_iff in H0 |- *; intuit.
repeat rewrite in_app_iff in H0 |- *; intuit.
repeat rewrite in_app_iff in H0 |- *; intuit.
rewrite in_app_iff; intuit.
repeat rewrite in_app_iff; intuit.
rewrite in_app_iff; intuit.
Qed.
Lemma lstepn_modifies_monotonic : forall n st st' C C' K K' x o,
lstepn n (Cf st C K) (Cf st' C' K') o -> In x (modifies (C'::K')) -> In x (modifies (C::K)).
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0; auto.
destruct cf' as [st'' C'' K'']; apply H with (x := x) in H3; auto.
apply lstep_modifies_monotonic with (x := x) in H2; auto.
Qed.
(* Variables outside the modifies set keep their store value across
   execution. *)
Lemma hstep_modifies_const : forall st st' C C' K K' x,
hstep (Cf st C K) (Cf st' C' K') -> ~ In x (modifies (C::K)) -> (st:store) x = (st':store) x.
Proof.
intros.
inv H; simpl; auto.
unfold upd; destruct (eq_nat_dec x x0); auto.
unfold upd; destruct (eq_nat_dec x x0); auto.
Qed.
Lemma hstepn_modifies_const : forall n st st' C C' K K' x,
hstepn n (Cf st C K) (Cf st' C' K') -> ~ In x (modifies (C::K)) -> (st:store) x = (st':store) x.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0; auto.
destruct cf' as [st'' C'' K''].
apply H with (x := x) in H3; auto.
apply hstep_modifies_const with (x := x) in H2; auto.
rewrite H2; rewrite H3; auto.
apply hstep_modifies_monotonic with (x := x) in H2; auto.
Qed.
Lemma lstep_modifies_const : forall st st' C C' K K' x o,
lstep (Cf st C K) (Cf st' C' K') o -> ~ In x (modifies (C::K)) -> (st:store) x = (st':store) x.
Proof.
intros.
inv H; simpl; auto.
unfold upd; destruct (eq_nat_dec x x0); auto.
unfold upd; destruct (eq_nat_dec x x0); auto.
apply hstepn_modifies_const with (x := x) in H10; simpl in *.
rewrite <- H10; unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies [If b C1 C2])); auto.
contradiction H0; simpl in i0; rewrite in_app_iff in i0 |- *.
destruct i0; auto.
inv H.
rewrite in_app_iff in H |- *.
destruct H; auto.
inv H.
apply hstepn_modifies_const with (x := x) in H10; simpl in *.
rewrite <- H10; unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies [While b C0])); auto.
contradiction H0; simpl in i0; rewrite in_app_iff in i0 |- *.
destruct i0; auto.
inv H.
rewrite in_app_iff in H |- *.
destruct H; auto.
inv H.
Qed.
Lemma lstepn_modifies_const : forall n st st' C C' K K' x o,
lstepn n (Cf st C K) (Cf st' C' K') o -> ~ In x (modifies (C::K)) -> (st:store) x = (st':store) x.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0; auto.
destruct cf' as [st'' C'' K''].
apply H with (x := x) in H3; auto.
apply lstep_modifies_const with (x := x) in H2; auto.
rewrite H2; rewrite H3; auto.
apply lstep_modifies_monotonic with (x := x) in H2; auto.
Qed.
(* Any store variable whose value changes during a high step ends up labeled
   Hi. *)
Lemma hstep_taints_s : forall i s h i' s' h' C K C' K' x,
hstep (Cf (St i s h) C K) (Cf (St i' s' h') C' K') ->
s x <> s' x -> exists v, s' x = Some (v,Hi).
Proof.
intros.
inv H; try solve [contradiction H0; auto].
destruct (eq_nat_dec x x0); subst.
exists v; unfold upd; destruct (eq_nat_dec x0 x0); auto.
destruct (eq_nat_dec x x0); auto; contradiction.
destruct (eq_nat_dec x x0); subst.
exists v2; unfold upd; destruct (eq_nat_dec x0 x0); auto.
destruct (eq_nat_dec x x0); auto; contradiction.
Qed.
Lemma hstepn_taints_s : forall n i s h i' s' h' C K C' K' x,
hstepn n (Cf (St i s h) C K) (Cf (St i' s' h') C' K') ->
s x <> s' x -> exists v, s' x = Some (v,Hi).
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0.
destruct cf' as [[i'' s'' h''] C'' K''].
destruct (opt_eq_dec val_eq_dec (s'' x) (s' x)).
rewrite <- e in H1 |- *.
apply hstep_taints_s with (x := x) in H2; auto.
assert (n0 < S n0); try omega.
apply (H _ H0 _ _ _ _ _ _ _ _ _ _ _ H3 n).
Qed.
(* Any heap cell whose contents change during a high step ends up labeled
   Hi. *)
Lemma hstep_taints_h : forall i s h i' s' h' C K C' K' a,
hstep (Cf (St i s h) C K) (Cf (St i' s' h') C' K') ->
h a <> h' a -> exists v, h' a = Some (v,Hi).
Proof.
intros.
inv H; try solve [contradiction H0; auto].
destruct (eq_nat_dec (nat_of_Z v1 pf) a); subst.
exists v2; unfold upd; destruct (eq_nat_dec (nat_of_Z v1 pf) (nat_of_Z v1 pf)); auto.
destruct (eq_nat_dec a (nat_of_Z v1 pf)); auto; subst.
Qed.
Lemma hstepn_taints_h : forall n i s h i' s' h' C K C' K' a,
hstepn n (Cf (St i s h) C K) (Cf (St i' s' h') C' K') ->
h a <> h' a -> exists v, h' a = Some (v,Hi).
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0.
destruct cf' as [[i'' s'' h''] C'' K''].
destruct (opt_eq_dec val_eq_dec (h'' a) (h' a)).
rewrite <- e in H1 |- *.
apply hstep_taints_h with (a := a) in H2; auto.
assert (n0 < S n0); try omega.
apply (H _ H0 _ _ _ _ _ _ _ _ _ _ _ H3 n).
Qed.
(* The label-map component i of the state is invariant under execution. *)
Proposition hstep_i_const : forall i s h i' s' h' C C' K K',
hstep (Cf (St i s h) C K) (Cf (St i' s' h') C' K') -> i' = i.
Proof.
intros.
inv H; auto.
Qed.
Proposition hstepn_i_const : forall n i s h i' s' h' C C' K K',
hstepn n (Cf (St i s h) C K) (Cf (St i' s' h') C' K') -> i' = i.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0; auto.
destruct cf' as [[i'' s'' h''] C'' K'']; apply H in H2; subst; auto.
apply hstep_i_const in H1; auto.
Qed.
Proposition lstep_i_const : forall i s h i' s' h' C C' K K' o,
lstep (Cf (St i s h) C K) (Cf (St i' s' h') C' K') o -> i' = i.
Proof.
intros.
inv H; auto.
apply hstepn_i_const in H11; auto.
apply hstepn_i_const in H11; auto.
Qed.
Proposition lstepn_i_const : forall n i s h i' s' h' C C' K K' o,
lstepn n (Cf (St i s h) C K) (Cf (St i' s' h') C' K') o -> i' = i.
Proof.
induction n using (well_founded_induction lt_wf); intros.
inv H0; auto.
destruct cf' as [[i'' s'' h''] C'' K'']; apply H in H2; subst; auto.
apply lstep_i_const in H1; auto.
Qed.
Close Scope Z_scope.
(* Low observational equivalence of stores: labels agree everywhere, and
   values agree wherever the label is Lo. *)
Definition obs_eq_s (s1 s2 : store) : Prop := forall x,
match s1 x, s2 x with
| None, None => True
| Some (v1,l1), Some (v2,l2) => l1 = l2 /\ (l1 = Lo -> v1 = v2)
| _, _ => False
end.
(* Low observational equivalence of heaps: cells that are Lo in both heaps
   hold equal values. *)
Definition obs_eq_h (h1 h2 : heap) : Prop := forall n,
match h1 n, h2 n with
| Some (v1,l1), Some (v2,l2) => l1 = Lo -> l2 = Lo -> v1 = v2
| _, _ => True
end.
Definition obs_eq (st1 st2 : state) : Prop := (st1:lmap) = (st2:lmap) /\ obs_eq_s st1 st2 /\ obs_eq_h st1 st2.
Proposition obs_eq_s_refl : forall s, obs_eq_s s s.
Proof.
unfold obs_eq_s; intros.
destruct (s x) as [[v l]|]; auto.
Qed.
Proposition obs_eq_h_refl : forall h, obs_eq_h h h.
Proof.
unfold obs_eq_h; intros.
destruct (h n) as [[v l]|]; auto.
Qed.
Proposition obs_eq_refl : forall st, obs_eq st st.
Proof.
unfold obs_eq; intuition.
apply obs_eq_s_refl.
apply obs_eq_h_refl.
Qed.
Proposition obs_eq_s_sym : forall s1 s2, obs_eq_s s1 s2 -> obs_eq_s s2 s1.
Proof.
unfold obs_eq_s; intros.
specialize (H x); destruct (s1 x) as [[v1 l1]|]; destruct (s2 x) as [[v2 l2]|]; auto.
destruct H; split; auto; intros.
subst; intuit.
Qed.
Proposition obs_eq_h_sym : forall h1 h2, obs_eq_h h1 h2 -> obs_eq_h h2 h1.
Proof.
unfold obs_eq_h; intros.
specialize (H n); destruct (h1 n) as [[v1 l1]|]; destruct (h2 n) as [[v2 l2]|]; intuit.
Qed.
Proposition obs_eq_sym : forall st1 st2, obs_eq st1 st2 -> obs_eq st2 st1.
Proof.
unfold obs_eq; intuition.
apply obs_eq_s_sym; auto.
apply obs_eq_h_sym; auto.
Qed.
(* Expression evaluation is compatible with observational equivalence: the
   two results carry the same label, and equal values when that label is Lo. *)
Lemma obs_eq_exp : forall e i1 s1 h1 i2 s2 h2, obs_eq (St i1 s1 h1) (St i2 s2 h2) ->
match eden e i1 s1, eden e i2 s2 with
| None, None => True
| Some (v1,l1), Some (v2,l2) => l1 = l2 /\ (l1 = Lo -> v1 = v2)
| _, _ => False
end.
Proof.
induction e; simpl; intros; auto.
unfold obs_eq in H; decomp H.
apply H2.
unfold obs_eq in H; decomp H; simpl in *; subst; auto.
specialize (IHe1 _ _ _ _ _ _ H); specialize (IHe2 _ _ _ _ _ _ H).
destruct (eden e1 i1 s1) as [[v1 l1]|]; destruct (eden e2 i2 s2) as [[v2 l2]|];
destruct (eden e1 i2 s2) as [[v1' l1']|]; destruct (eden e2 i1 s1) as [[v2' l2']|]; simpl in *; intuit.
destruct IHe1; destruct IHe2; destruct H; subst; split; auto; intros.
glub_simpl H0; rewrite H1; auto; rewrite H3; auto.
Qed.
Lemma obs_eq_bexp : forall b i1 s1 h1 i2 s2 h2, obs_eq (St i1 s1 h1) (St i2 s2 h2) ->
match bden b i1 s1, bden b i2 s2 with
| None, None => True
| Some (v1,l1), Some (v2,l2) => l1 = l2 /\ (l1 = Lo -> v1 = v2)
| _, _ => False
end.
Proof.
induction b; simpl; intros; auto.
dup H; apply (obs_eq_exp e) in H.
apply (obs_eq_exp e0) in H0.
destruct (eden e i1 s1) as [[v1 l1]|]; destruct (eden e0 i2 s2) as [[v2 l2]|];
destruct (eden e i2 s2) as [[v1' l1']|]; destruct (eden e0 i1 s1) as [[v2' l2']|]; simpl in *; intuit.
destruct H; destruct H0; subst; split; auto; intros.
glub_simpl H; rewrite H1; auto; rewrite H2; auto.
apply IHb in H.
destruct (bden b i1 s1) as [[v1 l1]|].
destruct (bden b i2 s2) as [[v2 l2]|]; auto; simpl.
destruct H; subst; split; auto; intros.
rewrite H0; auto.
destruct (bden b i2 s2); intuit.
dup H.
apply IHb1 in H; apply IHb2 in H0.
destruct (bden b2 i1 s1) as [[v1 l1]|]; destruct (bden b3 i2 s2) as [[v2 l2]|];
destruct (bden b2 i2 s2) as [[v1' l1']|]; destruct (bden b3 i1 s1) as [[v2' l2']|]; simpl in *; intuit.
destruct H; destruct H0; subst; split; auto; intros.
glub_simpl H; rewrite H1; auto; rewrite H2; auto.
Qed.
(* Label expressions: constant labels, label variables, and joins. *)
Inductive lexp :=
| Lbl : glbl -> lexp
| Lblvar : nat -> lexp
| Lub : lexp -> lexp -> lexp.
Definition toLexp (l : glbl) : lexp := Lbl l.
Coercion toLexp : glbl >-> lexp.
Fixpoint lden (L : lexp) (i : lmap) : glbl :=
match L with
| Lbl l => l
| Lblvar X => snd i X
| Lub L1 L2 => glub (lden L1 i) (lden L2 i)
end.
Proposition lden_lblvars : forall L i1 i2 i, lden L (i1,i) = lden L (i2,i).
Proof.
induction L; simpl; auto; intros.
rewrite (IHL1 _ i2); rewrite (IHL2 _ i2); auto.
Qed.
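(* Illustrative only: label expressions compose constants, label variables,
   and joins; e.g. the following is a well-formed lexp, whose lden resolves
   Lblvar 0 through the label map. *)
Check (Lub (Lbl Lo) (Lblvar 0)).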
(* Assertion syntax of the separation logic, including label constraints. *)
Inductive assert :=
| TrueA : assert
| FalseA : assert
| Emp : assert
| Allocated : exp -> assert
| Mapsto : exp -> exp -> lexp -> assert
| BoolExp : bexp -> assert
| EqLbl : lexp -> lexp -> assert
| LblEq : var -> lexp -> assert
| LblLeq : var -> lexp -> assert
| LblLeq' : lexp -> var -> assert
| LblExp : exp -> lexp -> assert
| LblBexp : bexp -> lexp -> assert
| Conj : assert -> assert -> assert
| Disj : assert -> assert -> assert
| Star : assert -> assert -> assert.
(* Program variables occurring in an assertion. *)
Fixpoint vars (P : assert) (x : var) : bool :=
match P with
| TrueA => false
| FalseA => false
| Emp => false
| Allocated e => expvars e x
| Mapsto e e' L => orb (expvars e x) (expvars e' x)
| BoolExp b => bexpvars b x
| EqLbl L1 L2 => false
| LblEq y L => if eq_nat_dec y x then true else false
| LblLeq y L => if eq_nat_dec y x then true else false
| LblLeq' L y => if eq_nat_dec y x then true else false
| LblExp e L => expvars e x
| LblBexp b L => bexpvars b x
| Conj P Q => orb (vars P x) (vars Q x)
| Disj P Q => orb (vars P x) (vars Q x)
| Star P Q => orb (vars P x) (vars Q x)
end.
Notation " P `AND` Q " := (Conj P Q) (at level 91, left associativity).
Notation " P `OR` Q " := (Disj P Q) (at level 91, left associativity).
Notation " P ** Q " := (Star P Q) (at level 91, left associativity).
(* Substitute the expression ex for the variable x in e. *)
Fixpoint ereplace e x ex : exp :=
match e with
| Var y => if eq_nat_dec y x then ex else Var y
| BinOp bop e1 e2 => BinOp bop (ereplace e1 x ex) (ereplace e2 x ex)
| _ => e
end.
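(* Illustrative only (the name ereplace_ex is new): ereplace performs plain
   textual substitution, e.g. replacing variable 0 by (Var 1) in (Var 0)
   yields (Var 1), by computation. *)
Example ereplace_ex : ereplace (Var 0) 0 (Var 1) = Var 1.
Proof. reflexivity. Qed.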
Proposition ereplace_deletes : forall e x ex, expvars ex x = false -> expvars (ereplace e x ex) x = false.
Proof.
induction e; simpl; intros; auto.
destruct (eq_nat_dec v x); subst; simpl; auto.
destruct (eq_nat_dec v x); try contradiction; auto.
rewrite (IHe1 _ _ H); rewrite (IHe2 _ _ H); auto.
Qed.
Proposition eden_ereplace : forall e x ex i s, eden (Var x) i s = eden ex i s -> eden (ereplace e x ex) i s = eden e i s.
Proof.
induction e; simpl; intros; auto.
destruct (eq_nat_dec v x); subst; auto.
rewrite (IHe1 _ _ _ _ H); rewrite (IHe2 _ _ _ _ H); auto.
Qed.
Proposition edenZ_ereplace : forall e x ex i s, edenZ (Var x) i s = edenZ ex i s -> edenZ (ereplace e x ex) i s = edenZ e i s.
Proof.
induction e; simpl; intros; auto.
destruct (eq_nat_dec v x); subst; auto.
rewrite (IHe1 _ _ _ _ H); rewrite (IHe2 _ _ _ _ H); auto.
Qed.
(* Denotation of assertions as predicates on states. *)
Fixpoint aden (P : assert) (st : state) : Prop :=
match st with St i s h =>
match P with
| TrueA => True
| FalseA => False
| Emp => h = fun _ => None
| Allocated e => exists v : Z, exists pf : (v>=0)%Z, edenZ e i s = Some v /\
exists v', exists l', h = fun n => if eq_nat_dec n (nat_of_Z v pf) then Some (v',l') else None
| Mapsto e e' L => exists v : Z, exists pf : (v>=0)%Z, edenZ e i s = Some v /\ exists v', edenZ e' i s = Some v' /\
h = fun n => if eq_nat_dec n (nat_of_Z v pf) then Some (v', lden L i) else None
| BoolExp b => bdenZ b i s = Some true
| EqLbl L1 L2 => lden L1 i = lden L2 i
| LblEq x L => exists v, s x = Some (v, lden L i)
| LblLeq x L => exists v, exists l, s x = Some (v,l) /\ gleq l (lden L i) = true
| LblLeq' L x => exists v, exists l, s x = Some (v,l) /\ gleq (lden L i) l = true
| LblExp e L => exists v, eden e i s = Some (v, lden L i)
| LblBexp b L => exists v, bden b i s = Some (v, lden L i)
| Conj P Q => aden P st /\ aden Q st
| Disj P Q => aden P st \/ aden Q st
| Star P Q => exists h1, exists h2, mydot h1 h2 h /\ aden P (St i s h1) /\ aden Q (St i s h2)
end
end.
(* Relational denotation: P holds in both of two observationally equivalent
   states. *)
Definition aden2 (P : assert) (st1 st2 : state) : Prop := aden P st1 /\ aden P st2 /\ obs_eq st1 st2.
Definition implies (P Q : assert) := forall st, aden P st -> aden Q st.
(* Variables whose label is constrained by the assertion. *)
Fixpoint haslbl (P : assert) (x : var) : bool :=
match P with
| LblEq y L => if eq_nat_dec y x then true else false
| LblLeq y L => if eq_nat_dec y x then true else false
| LblLeq' L y => if eq_nat_dec y x then true else false
| LblExp e L => expvars e x
| LblBexp b L => bexpvars b x
| Conj P Q => orb (haslbl P x) (haslbl Q x)
| Disj P Q => orb (haslbl P x) (haslbl Q x)
| Star P Q => orb (haslbl P x) (haslbl Q x)
| _ => false
end.
Proposition eden_upd : forall e x i s v l, expvars e x = false -> eden e i (upd s x (v,l)) = eden e i s.
Proof.
induction e; simpl; intros; auto.
unfold upd; destruct (eq_nat_dec v x); inv H; auto.
rewrite IHe1.
rewrite IHe2; auto.
destruct (expvars e1 x); destruct (expvars e2 x); inv H; auto.
destruct (expvars e1 x); inv H; auto.
Qed.
Proposition edenZ_upd : forall e x i s v l, expvars e x = false -> edenZ e i (upd s x (v,l)) = edenZ e i s.
Proof.
induction e; simpl; intros; auto.
unfold upd; destruct (eq_nat_dec v x); inv H; auto.
rewrite IHe1.
rewrite IHe2; auto.
destruct (expvars e1 x); destruct (expvars e2 x); inv H; auto.
destruct (expvars e1 x); inv H; auto.
Qed.
Proposition bden_upd : forall b x i s v l, bexpvars b x = false -> bden b i (upd s x (v,l)) = bden b i s.
Proof.
induction b; simpl; intros; auto.
repeat rewrite eden_upd; auto.
destruct (expvars e x); destruct (expvars e0 x); inv H; auto.
destruct (expvars e x); inv H; auto.
rewrite IHb; auto.
rewrite IHb1.
rewrite IHb2; auto.
destruct (bexpvars b2 x); destruct (bexpvars b3 x); inv H; auto.
destruct (bexpvars b2 x); inv H; auto.
Qed.
Proposition bdenZ_upd : forall b x i s v l, bexpvars b x = false -> bdenZ b i (upd s x (v,l)) = bdenZ b i s.
Proof.
induction b; simpl; intros; auto.
repeat rewrite edenZ_upd; auto.
destruct (expvars e x); destruct (expvars e0 x); inv H; auto.
destruct (expvars e x); inv H; auto.
rewrite IHb; auto.
rewrite IHb1.
rewrite IHb2; auto.
destruct (bexpvars b2 x); destruct (bexpvars b3 x); inv H; auto.
destruct (bexpvars b2 x); inv H; auto.
Qed.
Proposition aden_upd : forall P x i s h v l, vars P x = false -> aden P (St i s h) -> aden P (St i (upd s x (v,l)) h).
Proof.
induction P; simpl; intros; auto.
rewrite edenZ_upd; auto.
apply orb_false_elim in H.
repeat rewrite edenZ_upd; intuit.
rewrite bdenZ_upd; auto.
unfold upd; destruct (eq_nat_dec v x); inv H; auto.
unfold upd; destruct (eq_nat_dec v x); inv H; auto.
unfold upd; destruct (eq_nat_dec v x); inv H; auto.
rewrite eden_upd; auto.
rewrite bden_upd; auto.
apply orb_false_elim in H; intuit.
apply orb_false_elim in H; intuit.
apply orb_false_elim in H; destruct H0 as [h1 [h2]]; exists h1; exists h2; intuit.
Qed.
Proposition eden_vars_same : forall e i s s',
(forall x, expvars e x = true -> s x = s' x) -> eden e i s = eden e i s'.
Proof.
induction e; simpl; intros; auto.
apply H; destruct (eq_nat_dec v v); auto.
rewrite IHe1 with (s' := s'); intros.
rewrite IHe2 with (s' := s'); auto; intros.
apply H; rewrite H0; destruct (expvars e1 x); auto.
apply H; rewrite H0; auto.
Qed.
Proposition edenZ_vars_same : forall e i s s',
(forall x, expvars e x = true -> s x = s' x) -> edenZ e i s = edenZ e i s'.
Proof.
induction e; simpl; intros; auto.
rewrite H; destruct (eq_nat_dec v v); auto.
rewrite IHe1 with (s' := s'); intros.
rewrite IHe2 with (s' := s'); auto; intros.
apply H; rewrite H0; destruct (expvars e1 x); auto.
apply H; rewrite H0; auto.
Qed.
Proposition bden_vars_same : forall b i s s',
(forall x, bexpvars b x = true -> s x = s' x) -> bden b i s = bden b i s'.
Proof.
induction b; simpl; intros; auto.
rewrite eden_vars_same with (s' := s'); intros.
rewrite (eden_vars_same e0) with (s' := s'); auto; intros.
apply H; rewrite H0; destruct (expvars e x); auto.
apply H; rewrite H0; auto.
rewrite IHb with (s' := s'); auto.
rewrite IHb1 with (s' := s'); intros.
rewrite IHb2 with (s' := s'); auto; intros.
apply H; rewrite H0; destruct (bexpvars b2 x); auto.
apply H; rewrite H0; auto.
Qed.
Proposition bdenZ_vars_same : forall b i s s',
(forall x, bexpvars b x = true -> s x = s' x) -> bdenZ b i s = bdenZ b i s'.
Proof.
induction b; simpl; intros; auto.
rewrite edenZ_vars_same with (s' := s'); intros.
rewrite (edenZ_vars_same e0) with (s' := s'); auto; intros.
apply H; rewrite H0; destruct (expvars e x); auto.
apply H; rewrite H0; auto.
rewrite IHb with (s' := s'); auto.
rewrite IHb1 with (s' := s'); intros.
rewrite IHb2 with (s' := s'); auto; intros.
apply H; rewrite H0; destruct (bexpvars b2 x); auto.
apply H; rewrite H0; auto.
Qed.
Proposition aden_vars_same : forall P i s s' h,
(forall x, vars P x = true -> s x = s' x) -> aden P (St i s h) -> aden P (St i s' h).
Proof.
induction P; simpl; intros; auto.
rewrite edenZ_vars_same with (s' := s') in H0; auto.
rewrite edenZ_vars_same with (s' := s') in H0; intuit.
rewrite (edenZ_vars_same e0) with (s' := s') in H0; intuit.
rewrite bdenZ_vars_same with (s' := s') in H0; auto.
rewrite <- H; auto.
destruct (eq_nat_dec v v); auto.
rewrite <- H; auto.
destruct (eq_nat_dec v v); auto.
rewrite <- H; auto.
destruct (eq_nat_dec v v); auto.
rewrite eden_vars_same with (s' := s') in H0; auto.
rewrite bden_vars_same with (s' := s') in H0; auto.
split; [apply IHP1 with (s := s) | apply IHP2 with (s := s)]; intuit.
destruct H0; [left; apply IHP1 with (s := s) | right; apply IHP2 with (s := s)]; intuit.
destruct H0 as [h1 [h2]]; exists h1; exists h2; intuition.
apply IHP1 with (s := s); intuit.
apply IHP2 with (s := s); intuit.
Qed.
(* If [e] evaluates successfully while [x] is unbound in the store, then
   [x] cannot occur in [e]. *)
Proposition expvars_none : forall e i s x v l, eden e i s = Some (v,l) -> s x = None -> expvars e x = false.
Proof.
induction e; simpl; intros; auto.
destruct (eq_nat_dec v x); subst; auto.
rewrite H in H0; inv H0.
case_eq (eden e1 i s); intros.
case_eq (eden e2 i s); intros.
destruct v1 as [v2 l2]; destruct v0 as [v1 l1].
rewrite H1 in H; rewrite H2 in H; inv H.
apply IHe1 with (x := x) in H1; auto.
apply IHe2 with (x := x) in H2; auto.
rewrite H1; rewrite H2; auto.
rewrite H2 in H; destruct (eden e1 i s); inv H.
rewrite H1 in H; inv H.
Qed.
Proposition bexpvars_none : forall b i s x v l, bden b i s = Some (v,l) -> s x = None -> bexpvars b x = false.
Proof.
induction b; simpl; intros; auto.
case_eq (eden e i s); intros.
case_eq (eden e0 i s); intros.
destruct v1 as [v2 l2]; destruct v0 as [v1 l1].
rewrite H1 in H; rewrite H2 in H; inv H.
apply expvars_none with (x := x) in H1; auto.
apply expvars_none with (x := x) in H2; auto.
rewrite H1; rewrite H2; auto.
rewrite H2 in H; destruct (eden e i s); inv H.
rewrite H1 in H; inv H.
case_eq (bden b i s); intros.
destruct p; apply IHb with (x := x) in H1; auto.
rewrite H1 in H; inv H.
case_eq (bden b2 i s); intros.
case_eq (bden b3 i s); intros.
destruct p0 as [v2 l2]; destruct p as [v1 l1].
rewrite H1 in H; rewrite H2 in H; inv H.
apply IHb1 with (x := x) in H1; auto.
apply IHb2 with (x := x) in H2; auto.
rewrite H1; rewrite H2; auto.
rewrite H2 in H; destruct (bden b2 i s); inv H.
rewrite H1 in H; inv H.
Qed.
(* Binding a previously unbound variable preserves any assertion that
   already holds. *)
Proposition aden_upd_none : forall P x i s h v l, s x = None -> aden P (St i s h) -> aden P (St i (upd s x (v,l)) h).
Proof.
induction P; simpl; intros; intuit.
rewrite edenZ_upd; auto.
destruct H0 as [v1 [pf [H0]]].
rewrite edenZ_some in H0; destruct H0 as [l1].
apply expvars_none with (x := x) in H0; auto.
repeat rewrite edenZ_upd; auto.
destruct H0 as [v1 [pf [H0 [v2 [H1]]]]].
rewrite edenZ_some in H1; destruct H1 as [l2].
apply expvars_none with (x := x) in H1; auto.
destruct H0 as [v1 [pf [H0]]].
rewrite edenZ_some in H0; destruct H0 as [l1].
apply expvars_none with (x := x) in H0; auto.
rewrite bdenZ_upd; auto.
rewrite bdenZ_some in H0; destruct H0 as [l1].
apply bexpvars_none with (x := x) in H0; auto.
unfold upd; destruct (eq_nat_dec v x); subst; auto.
destruct H0 as [v]; rewrite H in H0; inv H0.
unfold upd; destruct (eq_nat_dec v x); subst; auto.
destruct H0 as [v1 [l1 [H0]]]; rewrite H in H0; inv H0.
unfold upd; destruct (eq_nat_dec v x); subst; auto.
destruct H0 as [v1 [l1 [H0]]]; rewrite H in H0; inv H0.
rewrite eden_upd; auto.
destruct H0 as [v1]; apply expvars_none with (x := x) in H0; auto.
rewrite bden_upd; auto.
destruct H0 as [v1]; apply bexpvars_none with (x := x) in H0; auto.
destruct H0 as [h1 [h2]]; exists h1; exists h2; intuit.
Qed.
(* Tainting the variables modified by [K] preserves the value of an
   expression and can only raise its label. *)
Proposition eden_taint_vars : forall e i s K v l, eden e i s = Some (v,l) ->
exists l', eden e i (taint_vars K s) = Some (v,l') /\ l <<= l'.
Proof.
induction e; simpl; intros.
unfold taint_vars.
destruct (In_dec eq_nat_dec v (modifies K)).
exists Hi; rewrite H; split; auto.
destruct l; auto.
exists l; rewrite H; split; auto.
destruct l; auto.
inv H; exists Lo; split; auto.
inv H; exists Lo; split; auto.
case_eq (eden e1 i s); case_eq (eden e2 i s); intros.
rewrite H1 in H; rewrite H0 in H; simpl in H; inv H.
destruct v1 as [v1 l1]; destruct v0 as [v2 l2].
apply IHe1 with (K := K) in H1; apply IHe2 with (K := K) in H0.
destruct H1 as [l1' [H1]]; destruct H0 as [l2' [H0]].
exists (l1' \_/ l2'); simpl; split.
rewrite H1; rewrite H0; simpl; auto.
destruct l1; destruct l1'; destruct l2; destruct l2'; intuit.
rewrite H1 in H; rewrite H0 in H; inv H.
rewrite H1 in H; rewrite H0 in H; inv H.
rewrite H1 in H; rewrite H0 in H; inv H.
Qed.
Proposition bden_taint_vars : forall b i s K v l, bden b i s = Some (v,l) ->
exists l', bden b i (taint_vars K s) = Some (v,l') /\ l <<= l'.
Proof.
induction b; simpl; intros.
inv H; exists Lo; split; auto.
inv H; exists Lo; split; auto.
case_eq (eden e i s); case_eq (eden e0 i s); intros.
destruct v1 as [v1 l1]; destruct v0 as [v2 l2].
rewrite H1 in H; rewrite H0 in H; inv H.
apply eden_taint_vars with (K := K) in H1; apply eden_taint_vars with (K := K) in H0.
destruct H1 as [l1' [H1]]; destruct H0 as [l2' [H0]].
exists (l1' \_/ l2'); split.
rewrite H1; rewrite H0; auto.
destruct l1; destruct l1'; destruct l2; destruct l2'; intuit.
rewrite H1 in H; rewrite H0 in H; inv H.
rewrite H1 in H; rewrite H0 in H; inv H.
rewrite H1 in H; rewrite H0 in H; inv H.
case_eq (bden b i s); intros.
destruct p as [v' l']; rewrite H0 in H; inv H.
apply IHb with (K := K) in H0; destruct H0 as [l' [H0]].
exists l'; split; auto.
rewrite H0; auto.
rewrite H0 in H; inv H.
case_eq (bden b2 i s); case_eq (bden b3 i s); intros.
rewrite H1 in H; rewrite H0 in H; simpl in H; inv H.
destruct p0 as [v1 l1]; destruct p as [v2 l2].
apply IHb1 with (K := K) in H1; apply IHb2 with (K := K) in H0.
destruct H1 as [l1' [H1]]; destruct H0 as [l2' [H0]].
exists (l1' \_/ l2'); simpl; split.
rewrite H1; rewrite H0; simpl; auto.
destruct l1; destruct l1'; destruct l2; destruct l2'; intuit.
rewrite H1 in H; rewrite H0 in H; inv H.
rewrite H1 in H; rewrite H0 in H; inv H.
rewrite H1 in H; rewrite H0 in H; inv H.
Qed.
(* The integer denotation ignores labels: relabeling a bound variable
   does not change [edenZ]. *)
Proposition edenZ_ignores_lbl : forall e i s x v l l',
s x = Some (v,l) -> edenZ e i (upd s x (v,l')) = edenZ e i s.
Proof.
induction e; simpl; intros; auto.
unfold upd; destruct (eq_nat_dec v x); subst; auto.
rewrite H; auto.
rewrite IHe1 with (l := l); auto.
rewrite IHe2 with (l := l); auto.
Qed.
Proposition bdenZ_ignores_lbl : forall b i s x v l l',
s x = Some (v,l) -> bdenZ b i (upd s x (v,l')) = bdenZ b i s.
Proof.
induction b; simpl; intros; auto.
repeat rewrite edenZ_ignores_lbl with (l := l); auto.
rewrite IHb with (l := l); auto.
rewrite IHb1 with (l := l); auto.
rewrite IHb2 with (l := l); auto.
Qed.
(* If [P] makes no claim about the label of [x], then relabeling [x]
   preserves [P]. *)
Proposition aden_haslbl : forall P x i s h v l l', haslbl P x = false -> s x = Some (v,l) ->
aden P (St i s h) -> aden P (St i (upd s x (v,l')) h).
Proof.
induction P; simpl; intros; auto.
rewrite edenZ_ignores_lbl with (l := l); auto.
repeat rewrite edenZ_ignores_lbl with (l := l0); auto.
rewrite bdenZ_ignores_lbl with (l := l); auto.
unfold upd; destruct (eq_nat_dec v x); auto; inv H.
unfold upd; destruct (eq_nat_dec v x); auto; inv H.
unfold upd; destruct (eq_nat_dec v x); auto; inv H.
rewrite eden_upd; auto.
rewrite bden_upd; auto.
apply orb_false_elim in H; destruct H; destruct H1; split.
apply IHP1 with (l := l); auto.
apply IHP2 with (l := l); auto.
apply orb_false_elim in H; destruct H; destruct H1; [left | right].
apply IHP1 with (l := l); auto.
apply IHP2 with (l := l); auto.
apply orb_false_elim in H; destruct H; destruct H1 as [h1 [h2]]; decomp H1.
exists h1; exists h2; repeat (split; auto).
apply IHP1 with (l := l); auto.
apply IHP2 with (l := l); auto.
Qed.
(* Unless [l1] is already below [l2], strengthen [P] with the claim that
   every variable in [xs] carries a label at or above [glub l1 l2]. *)
Definition taint_vars_assert (P : assert) (xs : list var) (l1 l2 : glbl) : assert :=
if gleq l1 l2 then P else P `AND` fold_right (fun x P => P `AND` LblLeq' (glub l1 l2) x) TrueA xs.
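(* Illustration (informal sketch, assuming a two-point lattice in which
   [gleq Hi Lo = false]): for [xs = [x; y]] the [else] branch unfolds to

     P `AND` ((TrueA `AND` LblLeq' (glub Hi Lo) y) `AND` LblLeq' (glub Hi Lo) x)

   i.e. [P] together with a lower bound of [glub l1 l2] on the label of
   each listed variable; when [gleq l1 l2 = true] the result is just [P]. *)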
Proposition aden_fold : forall (f : var -> assert) xs st,
(forall x, In x xs -> aden (f x) st) -> aden (fold_right (fun x P => P `AND` f x) TrueA xs) st.
Proof.
induction xs; destruct st as [i s h]; simpl; intros; auto.
Qed.
Proposition aden_fold_inv : forall (f : var -> assert) xs st,
aden (fold_right (fun x P => P `AND` f x) TrueA xs) st -> forall x, In x xs -> aden (f x) st.
Proof.
induction xs; destruct st as [i s h]; simpl; intros; intuit.
destruct H0; subst; intuit.
Qed.
(* [no_lbls P xs] holds when [P] constrains the label of no variable in [xs]. *)
Fixpoint no_lbls (P : assert) (xs : list var) :=
match xs with
| [] => true
| x::xs => andb (negb (haslbl P x)) (no_lbls P xs)
end.
(* Two stores are [same_values] on [xs] when they agree on the values
   (though not necessarily the labels) of the variables in [xs] and are
   identical elsewhere. *)
Definition same_values (s1 s2 : store) (xs : list var) := forall x,
if In_dec eq_nat_dec x xs then
match s1 x, s2 x with
| Some (v1,_), Some (v2,_) => v1 = v2
| Some _, None => False
| _, _ => True
end
else s1 x = s2 x.
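(* For instance, stores with [s1 x = Some (3, Lo)] and [s2 x = Some (3, Hi)]
   are [same_values] on [[x]] provided they agree everywhere else: at the
   listed variables only values are compared, and a binding present in
   [s1] must also be present in [s2]. *)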
(* A label-insensitive assertion cannot tell apart stores related by
   [same_values]. *)
Proposition no_lbls_same_values : forall P xs i s1 s2 h,
no_lbls P xs = true -> same_values s1 s2 xs -> aden P (St i s1 h) -> aden P (St i s2 h).
Proof.
induction xs; simpl; intros.
assert (s1 = s2).
extensionality x; specialize (H0 x); simpl in H0; auto.
subst; auto.
rewrite andb_true_iff in H; destruct H.
destruct (In_dec eq_nat_dec a xs).
apply IHxs with (s1 := s1); auto; intro x; specialize (H0 x).
simpl in H0.
destruct (eq_nat_dec a x); subst.
destruct (In_dec eq_nat_dec x xs); try contradiction; auto.
destruct (In_dec eq_nat_dec x xs); auto.
dup H0; specialize (H0 a).
simpl in H0; destruct (eq_nat_dec a a).
case_eq (s1 a); case_eq (s2 a); intros.
destruct v as [v2 l2]; destruct v0 as [v1 l1].
rewrite H4 in H0; rewrite H5 in H0; subst.
apply IHxs with (s1 := upd s1 a (v2,l2)); auto.
intro x; specialize (H3 x); simpl in H3; unfold upd.
destruct (In_dec eq_nat_dec x xs).
destruct (eq_nat_dec a x); destruct (eq_nat_dec x a); try subst; try contradiction; auto.
subst x; rewrite H5 in H3; auto.
destruct (eq_nat_dec a x); destruct (eq_nat_dec x a); try subst; try contradiction; auto.
subst x; auto.
apply aden_haslbl with (l := l1); auto.
destruct (haslbl P a); auto; inv H.
destruct v; rewrite H4 in H0; rewrite H5 in H0; inv H0.
destruct v as [v l]; apply IHxs with (s1 := upd s1 a (v,l)); auto.
intro x; specialize (H3 x); simpl in H3; unfold upd.
destruct (In_dec eq_nat_dec x xs).
destruct (eq_nat_dec a x); destruct (eq_nat_dec x a); try subst; try contradiction; auto.
subst x; rewrite H4; auto.
destruct (eq_nat_dec a x); destruct (eq_nat_dec x a); try subst; try contradiction; auto.
subst x; auto.
apply IHxs with (s1 := s1); auto; intro x; specialize (H3 x).
simpl in H3.
destruct (eq_nat_dec a x); subst.
destruct (In_dec eq_nat_dec x xs); try contradiction.
rewrite H4; rewrite H5; auto.
destruct (In_dec eq_nat_dec x xs); auto.
Qed.
Proposition taint_vars_same_values : forall K s, same_values s (taint_vars K s) (modifies K).
Proof.
intros; intro x; unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies K)); destruct (s x) as [[v l]|]; auto.
Qed.
Proposition no_lbls_taint_vars : forall P K i s h,
no_lbls P (modifies K) = true -> aden P (St i s h) -> aden P (St i (taint_vars K s) h).
Proof.
intros; apply no_lbls_same_values with (xs := modifies K) (s1 := s); auto.
apply taint_vars_same_values.
Qed.
(* In a raised context, satisfying the strengthened assertion forces the
   store to be a fixed point of [taint_vars]. *)
Proposition taint_vars_assert_inv : forall P K l l' i s h, gleq l l' = false ->
aden (taint_vars_assert P (modifies K) l l') (St i s h) -> s = taint_vars K s.
Proof.
unfold taint_vars_assert; intros.
rewrite H in H0; simpl in H0; destruct H0.
extensionality x; unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies K)); auto.
apply aden_fold_inv with (x := x) in H1; auto.
simpl in H1.
destruct H1 as [vx [lx [H1]]].
rewrite H1.
destruct l; destruct l'; destruct lx; auto; inv H2; inv H.
Qed.
Proposition taint_vars_idempotent : forall K s, taint_vars K (taint_vars K s) = taint_vars K s.
Proof.
unfold taint_vars; intros.
extensionality x; destruct (In_dec eq_nat_dec x (modifies K)); auto.
destruct (s x) as [[v l]|]; auto.
Qed.
Inductive judge : nat -> context -> assert -> cmd -> assert -> Prop :=
| Judge_skip : forall pc, judge 0 pc Emp Skip Emp
| Judge_output : forall e, judge 0 Lo (LblExp e Lo `AND` Emp) (Output e) (LblExp e Lo `AND` Emp)
| Judge_assign : forall x e e' pc L, expvars e' x = false ->
judge 0 pc (BoolExp (Eq e e') `AND` LblExp e L `AND` Emp) (Assign x e)
(BoolExp (Eq (Var x) e') `AND` LblEq x (Lub L pc) `AND` Emp)
| Judge_read : forall x e e1 e2 pc L1 L2, expvars e1 x = false -> expvars e2 x = false ->
judge 0 pc (BoolExp (Eq (Var x) e1) `AND` LblExp e L1 `AND` Mapsto e e2 L2) (Read x e)
(BoolExp (Eq (Var x) e2) `AND` LblEq x (Lub (Lub L1 L2) pc) `AND` Mapsto (ereplace e x e1) e2 L2)
| Judge_write : forall e1 e2 pc L1 L2,
judge 0 pc (LblExp e1 L1 `AND` LblExp e2 L2 `AND` Allocated e1) (Write e1 e2)
(Mapsto e1 e2 (Lub (Lub L1 L2) pc))
| Judge_seq : forall N1 N2 P Q R C1 C2 pc, judge N1 pc P C1 Q -> judge N2 pc Q C2 R -> judge (S (N1+N2)) pc P (Seq C1 C2) R
| Judge_if : forall N1 N2 P Q b C1 C2 pc (lt lf : glbl),
implies P (BoolExp b `OR` BoolExp (Not b)) ->
implies (BoolExp b `AND` P) (LblBexp b lt) -> implies (BoolExp (Not b) `AND` P) (LblBexp b lf) ->
(gleq (glub lt lf) pc = false -> no_lbls P (modifies [If b C1 C2]) = true) ->
judge N1 (glub lt pc) (BoolExp b `AND` taint_vars_assert P (modifies [If b C1 C2]) lt pc) C1 Q ->
judge N2 (glub lf pc) (BoolExp (Not b) `AND` taint_vars_assert P (modifies [If b C1 C2]) lf pc) C2 Q ->
judge (S (N1+N2)) pc P (If b C1 C2) Q
| Judge_while : forall N P b C pc (l : glbl),
implies P (LblBexp b l) -> (gleq l pc = false -> no_lbls P (modifies [While b C]) = true) ->
judge N (glub l pc) (BoolExp b `AND` taint_vars_assert P (modifies [While b C]) l pc) C
(taint_vars_assert P (modifies [While b C]) l pc) ->
judge (S N) pc P (While b C) (BoolExp (Not b) `AND` taint_vars_assert P (modifies [While b C]) l pc)
| Judge_conseq : forall N P P' Q Q' C pc, implies P' P -> implies Q Q' -> judge N pc P C Q -> judge (S N) pc P' C Q'
| Judge_conj : forall N1 N2 P1 P2 Q1 Q2 C pc, judge N1 pc P1 C Q1 -> judge N2 pc P2 C Q2 ->
judge (S (N1+N2)) pc (P1 `AND` P2) C (Q1 `AND` Q2)
| Judge_frame : forall N P Q R C pc, judge N pc P C Q -> (forall x, In x (modifies [C]) -> vars R x = false) ->
judge (S N) pc (P ** R) C (Q ** R).
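(* [judge N pc P C Q] derives the triple {P} C {Q} in security context
   [pc]; the index [N] bounds the derivation height and exists only to
   support strong induction in the soundness proof. For example, chaining
   two uses of [Judge_skip] through [Judge_seq] yields
   [judge 1 pc Emp (Seq Skip Skip) Emp]. *)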
Inductive sound : context -> assert -> cmd -> assert -> Prop :=
| Jden_hi : forall P C Q,
(forall st, aden P st -> hsafe (Cf st C [])) ->
(forall n st st', aden P st -> hstepn n (Cf st C []) (Cf st' Skip []) -> aden Q st') ->
sound Hi P C Q
| Jden_lo : forall P C Q,
(forall st, aden P st -> lsafe (Cf st C [])) ->
(forall n st st' o, aden P st -> lstepn n (Cf st C []) (Cf st' Skip []) o -> aden Q st') ->
(forall n st1 st2 st1' st2' C' K' o1 o2, aden2 P st1 st2 ->
lstepn n (Cf st1 C []) (Cf st1' C' K') o1 -> lstepn n (Cf st2 C []) (Cf st2' C' K') o2 ->
diverge (Cf st1 C []) \/ diverge (Cf st2 C []) \/ side_condition C' st1' st2') ->
(forall n1 n2 st1 st2 st1' st2' o1 o2, aden2 P st1 st2 -> side_condition C st1 st2 ->
lstepn n1 (Cf st1 C []) (Cf st1' Skip []) o1 -> lstepn n2 (Cf st2 C []) (Cf st2' Skip []) o2 ->
obs_eq st1' st2' /\ o1 = o2) ->
(forall n st1 st2 st1' C' K' o1, aden2 P st1 st2 ->
lstepn n (Cf st1 C []) (Cf st1' C' K') o1 ->
diverge (Cf st1 C []) \/ diverge (Cf st2 C []) \/
exists st2', exists o2, lstepn n (Cf st2 C []) (Cf st2' C' K') o2) ->
(forall n1 n2 i1 s1 h1 i1' s1' h1' i2 s2 h2 i2' s2' h2' o1 o2 a,
aden2 P (St i1 s1 h1) (St i2 s2 h2) ->
lstepn n1 (Cf (St i1 s1 h1) C []) (Cf (St i1' s1' h1') Skip []) o1 ->
lstepn n2 (Cf (St i2 s2 h2) C []) (Cf (St i2' s2' h2') Skip []) o2 ->
h1 a <> h1' a -> (exists v, h1' a = Some (v,Lo)) -> h2 a <> None) ->
sound Lo P C Q.
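(* Semantic soundness, indexed by context. In a [Hi] context only safety
   and partial correctness are required. In a [Lo] context, [Jden_lo]
   additionally relates any two executions started from [P]-related
   states: common intermediate configurations satisfy [side_condition]
   (unless a run diverges); under the initial side condition, terminating
   runs end in observably equal states with identical output; any
   configuration reached by one run is reachable in the same number of
   steps by the other (again up to divergence); and an address rewritten
   to a [Lo]-labelled value by one run must be allocated in the other
   run's initial heap. *)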
(* Soundness of the individual proof rules, one lemma per rule of [judge]. *)
Lemma soundness_skip : forall ct, sound ct Emp Skip Emp.
Proof.
destruct ct.
apply Jden_lo; intros.
unfold lsafe; intros.
inv H0.
inv H1.
inv H2.
inv H0; auto.
inv H1.
right; right; inv H0; simpl; auto.
inv H2.
inv H1.
inv H2.
inv H; intuit.
inv H1.
inv H3.
right; right; inv H0.
exists st2; exists []; apply LStep_zero.
inv H1.
inv H0.
inv H4.
apply Jden_hi; intros.
unfold hsafe; intros.
inv H0.
inv H1.
inv H2.
inv H0; auto.
inv H1.
Qed.
Lemma soundness_output : forall e, sound Lo (LblExp e Lo `AND` Emp) (Output e) (LblExp e Lo `AND` Emp).
Proof.
intros.
apply Jden_lo; intros.
unfold lsafe; intros.
inv H0.
destruct st as [i s h].
destruct H as [[v]].
apply (Can_lstep _ (Cf (St i s h) Skip []) [v]); apply LStep_output; auto.
inv H2.
inv H3.
inv H1.
inv H0.
inv H0.
inv H1.
inv H2; auto.
inv H0.
right; right; inv H0; simpl; auto.
inv H2.
inv H3; simpl; auto.
inv H0.
inv H1.
inv H3.
inv H4.
inv H2.
inv H1.
inv H3.
inv H.
destruct H2.
dup (obs_eq_exp e _ _ _ _ _ _ H2).
rewrite H9 in H3; rewrite H8 in H3; destruct H3.
apply H4 in H3; subst; split; auto.
inv H1.
inv H1.
right; right; inv H0.
exists st2; exists []; apply LStep_zero.
inv H1.
inv H2.
destruct H.
destruct H0.
destruct st2 as [i2 s2 h2].
destruct H0 as [[v2]].
exists (St i2 s2 h2); exists ([v2]++[]); apply LStep_succ with (cf' := Cf (St i2 s2 h2) Skip []).
apply LStep_output; auto.
apply LStep_zero.
inv H0.
inv H0.
inv H4.
inv H5.
inv H0.
Qed.
Lemma soundness_assign : forall e e' x L ct, expvars e' x = false ->
sound ct (BoolExp (Eq e e') `AND` LblExp e L `AND` Emp) (Assign x e)
(BoolExp (Eq (Var x) e') `AND` LblEq x (Lub L ct) `AND` Emp).
Proof.
intros; destruct ct.
apply Jden_lo; intros.
unfold lsafe; intros.
inv H1.
destruct st as [i s h]; destruct H0 as [[H0 [v]]].
apply (Can_lstep _ (Cf (St i (upd s x (v, lden L i)) h) Skip []) []).
apply LStep_assign; auto.
inv H3.
inv H4.
inv H2.
inv H1.
inv H1.
inv H2.
inv H3.
simpl in *.
decomp H0; subst.
destruct H4 as [v'].
rewrite H0 in H9; inv H9.
repeat (split; auto).
rewrite edenZ_upd; auto.
unfold upd.
destruct (eq_nat_dec x x); simpl.
destruct (edenZ e' i s).
assert (exists l, eden e i s = Some (v,l)).
exists (lden L i); auto.
rewrite <- edenZ_some in H1; rewrite H1 in H3; simpl in H3.
destruct (Z_eq_dec v z); auto.
destruct (edenZ e i s); inv H3.
exists v; unfold upd.
destruct (eq_nat_dec x x).
destruct (lden L i); auto.
inv H1.
right; right; inv H1; simpl; auto.
inv H3.
inv H4; simpl; auto.
inv H1.
inv H2.
inv H4.
inv H5.
inv H3.
inv H2.
inv H4.
split; auto.
inv H0.
destruct H3.
dup (obs_eq_exp e _ _ _ _ _ _ H3).
rewrite H11 in H4; rewrite H10 in H4.
destruct H4; subst.
dup H3; inv H3.
simpl in *; subst; destruct H7.
repeat (split; auto).
intro y; simpl.
unfold upd; destruct (eq_nat_dec y x); subst; intuit.
apply H4.
inv H2.
inv H2.
right; right; inv H1.
exists st2; exists []; apply LStep_zero.
inv H2.
inv H3.
destruct st2 as [i' s' h'].
inv H0.
destruct H2.
simpl in H0; decomp H0.
destruct H6 as [v']; exists (St i' (upd s' x (v',lden L i')) h'); exists ([]++[]).
apply LStep_succ with (cf' := Cf (St i' (upd s' x (v',lden L i')) h') Skip []).
apply LStep_assign; auto.
apply LStep_zero.
inv H1.
inv H1.
inv H5.
inv H6.
inv H1.
apply Jden_hi; intros.
unfold hsafe; intros.
inv H1.
destruct st as [i s h]; destruct H0 as [[H0 [v]]].
apply (Can_hstep _ (Cf (St i (upd s x (v,Hi)) h) Skip [])).
apply HStep_assign with (l := lden L i); auto.
inv H3.
inv H4.
inv H2.
inv H1.
inv H1.
inv H2.
inv H3.
simpl in *.
decomp H0; subst.
destruct H4 as [v'].
rewrite H0 in H8; inv H8.
repeat (split; auto).
rewrite edenZ_upd; auto.
unfold upd.
destruct (eq_nat_dec x x); simpl.
destruct (edenZ e' i s).
assert (exists l, eden e i s = Some (v,l)).
exists (lden L i); auto.
rewrite <- edenZ_some in H1; rewrite H1 in H3; simpl in H3.
destruct (Z_eq_dec v z); auto.
destruct (edenZ e i s); inv H3.
exists v; unfold upd.
destruct (eq_nat_dec x x).
destruct (lden L i); auto.
inv H1.
Qed.
Lemma soundness_read : forall ct e e1 e2 x L1 L2, expvars e1 x = false -> expvars e2 x = false ->
sound ct (BoolExp (Eq (Var x) e1) `AND` LblExp e L1 `AND` Mapsto e e2 L2)
(Read x e) (BoolExp (Eq (Var x) e2) `AND` LblEq x (Lub (Lub L1 L2) ct)
`AND` Mapsto (ereplace e x e1) e2 L2).
Proof.
destruct ct; intros.
apply Jden_lo; intros.
unfold lsafe; intros.
inv H2.
destruct st as [i s h].
destruct H1 as [[H1 [v]]].
destruct H4 as [v' [pf [H4 [v'' [H5]]]]].
apply (Can_lstep _ (Cf (St i (upd s x (v'', lden L1 i \_/ lden L2 i)) h) Skip []) []).
rewrite edenZ_some in H4; destruct H4 as [l'].
rewrite H4 in H2; inv H2.
apply LStep_read with (v1 := v) (pf := pf); auto.
destruct (eq_nat_dec (nat_of_Z v pf) (nat_of_Z v pf)); auto.
inv H4.
inv H5.
inv H3.
inv H2.
inv H2.
inv H3.
inv H4.
inv H1.
destruct H2 as [H2 [v]].
destruct H3 as [v' [pf' [H3 [v'' [H4]]]]].
rewrite edenZ_some in H3; destruct H3 as [l'].
rewrite H3 in H1; inv H1.
rewrite H3 in H10; inv H10.
rewrite (proof_irrelevance _ pf' pf) in H11.
destruct (eq_nat_dec (nat_of_Z v1 pf) (nat_of_Z v1 pf)); inv H11.
simpl; repeat split.
unfold upd at 1; destruct (eq_nat_dec x x); simpl.
rewrite edenZ_upd; auto; rewrite H4.
destruct (Z_eq_dec v2 v2); auto.
exists v2; unfold upd; destruct (eq_nat_dec x x).
destruct (lden L1 i \_/ lden L2 i); auto.
exists v1; exists pf; split.
rewrite edenZ_upd.
rewrite edenZ_ereplace.
rewrite edenZ_some; exists (lden L1 i); auto.
simpl in H2 |- *.
destruct (s x); destruct (edenZ e1 i s); simpl in H2 |- *; try solve [inv H2].
destruct (Z_eq_dec (fst v) z); inv H2; auto.
apply ereplace_deletes; auto.
exists v2; split.
rewrite edenZ_upd; auto.
rewrite (proof_irrelevance _ pf' pf); auto.
inv H2.
right; right; inv H2.
inv H3; simpl.
inv H1.
destruct H3.
destruct st1' as [i1 s1 h1]; destruct st2' as [i2 s2 h2]; simpl in *.
decomp H2; decomp H1.
destruct H5 as [v1 [pf1 [H5 [v1' [H10]]]]].
destruct H4 as [v2 [pf2 [H4 [v2' [H11]]]]].
apply edenZ_some in H5; destruct H5 as [l1].
apply edenZ_some in H4; destruct H4 as [l2].
destruct H7 as [v3]; destruct H9 as [v4].
rewrite H5 in H7; inv H7; rewrite H4 in H9; inv H9.
rewrite H5; rewrite H4.
rewrite (proof_irrelevance _ g pf1); rewrite (proof_irrelevance _ g0 pf2).
destruct (eq_nat_dec (nat_of_Z v3 pf1) (nat_of_Z v3 pf1)); intuit.
destruct (eq_nat_dec (nat_of_Z v4 pf2) (nat_of_Z v4 pf2)); intuit.
destruct H3.
simpl in H1; subst; auto.
inv H4.
inv H5; simpl; auto.
inv H2.
inv H3.
inv H5.
inv H6.
inv H4.
inv H3.
inv H5.
split; auto.
simpl in H2.
rewrite H12 in H2; rewrite H11 in H2.
rewrite (proof_irrelevance _ g pf) in H2; rewrite H13 in H2.
rewrite (proof_irrelevance _ g0 pf0) in H2; rewrite H14 in H2; subst.
destruct H1.
destruct H2.
dup H3; destruct H3.
destruct H5; repeat (split; auto).
intro y; simpl.
unfold upd; destruct (eq_nat_dec y x); subst.
dup (obs_eq_exp e _ _ _ _ _ _ H4).
rewrite H12 in H7; rewrite H11 in H7.
destruct H7; subst; intuition.
glub_simpl H7; subst.
specialize (H8 (refl_equal _)); subst.
rewrite (proof_irrelevance _ pf0 pf) in H14.
specialize (H6 (nat_of_Z v0 pf)); simpl in H6.
rewrite H13 in H6; rewrite H14 in H6; intuit.
apply H5.
inv H3.
inv H3.
right; right; inv H2.
exists st2; exists []; apply LStep_zero.
inv H3.
inv H4.
inv H1.
destruct H3.
destruct st2 as [i' s' h']; simpl in H1.
decomp H1.
destruct H5 as [v1' [pf1 [H5 [v1'' [H12]]]]].
exists (St i' (upd s' x (v1'', lden L1 i' \_/ lden L2 i')) h'); exists ([]++[]).
apply LStep_succ with (cf' := Cf (St i' (upd s' x (v1'', lden L1 i' \_/ lden L2 i')) h') Skip []).
apply LStep_read with (v1 := v1') (pf := pf1).
apply edenZ_some in H5.
destruct H7 as [v']; destruct H5 as [l'].
rewrite H5 in H4; inv H4; auto.
subst; destruct (eq_nat_dec (nat_of_Z v1' pf1) (nat_of_Z v1' pf1)); auto.
apply LStep_zero.
inv H2.
inv H2.
inv H6.
inv H7.
inv H2.
apply Jden_hi; intros.
unfold hsafe; intros.
inv H2.
destruct st as [i s h].
destruct H1 as [[H1 [v]]].
destruct H4 as [v' [pf [H4 [v'' [H5]]]]].
apply (Can_hstep _ (Cf (St i (upd s x (v'',Hi)) h) Skip [])).
rewrite edenZ_some in H4; destruct H4 as [l'].
rewrite H4 in H2; inv H2.
apply HStep_read with (v1 := v) (pf := pf) (l1 := lden L1 i) (l2 := lden L2 i); auto.
destruct (eq_nat_dec (nat_of_Z v pf) (nat_of_Z v pf)); auto.
inv H4.
inv H5.
inv H3.
inv H2.
inv H2.
inv H3.
inv H4.
inv H1.
destruct H2 as [H2 [v]].
destruct H3 as [v' [pf' [H3 [v'' [H4]]]]].
rewrite edenZ_some in H3; destruct H3 as [l'].
rewrite H3 in H1; inv H1.
rewrite H3 in H9; inv H9.
rewrite (proof_irrelevance _ pf' pf) in H10.
destruct (eq_nat_dec (nat_of_Z v1 pf) (nat_of_Z v1 pf)); inv H10.
simpl; repeat split.
unfold upd at 1; destruct (eq_nat_dec x x); simpl.
rewrite edenZ_upd; auto; rewrite H4.
destruct (Z_eq_dec v2 v2); auto.
exists v2; unfold upd; destruct (eq_nat_dec x x).
destruct (lden L1 i \_/ lden L2 i); auto.
exists v1; exists pf; split.
rewrite edenZ_upd.
rewrite edenZ_ereplace.
rewrite edenZ_some; exists (lden L1 i); auto.
simpl in H2 |- *.
destruct (s x); destruct (edenZ e1 i s); simpl in H2 |- *; try solve [inv H2].
destruct (Z_eq_dec (fst v) z); inv H2; auto.
apply ereplace_deletes; auto.
exists v2; split.
rewrite edenZ_upd; auto.
rewrite (proof_irrelevance _ pf' pf); auto.
inv H2.
Qed.
Lemma soundness_write : forall e1 e2 ct L1 L2,
sound ct (LblExp e1 L1 `AND` LblExp e2 L2 `AND` Allocated e1) (Write e1 e2)
(Mapsto e1 e2 (Lub (Lub L1 L2) ct)).
Proof.
destruct ct; intros.
apply Jden_lo; intros.
unfold lsafe; intros.
inv H0.
destruct st as [i s h].
simpl in H; decomp H.
destruct H2 as [v' [pf [H2 [v'' [l'']]]]].
destruct H3 as [v1]; destruct H4 as [v2].
apply (Can_lstep _ (Cf (St i s (upd h (nat_of_Z v' pf) (v2, lden L1 i \_/ lden L2 i))) Skip []) []).
rewrite edenZ_some in H2; destruct H2 as [l'].
rewrite H0 in H2; inv H2.
apply LStep_write; auto.
destruct (eq_nat_dec (nat_of_Z v' pf) (nat_of_Z v' pf)); auto; try discriminate.
inv H2.
inv H3.
inv H1.
inv H0.
inv H0.
inv H1.
inv H2.
simpl in H; decomp H.
destruct H2 as [v1']; destruct H3 as [v2'].
destruct H1 as [v' [pf' [H1 [v'' [l'']]]]].
rewrite edenZ_some in H1; destruct H1 as [l'].
rewrite H1 in H; inv H.
rewrite H0 in H9; inv H9.
rewrite H1 in H8; inv H8.
simpl.
exists v1; exists pf; split.
rewrite edenZ_some; exists (lden L1 i); auto.
exists v2; split.
rewrite edenZ_some; exists (lden L2 i); auto.
unfold upd; rewrite (proof_irrelevance _ pf' pf); extensionality n.
destruct (eq_nat_dec n (nat_of_Z v1 pf)); auto.
destruct (lden L1 i \_/ lden L2 i); auto.
inv H0.
right; right; inv H0.
inv H1; simpl; auto.
inv H2.
inv H3; simpl; auto.
inv H0.
inv H1.
inv H3.
inv H4.
inv H2.
inv H1.
inv H3.
split; auto.
destruct H.
destruct H1.
dup H2; destruct H2.
destruct H4; repeat (split; auto).
intro n; simpl.
dup (obs_eq_exp e1 _ _ _ _ _ _ H3).
dup (obs_eq_exp e2 _ _ _ _ _ _ H3).
rewrite H10 in H6; rewrite H9 in H6.
rewrite H13 in H7; rewrite H11 in H7.
destruct H6; destruct H7; subst.
simpl in H2; unfold upd; destruct (eq_nat_dec n (nat_of_Z v1 pf)); subst.
destruct l0.
specialize (H8 (refl_equal _)); subst.
rewrite (proof_irrelevance _ pf0 pf).
destruct (eq_nat_dec (nat_of_Z v0 pf) (nat_of_Z v0 pf)); auto.
destruct (eq_nat_dec (nat_of_Z v1 pf) (nat_of_Z v0 pf0)); intros.
inv H2.
destruct (h0 (nat_of_Z v1 pf)) as [[v l]|]; auto; intros.
inv H2.
specialize (H5 n); simpl in H5.
destruct (h n) as [[v l]|]; auto.
destruct l0.
specialize (H8 (refl_equal _)); subst.
destruct (eq_nat_dec n (nat_of_Z v0 pf0)); subst.
contradiction n0; rewrite (proof_irrelevance _ pf0 pf); auto.
destruct (h0 n) as [[v' l']|]; auto.
destruct (eq_nat_dec n (nat_of_Z v0 pf0)); subst; auto.
intros.
inv H6.
inv H1.
inv H1.
right; right; inv H0.
exists st2; exists []; apply LStep_zero.
inv H1.
inv H2.
inv H.
destruct H1.
destruct st2 as [i' s' h']; simpl in H.
decomp H.
destruct H3 as [v1' [pf1 [H3 [v1'' [l1'']]]]].
destruct H4 as [v3]; destruct H5 as [v4].
exists (St i' s' (upd h' (nat_of_Z v1' pf1) (v4, lden L1 i' \_/ lden L2 i'))); exists ([]++[]).
apply LStep_succ with (cf' := Cf (St i' s' (upd h' (nat_of_Z v1' pf1) (v4, lden L1 i' \_/ lden L2 i'))) Skip []).
apply LStep_write; auto.
rewrite edenZ_some in H3; destruct H3 as [l''].
rewrite H2 in H3; inv H3; auto.
subst.
destruct (eq_nat_dec (nat_of_Z v1' pf1) (nat_of_Z v1' pf1)); auto; try discriminate.
apply LStep_zero.
inv H0.
inv H0; inv H1.
inv H4; inv H0.
inv H5.
inv H6.
unfold upd in H2, H3.
destruct H3 as [v]; destruct (eq_nat_dec a (nat_of_Z v1 pf)).
inv H0.
glub_simpl H4; subst.
inv H.
destruct H1.
dup (obs_eq_exp e1 _ _ _ _ _ _ H1).
rewrite H13 in H3; rewrite H14 in H3; destruct H3.
specialize (H4 (refl_equal _)); subst.
rewrite (proof_irrelevance _ pf pf0); auto.
inv H0.
inv H0.
apply Jden_hi; intros.
unfold hsafe; intros.
inv H0.
destruct st as [i s h].
simpl in H; decomp H.
destruct H2 as [v' [pf [H2 [v'' [l'']]]]].
destruct H3 as [v1]; destruct H4 as [v2].
apply (Can_hstep _ (Cf (St i s (upd h (nat_of_Z v' pf) (v2, Hi))) Skip [])).
rewrite edenZ_some in H2; destruct H2 as [l'].
rewrite H0 in H2; inv H2.
apply HStep_write with (l1 := lden L1 i) (l2 := lden L2 i); auto.
destruct (eq_nat_dec (nat_of_Z v' pf) (nat_of_Z v' pf)); auto; try discriminate.
inv H2.
inv H3.
inv H1.
inv H0.
inv H0.
inv H1.
inv H2.
simpl in H; decomp H.
destruct H2 as [v1']; destruct H3 as [v2'].
destruct H1 as [v' [pf' [H1 [v'' [l'']]]]].
rewrite edenZ_some in H1; destruct H1 as [l'].
rewrite H1 in H; inv H.
rewrite H0 in H8; inv H8.
rewrite H1 in H7; inv H7.
simpl.
exists v1; exists pf; split.
rewrite edenZ_some; exists (lden L1 i); auto.
exists v2; split.
rewrite edenZ_some; exists (lden L2 i); auto.
unfold upd; rewrite (proof_irrelevance _ pf' pf); extensionality n.
destruct (eq_nat_dec n (nat_of_Z v1 pf)); auto.
destruct (lden L1 i \_/ lden L2 i); auto.
inv H0.
Qed.
(* [Seq] rule; the hypothesis quantified over [y] is the strong induction
   hypothesis on derivation height. *)
Lemma soundness_seq : forall N1 N2 P Q R C1 C2 ct,
(forall y : nat, y < S (N1 + N2) ->
forall (ct : context) (P : assert) (C : cmd) (Q : assert),
judge y ct P C Q -> sound ct P C Q) ->
judge N1 ct P C1 Q -> judge N2 ct Q C2 R -> sound ct P (Seq C1 C2) R.
Proof.
intros.
rename H1 into H2; rename H0 into H1; destruct ct.
apply Jden_lo; intros.
unfold lsafe; intros.
inv H3.
apply (Can_lstep _ (Cf st C1 [C2]) []); apply LStep_seq.
inv H5.
change (lstepn n0 (Cf st C1 ([]++[C2])) cf' o') in H6.
destruct cf' as [st' C' K']; apply lstep_trans_inv in H6.
destruct H6.
destruct H3 as [K [H3]]; subst.
apply H in H1; try omega; inv H1.
case_eq (halt_config (Cf st' C' K)); intros.
destruct C'; destruct K; inv H1.
apply (Can_lstep _ (Cf st' C2 []) []); apply LStep_skip.
specialize (H5 st H0 _ _ _ H3 H1).
inv H5.
destruct cf' as [st'' C'' K''].
apply lstep_extend with (K0 := [C2]) in H11.
apply (Can_lstep _ (Cf st'' C'' (K''++[C2])) o); auto.
destruct H3 as [st'' [n1 [n2 [o1 [o2]]]]]; decomp H3; subst.
apply H in H1; try omega; apply H in H2; try omega; inv H1; inv H2.
apply H6 in H5; auto.
apply H1 in H5.
inv H7.
apply (Can_lstep _ (Cf st' C2 []) []); apply LStep_skip.
inv H2.
apply H5 in H17; auto.
inv H3.
inv H4.
change (lstepn n0 (Cf st C1 ([]++[C2])) (Cf st' Skip []) o') in H5.
apply lstep_trans_inv in H5; destruct H5.
destruct H3 as [K [H3]].
apply sym_eq in H4; apply app_eq_nil in H4; destruct H4.
inv H5.
destruct H3 as [st'' [n1 [n2 [o1 [o2]]]]]; decomp H3; subst.
apply H in H1; try omega; apply H in H2; try omega; inv H1; inv H2.
inv H6.
inv H2.
apply H5 in H4; auto; apply H11 in H16; auto.
inv H3; simpl; auto.
inv H5.
inv H4.
inv H5.
change (lstepn n0 (Cf st1 C1 ([]++[C2])) (Cf st1' C' K') o') in H6.
change (lstepn n0 (Cf st2 C1 ([]++[C2])) (Cf st2' C' K') o'0) in H7.
apply lstep_trans_inv in H6; apply lstep_trans_inv in H7.
destruct H6.
destruct H7.
destruct H3 as [K1 [H3]]; destruct H4 as [K2 [H4]]; subst.
apply app_cancel_r in H6; subst.
apply H in H1; try omega; inv H1.
dup (H7 _ _ _ _ _ _ _ _ _ H0 H3 H4).
decomp H1; auto.
left; apply diverge_seq1; auto.
right; left; apply diverge_seq1; auto.
destruct H3 as [K1 [H3]].
destruct H4 as [st'' [n1 [n2 [o1 [o2]]]]]; decomp H4; subst.
apply H in H1; try omega; inv H1.
apply H10 with (st2 := st1) in H6.
decomp H6.
right; left; apply diverge_seq1; auto.
left; apply diverge_seq1; auto.
destruct H12 as [st2'' [o2']].
apply lstep_trans_inv' in H3.
destruct H3 as [cf'' [o1'' [o2'']]]; decomp H3.
destruct (lstepn_det _ _ _ _ _ _ H6 H1); subst.
inv H13; simpl; auto.
inv H3.
inv H0.
destruct H12; split; auto; split; auto.
apply obs_eq_sym; auto.
destruct H7.
destruct H4 as [K1 [H5]].
destruct H3 as [st'' [n1 [n2 [o1 [o2]]]]]; decomp H3; subst.
apply H in H1; try omega; inv H1.
apply H10 with (st2 := st2) in H6; auto.
decomp H6.
left; apply diverge_seq1; auto.
right; left; apply diverge_seq1; auto.
destruct H12 as [st2'' [o2']].
apply lstep_trans_inv' in H5.
destruct H5 as [cf'' [o1'' [o2'']]]; decomp H5.
destruct (lstepn_det _ _ _ _ _ _ H6 H1); subst.
inv H13; simpl; auto.
inv H5.
destruct H3 as [st1'' [n1 [n2 [o1 [o2]]]]]; decomp H3; subst.
destruct H4 as [st2'' [n3 [n4 [o3 [o4]]]]].
decomp H3; subst.
apply H in H1; try omega; inv H1.
apply H in H2; try omega; inv H2.
assert (n1 = n3).
dup H5; apply H12 with (st2 := st2) in H5; auto.
decomp H5.
apply (False_ind _ (diverge_halt _ _ _ _ H19 H2)).
apply (False_ind _ (diverge_halt _ _ _ _ H20 H4)).
destruct H20 as [st2''' [o2']].
apply (lstepn_det_term _ _ _ _ _ _ _ H5 H4).
assert (n2 = n4); subst; try omega.
destruct n4.
inv H8; simpl; auto.
inv H8.
inv H19.
inv H7.
inv H8.
dup H0; inv H0.
destruct H8; split; try split.
apply (H9 _ _ _ _ H7 H5).
apply (H9 _ _ _ _ H0 H4).
apply (H11 n3 n3 st1 st2 st1'' st2'' o1 o3); auto.
repeat (split; auto).
decomp (H10 0 st1 st2 st1 st2 C1 [] [] [] H2 (LStep_zero _) (LStep_zero _)); auto.
apply (False_ind _ (diverge_halt _ _ _ _ H21 H5)).
apply (False_ind _ (diverge_halt _ _ _ _ H22 H4)).
decomp (H15 n4 st1'' st2'' st1' st2' C' K' o'0 o' H2 H19 H20); auto.
left; apply diverge_seq2 with (st' := st1'') (n := n3) (o := o1); auto.
right; left; apply diverge_seq2 with (st' := st2'') (n := n3) (o := o3); auto.
inv H4.
inv H6.
inv H5.
inv H4.
change (lstepn n (Cf st1 C1 ([]++[C2])) (Cf st1' Skip []) o') in H7.
change (lstepn n0 (Cf st2 C1 ([]++[C2])) (Cf st2' Skip []) o'0) in H6.
apply lstep_trans_inv in H7; apply lstep_trans_inv in H6.
destruct H7.
destruct H4 as [K1 [H4]].
apply f_equal with (f := fun l => length l) in H5; simpl in H5.
destruct K1; inv H5.
destruct H6.
destruct H5 as [K2 [H5]].
apply f_equal with (f := fun l => length l) in H6; simpl in H6.
destruct K2; inv H6.
destruct H4 as [st1'' [n1 [n2 [o1 [o2]]]]]; decomp H4; subst.
destruct H5 as [st2'' [n3 [n4 [o3 [o4]]]]].
decomp H4; subst.
apply H in H1; try omega; inv H1.
apply H in H2; try omega; inv H2.
assert (n1 = n3).
dup H6; apply H12 with (st2 := st2) in H6; auto.
decomp H6.
apply (False_ind _ (diverge_halt _ _ _ _ H19 H2)).
apply (False_ind _ (diverge_halt _ _ _ _ H20 H5)).
destruct H20 as [st [o]].
apply (lstepn_det_term _ _ _ _ _ _ _ H6 H5).
subst.
inv H8.
inv H2.
inv H9.
inv H2.
assert (obs_eq st1' st2' /\ o' = o'0).
apply (H16 n n0 st1'' st2'' st1' st2' o' o'0); auto.
dup H0; inv H0.
destruct H20; split; try split.
apply H7 in H6; auto.
apply H7 in H5; auto.
apply (H11 n3 n3 st1 st2 st1'' st2'' o1 o3); auto.
repeat (split; auto).
decomp (H10 0 st1 st2 st1 st2 C1 [] [] [] H2 (LStep_zero _) (LStep_zero _)); auto.
apply (False_ind _ (diverge_halt _ _ _ _ H21 H6)).
apply (False_ind _ (diverge_halt _ _ _ _ H22 H5)).
dup H0; inv H0.
destruct H20; split; try split.
apply (H7 _ _ _ _ H9 H6).
apply (H7 _ _ _ _ H0 H5).
apply (H11 n3 n3 st1 st2 st1'' st2'' o1 o3); auto.
decomp (H10 0 st1 st2 st1 st2 C1 [] [] [] H2 (LStep_zero _) (LStep_zero _)); auto.
apply (False_ind _ (diverge_halt _ _ _ _ H21 H6)).
apply (False_ind _ (diverge_halt _ _ _ _ H22 H5)).
decomp (H15 0 st1'' st2'' st1'' st2'' C2 [] [] [] H2 (LStep_zero _) (LStep_zero _)); auto.
apply (False_ind _ (diverge_halt _ _ _ _ H9 H19)).
apply (False_ind _ (diverge_halt _ _ _ _ H20 H8)).
destruct H2; subst; split; auto.
assert (o1 = o3).
apply (H11 n3 n3 st1 st2 st1'' st2'' o1 o3); auto.
decomp (H10 0 st1 st2 st1 st2 C1 [] [] [] H0 (LStep_zero _) (LStep_zero _)); auto.
apply (False_ind _ (diverge_halt _ _ _ _ H9 H6)).
apply (False_ind _ (diverge_halt _ _ _ _ H20 H5)).
subst; auto.
inv H3.
right; right; exists st2; exists []; apply LStep_zero.
inv H4.
change (lstepn n0 (Cf st1 C1 ([]++[C2])) (Cf st1' C' K') o') in H5.
apply lstep_trans_inv in H5; destruct H5.
destruct H3 as [K'' [H3]]; subst.
apply H in H1; try omega; inv H1.
apply H8 with (st2 := st2) in H3; auto.
decomp H3.
left; apply diverge_seq1; auto.
right; left; apply diverge_seq1; auto.
right; right; destruct H10 as [st2' [o2]]; exists st2'; exists ([]++o2).
apply LStep_succ with (cf' := Cf st2 C1 [C2]); auto.
apply LStep_seq.
apply lstepn_extend with (K0 := [C2]) in H1; auto.
destruct H3 as [st1'' [n1 [n2 [o1 [o2]]]]]; decomp H3; subst.
apply H in H1; apply H in H2; try omega; inv H1; inv H2.
dup H4; apply H9 with (st2 := st2) in H4; auto.
decomp H4.
left; apply diverge_seq1; auto.
right; left; apply diverge_seq1; auto.
destruct H17 as [st2'' [o2']].
inv H6.
right; right; exists st2''; exists ([]++o2').
apply LStep_succ with (cf' := Cf st2 C1 [C2]).
apply LStep_seq.
assert (n1 + 0 = n1); try omega.
rewrite H6; apply lstepn_extend with (K0 := [C2]) in H4; auto.
inv H16.
apply H14 with (st2 := st2'') in H17.
decomp H17.
left; apply diverge_seq2 with (st' := st1'') (n := n1) (o := o1); auto.
right; left; apply diverge_seq2 with (st' := st2'') (n := n1) (o := o2'); auto.
right; right; destruct H16 as [st2' [o2'']].
exists st2'; exists ([]++o2'++[]++o2'').
apply LStep_succ with (cf' := Cf st2 C1 [C2]).
apply LStep_seq.
apply lstep_trans with (cf2 := Cf st2'' Skip [C2]).
apply lstepn_extend with (K0 := [C2]) in H4; auto.
apply LStep_succ with (cf' := Cf st2'' C2 []); auto.
apply LStep_skip.
dup H0; inv H0.
destruct H18; split; try split.
apply H5 in H2; auto.
apply H5 in H4; auto.
apply (H8 n1 n1 st1 st2 st1'' st2'' o1 o2'); auto.
decomp (H7 0 st1 st2 st1 st2 C1 [] [] [] H6 (LStep_zero _) (LStep_zero _)); auto.
apply (False_ind _ (diverge_halt _ _ _ _ H19 H2)).
apply (False_ind _ (diverge_halt _ _ _ _ H20 H4)).
apply H in H1; try omega; inv H1.
apply H in H2; try omega; inv H2.
inv H3; inv H4.
inv H2; inv H3.
change (lstepn n (Cf (St i1 s1 h1) C1 ([]++[C2])) (Cf (St i1' s1' h1') Skip []) o') in H18.
change (lstepn n0 (Cf (St i2 s2 h2) C1 ([]++[C2])) (Cf (St i2' s2' h2') Skip []) o'0) in H19.
apply lstep_trans_inv in H18; apply lstep_trans_inv in H19.
destruct H18.
destruct H2 as [K [H2]].
apply f_equal with (f := fun l => length l) in H3; simpl in H3.
destruct K; inv H3.
destruct H19.
destruct H3 as [K [H3]].
apply f_equal with (f := fun l => length l) in H4; simpl in H4.
destruct K; inv H4.
destruct H2 as [[i1'' s1'' h1''] [n1 [n2 [o1 [o2]]]]]; decomp H2.
destruct H3 as [[i2'' s2'' h2''] [n1' [n2' [o1' [o2']]]]].
decomp H2; subst.
inv H19; inv H22.
inv H2; inv H19.
destruct (opt_eq_dec val_eq_dec (h1 a) (h1'' a)).
destruct (opt_eq_dec val_eq_dec (h1'' a) (h1' a)).
apply (H17 _ n0 _ _ _ _ _ _ i2'' s2'' h2'' i2' s2' h2' _ o'0 a) in H18; auto.
intro; apply lstepn_nonincreasing with (a := a) in H3; auto.
split; try split.
destruct H0; apply H8 in H4; intuit.
destruct H0; apply H8 in H3; intuit.
decomp (H9 _ _ _ _ _ _ _ _ _ H0 (LStep_zero _) (LStep_zero _)).
apply (False_ind _ (diverge_halt _ _ _ _ H2 H4)).
apply (False_ind _ (diverge_halt _ _ _ _ H19 H3)).
destruct (H10 _ _ _ _ _ _ _ _ H0 H19 H4 H3); auto.
destruct (opt_eq_dec val_eq_dec (h1'' a) (h1' a)).
apply (H12 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ H0 H4 H3 n2).
rewrite e; auto.
apply (H17 _ n0 _ _ _ _ _ _ i2'' s2'' h2'' i2' s2' h2' _ o'0 a) in H18; auto.
intro; apply lstepn_nonincreasing with (a := a) in H3; auto.
split; try split.
destruct H0; apply H8 in H4; intuit.
destruct H0; apply H8 in H3; intuit.
decomp (H9 _ _ _ _ _ _ _ _ _ H0 (LStep_zero _) (LStep_zero _)).
apply (False_ind _ (diverge_halt _ _ _ _ H2 H4)).
apply (False_ind _ (diverge_halt _ _ _ _ H19 H3)).
destruct (H10 _ _ _ _ _ _ _ _ H0 H19 H4 H3); auto.
apply Jden_hi; intros.
unfold hsafe; intros.
inv H3.
apply (Can_hstep _ (Cf st C1 [C2])); apply HStep_seq.
inv H5.
change (hstepn n0 (Cf st C1 ([]++[C2])) cf') in H6.
destruct cf' as [st' C' K']; apply hstep_trans_inv in H6.
destruct H6.
destruct H3 as [K [H3]]; subst.
apply H in H1; try omega; inv H1.
case_eq (halt_config (Cf st' C' K)); intros.
destruct C'; destruct K; inv H1.
apply (Can_hstep _ (Cf st' C2 [])); apply HStep_skip.
specialize (H5 st H0 _ _ H3 H1).
inv H5.
destruct cf' as [st'' C'' K''].
apply hstep_extend with (K0 := [C2]) in H7.
apply (Can_hstep _ (Cf st'' C'' (K''++[C2]))); auto.
destruct H3 as [st'' [n1 [n2]]]; decomp H3; subst.
apply H in H1; try omega; apply H in H2; try omega; inv H1; inv H2.
apply H6 in H5; auto.
apply H1 in H5.
inv H7.
apply (Can_hstep _ (Cf st' C2 [])); apply HStep_skip.
inv H2.
apply H5 in H9; auto.
inv H3.
inv H4.
change (hstepn n0 (Cf st C1 ([]++[C2])) (Cf st' Skip [])) in H5.
apply hstep_trans_inv in H5; destruct H5.
destruct H3 as [K [H3]].
apply sym_eq in H4; apply app_eq_nil in H4; destruct H4.
inv H5.
destruct H3 as [st'' [n1 [n2]]]; decomp H3; subst.
apply H in H1; try omega; apply H in H2; try omega; inv H1; inv H2.
inv H6.
inv H2.
apply H5 in H4; auto; apply H7 in H8; auto.
Qed.
Lemma soundness_if : forall N1 N2 P Q b C1 C2 ct (lt lf : glbl),
(forall y : nat, y < S (N1 + N2) ->
forall (ct : context) (P : assert) (C : cmd) (Q : assert),
judge y ct P C Q -> sound ct P C Q) ->
implies P (BoolExp b `OR` BoolExp (Not b)) ->
implies (BoolExp b `AND` P) (LblBexp b lt) -> implies (BoolExp (Not b) `AND` P) (LblBexp b lf) ->
(gleq (glub lt lf) ct = false -> no_lbls P (modifies [If b C1 C2]) = true) ->
judge N1 (glub lt ct) (BoolExp b `AND` taint_vars_assert P (modifies [If b C1 C2]) lt ct) C1 Q ->
judge N2 (glub lf ct) (BoolExp (Not b) `AND` taint_vars_assert P (modifies [If b C1 C2]) lf ct) C2 Q ->
sound ct P (If b C1 C2) Q.
Proof.
intros.
rename H5 into H6; rename H4 into H5; rename H3 into H4;
rename H2 into H3; rename H1 into H2; rename H0 into H1; destruct ct.
apply Jden_lo; intros.
unfold lsafe; intros.
inv H7.
dup H0; apply H1 in H0.
destruct st as [i s h]; simpl in H0; destruct H0.
assert (aden (LblBexp b lt) (St i s h)).
apply H2; simpl; split; auto.
destruct H9 as [v].
rewrite bdenZ_some in H0; destruct H0 as [l].
rewrite H9 in H0; inv H0.
destruct l.
apply (Can_lstep _ (Cf (St i s h) C1 []) []).
apply LStep_if_true; auto.
destruct (dvg_ex_mid (taint_vars_cf (Cf (St i s h) (If b C1 C2) []))).
apply (Can_lstep _ (Cf (St i s h) (If b C1 C2) []) []).
apply LStep_if_hi_dvg with (v := true); auto.
unfold hsafe; intros.
apply H in H5; try omega; inv H5.
inv H10.
apply (Can_hstep _ (Cf (St i (taint_vars [If b C1 C2] s) h) C1 [])).
apply HStep_if_true with (l := Hi).
apply bden_taint_vars with (K := [If b C1 C2]) in H9; destruct H9 as [l [H9]].
destruct l; inv H5; auto.
inv H5.
apply H12 in H14; intuit.
simpl; split; try split.
rewrite bdenZ_some; exists l; auto.
apply no_lbls_taint_vars; auto.
simpl.
unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies [If b C1 C2])); try contradiction.
destruct (s x) as [[v1 l1]|].
exists v1; exists Hi; split; auto.
exists 0%Z; exists Hi; split; auto.
apply bden_taint_vars with (K := [If b C1 C2]) in H9.
destruct H9 as [l' [H9]].
rewrite H9 in H22; inv H22.
destruct H0 as [n [st]].
apply (Can_lstep _ (Cf st Skip []) []).
apply LStep_if_hi with (v := true) (n := n); auto.
unfold hsafe; intros.
apply H in H5; try omega; inv H5.
inv H10.
apply (Can_hstep _ (Cf (St i (taint_vars [If b C1 C2] s) h) C1 [])).
apply HStep_if_true with (l := Hi).
apply bden_taint_vars with (K := [If b C1 C2]) in H9; destruct H9 as [l [H9]].
destruct l; inv H5; auto.
inv H5.
apply H12 in H14; intuit.
simpl; split; try split.
rewrite bdenZ_some; exists l; auto.
apply no_lbls_taint_vars; auto.
simpl.
unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies [If b C1 C2])); try contradiction.
destruct (s x) as [[v1 l1]|].
exists v1; exists Hi; split; auto.
exists 0%Z; exists Hi; split; auto.
apply bden_taint_vars with (K := [If b C1 C2]) in H9.
destruct H9 as [l' [H9]].
rewrite H9 in H22; inv H22.
case_eq (bdenZ b i s); intros.
rewrite H9 in H0; inv H0.
destruct b0; inv H11.
apply bdenZ_some in H9; destruct H9 as [l]; destruct l.
apply (Can_lstep _ (Cf (St i s h) C2 []) []).
apply LStep_if_false; auto.
destruct (dvg_ex_mid (taint_vars_cf (Cf (St i s h) (If b C1 C2) []))).
apply (Can_lstep _ (Cf (St i s h) (If b C1 C2) []) []).
apply LStep_if_hi_dvg with (v := false); auto.
unfold hsafe; intros.
apply H in H6; try omega; inv H6.
inv H10.
apply (Can_hstep _ (Cf (St i (taint_vars [If b C1 C2] s) h) C2 [])).
apply HStep_if_false with (l := Hi).
apply bden_taint_vars with (K := [If b C1 C2]) in H0; destruct H0 as [l [H0]].
destruct l; inv H6; auto.
inv H6.
apply bden_taint_vars with (K := [If b C1 C2]) in H0.
destruct H0 as [l' [H0]].
rewrite H0 in H23; inv H23.
apply H13 in H15; intuit.
destruct lf; inv H12; simpl; split; try split.
assert (exists l, bden b i (taint_vars [If b C1 C2] s) = Some (false,l)).
exists l; auto.
rewrite <- bdenZ_some in H6; rewrite H6; auto.
apply no_lbls_taint_vars; auto.
apply H4.
destruct lt; auto.
simpl.
unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies [If b C1 C2])); try contradiction.
destruct (s x) as [[v1 l1]|].
exists v1; exists Hi; split; auto.
exists 0%Z; exists Hi; split; auto.
destruct lf; inv H12.
specialize (H3 (St i s h)); simpl in H3.
assert (exists v, bden b i s = Some (v,Lo)).
apply H3; split; auto.
case_eq (bdenZ b i s); intros.
destruct b0; simpl; auto.
apply bdenZ_some in H6; destruct H6 as [l].
rewrite H6 in H0; inv H0.
apply bdenZ_none in H6; rewrite H6 in H0; inv H0.
destruct H6 as [v]; rewrite H6 in H0; inv H0.
destruct H9 as [n [st]].
apply (Can_lstep _ (Cf st Skip []) []).
apply LStep_if_hi with (v := false) (n := n); auto.
unfold hsafe; intros.
apply H in H6; try omega; inv H6.
inv H10.
apply (Can_hstep _ (Cf (St i (taint_vars [If b C1 C2] s) h) C2 [])).
apply HStep_if_false with (l := Hi).
apply bden_taint_vars with (K := [If b C1 C2]) in H0; destruct H0 as [l [H0]].
destruct l; inv H6; auto.
inv H6.
apply bden_taint_vars with (K := [If b C1 C2]) in H0; destruct H0 as [l' [H0]].
rewrite H23 in H0; inv H0.
apply H13 in H15; intuit.
destruct lf; inv H12.
simpl; split; try split.
assert (exists l, bden b i (taint_vars [If b C1 C2] s) = Some (false,l)).
exists l; auto.
rewrite <- bdenZ_some in H6; rewrite H6; auto.
apply no_lbls_taint_vars; auto.
apply H4; destruct lt; auto.
simpl.
unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies [If b C1 C2])); try contradiction.
destruct (s x) as [[v1 l1]|].
exists v1; exists Hi; split; auto.
exists 0%Z; exists Hi; split; auto.
destruct lf; inv H12.
specialize (H3 (St i s h)); simpl in H3.
assert (exists v, bden b i s = Some (v,Lo)).
apply H3; split; auto.
case_eq (bdenZ b i s); intros.
destruct b0; simpl; auto.
apply bdenZ_some in H6; destruct H6 as [l].
rewrite H6 in H0; inv H0.
apply bdenZ_none in H6; rewrite H6 in H0; inv H0.
destruct H6 as [v]; rewrite H6 in H0; inv H0.
rewrite H9 in H0; inv H0.
inv H9.
apply H in H5; try omega; inv H5.
destruct lt; inv H7.
specialize (H2 (St i s h)); simpl in H2.
assert (exists v, bden b i s = Some (v,Hi)).
apply H2; split; auto.
rewrite bdenZ_some; exists Lo; auto.
destruct H5 as [v]; rewrite H5 in H17; inv H17.
apply H9 in H10; intuit.
destruct lt; inv H7.
unfold taint_vars_assert; simpl; split; auto.
rewrite bdenZ_some; exists Lo; auto.
apply H in H6; try omega; inv H6.
destruct lf; inv H7.
specialize (H3 (St i s h)); simpl in H3.
assert (exists v, bden b i s = Some (v,Hi)).
apply H3; split; auto.
case_eq (bdenZ b i s); intros.
destruct b0; auto.
apply bdenZ_some in H6; destruct H6 as [l].
rewrite H6 in H17; inv H17.
apply bdenZ_none in H6; rewrite H6 in H17; inv H17.
destruct H6 as [v]; rewrite H6 in H17; inv H17.
apply H9 in H10; intuit.
destruct lf; inv H7.
unfold taint_vars_assert; simpl; split; auto.
case_eq (bdenZ b i s); intros.
destruct b0; auto.
apply bdenZ_some in H6; destruct H6 as [l].
rewrite H6 in H17; inv H17.
apply bdenZ_none in H6; rewrite H6 in H17; inv H17.
inv H10.
inv H8.
inv H7.
generalize cf' o' H8 H10; clear cf' o' H8 H10.
induction n0; intros.
inv H10.
apply (Can_lstep _ (Cf (St i s h) (If b C1 C2) []) []).
apply LStep_if_hi_dvg with (v := v); auto.
inv H10.
inv H9.
rewrite H22 in H17; inv H17.
rewrite H22 in H17; inv H17.
inv H11.
inv H8.
inv H7.
apply IHn0 with (o' := o'0); auto.
inv H7.
inv H8.
apply H in H5; try omega; inv H5.
destruct lt; inv H7.
specialize (H2 (St i s h)); simpl in H2.
assert (exists v, bden b i s = Some (v,Hi)).
apply H2; split; auto.
rewrite bdenZ_some; exists Lo; auto.
destruct H5 as [v]; rewrite H5 in H16; inv H16.
apply H10 in H9; auto.
destruct lt; inv H7.
simpl; split; auto.
rewrite bdenZ_some; exists Lo; auto.
apply H in H6; try omega; inv H6.
destruct lf; inv H7.
specialize (H3 (St i s h)); simpl in H3.
assert (exists v, bden b i s = Some (v,Hi)).
apply H3; split; auto.
case_eq (bdenZ b i s); intros.
destruct b0; auto.
rewrite bdenZ_some in H6; destruct H6 as [l].
rewrite H6 in H16; inv H16.
rewrite bdenZ_none in H6; rewrite H6 in H16; inv H16.
destruct H6 as [v]; rewrite H6 in H16; inv H16.
apply H10 in H9; auto.
destruct lf; inv H7.
simpl; split; auto.
assert (exists l, bden b i s = Some (false,l)).
exists Lo; auto.
rewrite <- bdenZ_some in H6; rewrite H6; auto.
inv H9.
inv H18.
inv H7.
apply H in H5; try omega; inv H5.
apply H10 in H8; auto.
destruct lt; inv H7.
simpl; split; try split.
rewrite bdenZ_some; exists l; auto.
apply no_lbls_taint_vars; auto.
unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies [If b C1 C2])); try contradiction.
destruct (s x) as [[v1 l1]|].
exists v1; exists Hi; split; auto.
exists 0%Z; exists Hi; split; auto.
destruct lt; inv H7.
dup H16; apply bden_taint_vars with (K := [If b C1 C2]) in H16.
destruct H16 as [l' [H16]].
rewrite H16 in H19; inv H19.
destruct l; inv H7.
specialize (H2 (St i s h)); simpl in H2.
assert (exists v, bden b i s = Some (v,Lo)).
apply H2; split; auto.
rewrite bdenZ_some; exists Hi; auto.
destruct H7 as [v].
rewrite H7 in H5; inv H5.
apply H in H6; try omega; inv H6.
apply H10 in H8; auto.
destruct lf; inv H7.
simpl; split; try split.
assert (exists l, bden b i (taint_vars [If b C1 C2] s) = Some (false,l)).
exists l; auto.
rewrite <- bdenZ_some in H6; rewrite H6; auto.
apply no_lbls_taint_vars; auto.
apply H4; destruct lt; auto.
unfold taint_vars.
destruct (In_dec eq_nat_dec x (modifies [If b C1 C2])); try contradiction.
destruct (s x) as [[v1 l1]|].
exists v1; exists Hi; split; auto.
exists 0%Z; exists Hi; split; auto.
destruct lf; inv H7.
dup H16; apply bden_taint_vars with (K := [If b C1 C2]) in H16.
destruct H16 as [l' [H16]].
rewrite H16 in H19; inv H19.
destruct l; inv H7.
specialize (H3 (St i s h)); simpl in H3.
assert (exists v, bden b i s = Some (v,Lo)).
apply H3; split; auto.
assert (exists l, bden b i s = Some (false,l)).
exists Hi; auto.
rewrite <- bdenZ_some in H7; rewrite H7; auto.
destruct H7 as [v].
rewrite H7 in H6; inv H6.
inv H7.
generalize st' o' H9 H18; clear st' o' H9 H18.
induction n0; intros.
inv H9.
inv H9.
inv H8.
rewrite H21 in H16; inv H16.
rewrite H21 in H16; inv H16.
https://mathematica.stackexchange.com/questions/234515/group-a-list-based-on-multiples-of-sublists
# Group a list based on multiples of sublists
I have a list of lists like this one, where the sublists vary in length.
list = {{0, 1, 2}, {0, -2, 4}, {0, 3, 6}, {1, 2, 3}, {0, 0}, {2, 4, 6}, {-2, -4, -6}, {1, 2, -3}, {1, -2, 3}, {-6, -8, -10}, {0.3, 0.4, 0.5}, {0, 0}};
I want to rearrange the list so that each group has all the multiples of the sublists grouped together. The above example should give this reorganized list:
{{0, 1, 2}, {0, 3, 6}}
{{0, -2, 4}}
{{1, 2, 3}, {2, 4, 6}, {-2, -4, -6}}
{{0, 0}, {0, 0}}
{{1, 2, -3}}
{{1, -2, 3}}
{{-6, -8, -10}, {0.3, 0.4, 0.5}}
I have read this post, but it’s mainly suitable for positive integer multiples and equal-length sublists. This method fails with my example.
GatherBy[list, #/Max[1, GCD @@ #] &]
I prefer the solution based on GatherBy instead of Gather, because GatherBy is much more efficient.
Updated
list = {{0, 1, 2}, {0, -2, 4}, {0, 3, 6}, {1, 2, 3}, {0, 0}, {2, 4,
6}, {-2, -4, -6}, {1, 2, -3}, {1, -2, 3}, {-6, -8, -10}, {0.3,
0.4, 0.5}, {0, 0}};
f[w_] := Sort@{Normalize[Rationalize@w], -Normalize[Rationalize@w]}
GatherBy[list, f]
{{{0, 1, 2}, {0, 3, 6}}, {{0, -2, 4}}, {{1, 2, 3}, {2, 4, 6}, {-2, -4, -6}}, {{0, 0}, {0, 0}}, {{1, 2, -3}}, {{1, -2, 3}}, {{-6, -8, -10}, {0.3, 0.4, 0.5}}}
The trick is: for a vector $$(x,y,z)$$, we map it to the two vectors $$\{ (x,y,z),-(x,y,z) \}$$, and Sort the two vectors in this set.
So the two vectors $$(a,b,c)$$ and $$(-a,-b,-c)$$ are mapped to the same sorted set $$\{(a,b,c),(-a,-b,-c)\}$$,
so they are regarded as the same object!
For example,
u = {1, -2, 3};
v = {-1, 2, -3};
Sort[{u, -u}] === Sort[{v, -v}]
(* True *)
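The same sign-invariant key can be sketched outside Mathematica. A rough Python analogue (assumption: floating-point normalization rounded to a fixed number of digits stands in for Rationalize and Normalize):

```python
from collections import defaultdict
from math import sqrt

def canonical(vec, digits=9):
    """One key shared by every (positive or negative) scalar multiple of vec."""
    n = sqrt(sum(x * x for x in vec))
    if n == 0:                        # zero vectors: group by length only
        return (len(vec), "zero")
    unit = tuple(round(x / n, digits) for x in vec)
    neg = tuple(-u for u in unit)
    return min(unit, neg)             # v and -v get the same key

def gather_by_multiple(vectors):
    """GatherBy-style grouping via a dict keyed on canonical()."""
    groups = defaultdict(list)
    for v in vectors:
        groups[canonical(v)].append(v)
    return list(groups.values())
```

As with f above, both v and -v produce the same key, so an ordinary dict does the work of GatherBy.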
Original
Not so effective. Here we define a normalizing function.
Function[w, (Sign@First@w)*Normalize[Rationalize@w, Sqrt[#.#] &]]
for example, if w = {x,y,z}, we first Rationalize w, and then calculate $$\frac{x}{\sqrt{x^2+y^2+z^2}},\frac{y}{\sqrt{x^2+y^2+z^2}},\frac{z}{\sqrt{x^2+y^2+z^2}}$$
after this, we make $$\frac{x}{\sqrt{x^2+y^2+z^2}}$$ positive by multiplying the vector by the sign of x.
list = {{1, 2, 3}, {0, 0}, {2, 4, 6}, {-2, -4, -6}, {1,
2, -3}, {1, -2, 3}, {-6, -8, -10}, {0.3, 0.4, 0.5}, {0, 0}};
GatherBy[list,
Function[w, (Sign@First@w)*Normalize[Rationalize@w, Sqrt[#.#] &]]]
{{{1, 2, 3}, {2, 4, 6}, {-2, -4, -6}}, {{0, 0}, {0, 0}}, {{1, 2, -3}}, {{1, -2, 3}}, {{-6, -8, -10}, {0.3, 0.4, 0.5}}}
Other idea
Maybe MatrixRank is another way.
• Thank you. See my updated,{{0, 1, 2}, {0, -2, 4}} should not be grouped together. Nov 12, 2020 at 9:05
• list = {{0, 1, 2}, {0, -2, 4}, {0, 3, 6}, {1, 2, 3}, {0, 0}, {2, 4, 6}, {-2, -4, -6}, {1, 2, -3}, {1, -2, 3}, {-6, -8, -10}, {0.3, 0.4, 0.5}, {0, 0}}; Region[#, BaseStyle -> Blue] & /@ (AffineSpace @@ {{#, -#}} & /@ list) Nov 12, 2020 at 12:32
• GatherBy[list, Round[Sort@{-#, #} &@{Normalize[#, Norm[#, 1] &]}, .0001] &] modified by chy. Nov 14, 2020 at 6:52
ClearAll[proj]
proj = KroneckerProduct[#, #] &[Normalize @ Rationalize @ #] &;
GatherBy[list, proj] // Column
ClearAll[fit]
fit = Fit[{Table[0, Length@#], #}, Array[x, Length[#]-1], Array[x, Length[#]-1]]&;
GatherBy[list, fit] // Column
• There seems to be something wrong, consider this example, GatherBy[{{-11, -11, 22}, {2, -1, -1}, {-6, 10, -4}}, Projection[Table[1, Length@#], #] &] Nov 13, 2020 at 2:43
• @expression, updated with something that (I hope) works in general.
– kglr
Nov 13, 2020 at 2:49
• Thank you.This should be fine. Nov 13, 2020 at 3:00
If we do not insist on using GatherBy, then there are other ways to get the same result using Gather.
Gather is easy to handle, since it accepts a pairwise comparison test.
Method I
list = {{0, 1, 2}, {0, -2, 4}, {0, 3, 6}, {1, 2, 3}, {0, 0}, {2, 4,
6}, {-2, -4, -6}, {1, 2, -3}, {1, -2, 3}, {-6, -8, -10}, {0.3,
0.4, 0.5}, {0, 0}, {Sqrt[3], Sqrt[3]}, {1, 1}, {0, 0, 0}, {1, 1,
1}, {2., 2., 2.}, {π Sqrt[3], 2 π Sqrt[3],
3 π Sqrt[3]}};
Gather[list,
RegionEqual[AffineSpace[{#1, -#1}], AffineSpace[{#2, -#2}]] &]
Method II
list = {{0, 1, 2}, {0, -2, 4}, {0, 3, 6}, {1, 2, 3}, {0, 0}, {2, 4,
6}, {-2, -4, -6}, {1, 2, -3}, {1, -2, 3}, {-6, -8, -10}, {0.3,
0.4, 0.5}, {0, 0}, {Sqrt[3], Sqrt[3]}, {1, 1}, {0, 0, 0}, {1, 1,
1}, {2., 2., 2.}, {π Sqrt[3], 2 π Sqrt[3],
3 π Sqrt[3]}};
Gather[list,
Length[#1] == Length[#2] && Norm[#1] == Norm[#2] == 0 ||
Length[#1] == Length[#2] && Norm[#1]*Norm[#2] != 0 &&
MatrixRank[{#1, #2}] == 1 &]
{{{0, 1, 2}, {0, 3, 6}}, {{0, -2, 4}}, {{1, 2, 3}, {2, 4, 6}, {-2, -4, -6}, {Sqrt[3] π, 2 Sqrt[3] π, 3 Sqrt[3] π}}, {{0, 0}, {0, 0}}, {{1, 2, -3}}, {{1, -2, 3}}, {{-6, -8, -10}, {0.3, 0.4, 0.5}}, {{Sqrt[3], Sqrt[3]}, {1, 1}}, {{0, 0, 0}}, {{1, 1, 1}, {2., 2., 2.}}}
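Method II's pairwise test (equal length, both zero, or rank 1) translates directly into a Gather-style grouping; a minimal Python sketch, using vanishing 2x2 minors in place of MatrixRank (the tolerance is an assumption):

```python
def parallel(u, v, tol=1e-9):
    """True when u and v have equal length and are scalar multiples of
    each other; zero vectors match only other zero vectors."""
    if len(u) != len(v):
        return False
    zu = all(abs(x) <= tol for x in u)
    zv = all(abs(x) <= tol for x in v)
    if zu or zv:
        return zu and zv
    # rank({u, v}) == 1  iff  every 2x2 minor vanishes
    return all(abs(u[i] * v[j] - u[j] * v[i]) <= tol
               for i in range(len(u)) for j in range(i + 1, len(u)))

def gather(vectors):
    """Gather-like grouping by the pairwise predicate above."""
    groups = []
    for v in vectors:
        for g in groups:
            if parallel(g[0], v):
                g.append(v)
                break
        else:
            groups.append([v])
    return groups
```

Like Gather, this is quadratic in the number of groups, which is why the key-based GatherBy approach scales better.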
https://teamtreehouse.com/community/i-was-playing-in-the-repl-with-this-function-and-cant-figure-out-why-word2-returns-an-empty-string-after-join-method
# I was playing in the REPL with this function and can't figure out why word2 returns an empty string after join method?
Basically as I stated: in the REPL, word1 joins into a str just like I was hoping, but word2 returns "". If it's the same logic, what is causing this?
sillycase.py
```def sillycase(string):
string = list(string)
half = int(len(string)) // 2
word1 = string[:half]
word1 = "".join(word1[:half]).lower()
word2 = string[half:]
word2 = "".join(word2[half:]).upper()
return word1 + word2
```
You don't need the join, but what's happening here would be the same with or without it.
When "word2" is first assigned, it gets "half" of the iterable: "`word2 = string[half:]`"
Then, on the next line, the "join" starts with taking another slice: "`word2[half:]`"
But if the first slice made it only "half" long, then another slice that has a start value of "half" would not have any content.
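A minimal Python demonstration of the double slice, using a made-up 8-character input:

```python
string = list("goodcode")    # 8 characters
half = len(string) // 2      # 4

word2 = string[half:]        # ['c', 'o', 'd', 'e'] -- only 4 items long now
tail = word2[half:]          # slicing again from index 4 finds nothing left
print(tail)                  # []
print("".join(tail).upper()) # prints an empty line: the "" from the question
```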
HA that makes total sense now! I updated the code to:
word1 = "".join(word1).lower()
and
word2 = "".join(word2).upper()
Now it's perfect. Thank you for helping me understand why it had no content.
I don't know why you should use "join". You can do this:
```def sillycase(string):
half = int(len(string)) // 2
word1 = string[:half].lower()
word2 = string[half:].upper()
return word1 + word2
```
I was originally using concepts taught in the video.
Basically I was using join because I turned string into a list and had to turn it back.
Your code is how I passed the challenge; I was just trying to understand why word2 returned an empty string with the same logic.
http://jwmason.org/slackwire/the-interest-rate-and-interest-rate/
# The Interest Rate and the Interest Rate
We will return to secular stagnation. But we need to clear some ground first. What is an interest rate?
Imagine you are in a position to acquire a claim on a series of payments in the future. Since an asset is just anything that promises a stream of payments in the future, we will say you are thinking of buying an asset. What will you look at to make your decision?
First is the size of the payments you will receive, as a fraction of what you pay today. We will call that the yield of the asset, or y. Against that we have to set the risk that the payments may be different from expected or not occur at all; we will call the amount you reduce your expected yield to account for this risk r. If you have to make regular payments beyond the purchase of the asset to receive income from it (perhaps taxes, or the costs of operating the asset if it is a capital good) then we also must subtract these carrying costs c. In addition, the asset may lose value over time, in which case we have to subtract the depreciation rate d. (In the case of an asset that only lasts one period — a loan to be paid back in full the next period, say — d will be equal to one.) On the other hand, owning an asset can have benefits beyond the yield. In particular, an asset can be sold or used as collateral. If this is easy to do, ownership of the asset allows you to make payments now, without having to wait for its yield in the future. We call the value of the asset for making unexpected payments its liquidity premium, l. The market value of long-lasting assets may also change over time; assuming resale is possible, these market value changes will produce a capital gain g (positive or negative), which must be added to the return. Finally, you may place a lower value on the payments from the asset simply because they take place in the future; this might be because your needs now are more urgent than you expect them to be then, or simply because you prefer income in the present to income in the future. Either way, we have to subtract this pure time-substitution rate i.
So the value of an asset costing one unit (of whatever numeraire) will be 1 + y – r – c – d + l + g – i.
(EDIT: On rereading, this could use some clarification:
Of course all the terms can take on different (expected) values in different time periods, so they are vectors, not scalars. But if we assume they are constant, and that the asset lasts forever (i.e. a perpetuity), then we should write its equilibrium value as: V = Y/i, where Y is the total return in units of the numeraire, i.e. Y = V(y – r – c + l + g) and i is the discount rate. Divide through both sides by V/i and we have i = y – r – c + l + g. We can now proceed as below.)
In equilibrium, you should be just indifferent between purchasing and not purchasing this asset, so we can write:
y – r – c – d + l + g – i = 0, or
(1) y = r + c + d – l – g + i
So far, there is nothing controversial.
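As a quick numerical sketch of equation (1), with magnitudes that are made up purely for illustration:

```python
# Hypothetical magnitudes, chosen only to illustrate equation (1).
r = 0.010  # risk discount
c = 0.005  # carrying costs
d = 0.000  # depreciation (zero for a perpetuity)
l = 0.008  # liquidity premium
g = 0.002  # expected capital gain
i = 0.020  # pure time-substitution rate

# Equation (1): the yield at which you are just indifferent to holding the asset
y = r + c + d - l - g + i
print(round(y, 3))  # 0.025
```

Anything that raises the risk or carrying-cost terms raises the required yield; anything that raises the liquidity premium or expected capital gain lowers it.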
In formal economics, from Bohm-Bawerk through Cassel, Fisher and Samuelson to today’s standard models, the practice is to simplify this relationship by assuming that we can safely ignore most of these terms. Risk, carrying costs and depreciation can be netted out of yields, capital gains must be zero on average, and liquidity is assumed not to matter or just ignored. So then we have:
(2) y = i
In these models, it doesn’t matter if we use the term “interest rate” to mean y or to mean i, since they are always the same.
This assumption is appropriate for a world where there is only one kind of asset — a risk-free contract that exchanges one good in the present for 1 + i goods in the future. There’s nothing wrong with exploring what the value of i would be in such a world under various assumptions.
The problem arises when we carry equation (2) over to the real world and apply it to the yield of some particular asset. On the one hand, the yield of every existing asset reflects some or all of the other terms. And on the other hand, every contract that involves payments in more than one period — which is to say, every asset — equally incorporates i. If we are looking for the “interest rate” of economic theory in the economic world we observe around us, we could just as well pick the rent-price ratio for houses, or the profit rate, or the deflation rate, or the ratio of the college wage premium to tuition costs. These are just the yields of a house, of a share of the capital stock, of cash and of a college degree respectively. All of these are a ratio of expected future payments to present cost, and should reflect i to exactly the same extent as the yield of a bond does. Yet in everyday language, it is the yield of the bond that we call “interest”, even though it has no closer connection to the interest rate of theory than any of these other yields do.
This point was first made, as far as I know, by Sraffa in his review of Hayek’s Prices and Production. It was developed by Keynes, and stated clearly in chapters 13 and 17 of the General Theory.
For Keynes, there is an additional problem. The price we observe as an “interest rate” in credit markets is not even the y of the bond, which would be i modified by risk, expected capital gains and liquidity. That is because bonds do not trade against baskets of goods. They trade against money. When we see a bond being sold with a particular yield, we are not observing the exchange rate between a basket of goods equivalent to the bond’s value today and baskets of goods equivalent to its yield in the future. We are observing the exchange rate between the bond today and a quantity of money today. That’s what actually gets exchanged. So in equilibrium the price of the bond is what equates the expected returns on the two assets:
(3) y_B – r_B + l_B + g_B – i = l_M – i
(Neither bonds nor money depreciate or have carrying costs, and money has no risk. If our numeraire is money then money also cannot experience capital gains. If our numeraire was a basket of goods instead, then -g would be expected inflation, which would appear on both sides and cancel out.)
What we see is that i appears on both sides, so it cancels out. The yield of the bond is given by:
(4) y_B = r_B – g_B + (l_M – l_B)
The yield of the bond — the thing that in conventional usage we call the “interest rate” — depends on the risk of the bond, the expected price change of the bond, and the liquidity premium of money compared with the bond. Holding money today, and holding a bond today, are both means to enable you to make purchases in the future. So the intertemporal substitution rate i does not affect the bond yield.
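To see that equation (4) is just arithmetic on the non-i terms, here is the same calculation with hypothetical values (none of them come from the post):

```python
# Hypothetical values, for illustration only.
r_B = 0.015   # risk discount on the bond
g_B = -0.005  # expected capital loss on the bond
l_M = 0.020   # liquidity premium of money
l_B = 0.004   # liquidity premium of the bond

# Equation (4): i has cancelled out, so it never appears here.
y_B = r_B - g_B + (l_M - l_B)
print(round(y_B, 3))  # 0.036
```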
(We might ask whether the arbitrage exists that would allow us to speak of a general rate of time-substitution i in real economies at all. But for present purposes we can ignore that question and focus on the fact that even if there is such a rate, it does not show up in the yields we normally call “interest rates”.)
This is the argument as Keynes makes it. It might seem decisive. But monetarists would reject it on the grounds that nobody in fact holds money as a store of value, so equation (3) does not apply. The bond-money market is not in equilibrium, because there is zero demand for money beyond that needed for current transactions at any price. (The corollary of this is the familiar monetarist claim that any change in the stock of money must result in a proportionate change in the value of transactions, which at full employment means a proportionate rise in the price level.) From the other side, endogenous money theorists might assert that the money supply is infinitely elastic for any credit-market interest rate, so l_M is endogenous and equation (4) is underdetermined.
As criticisms of the specific form of Keynes’ argument, these are valid objections. But if we take a more realistic view of credit markets, we come to the same conclusion: the yield on a credit instrument (call this the “credit interest rate”) has no relationship to the intertemporal substitution rate of theory (call this the “intertemporal interest rate.”)
Suppose you are buying a house, which you will pay for by taking out a mortgage equal to the value of the house. For simplicity we will assume an amortizing mortgage, so you make the same payment each period. We can also assume the value of housing services you receive from the house will also be the same each period. (In reality it might rise or fall, but an expectation that the house will get better over time is obviously not required for the transaction to take place.) So if the purchase is worth making at all, then it will result in a positive income to you in every period. There is no intertemporal substitution on your side. From the bank’s point of view, extending the mortgage means simultaneously creating an asset — their loan to you — and a liability — the newly created deposit you use to pay for the house. If the loan is worth making at all, then the expected payments from the mortgage exceed the expected default losses and other costs in every period. And the deposits are newly created, so no one associated with the bank has to forego any other expenditure in the present. There is no intertemporal substitution on the bank’s side either.
(It is worth noting that there are no net lenders or net borrowers in this scenario. Both sides have added an asset and a liability of equal value. The language of net lenders and net borrowers is carried over from models with consumption loans at the intertemporal interest rate. It is not relevant to the credit interest rate.)
If these transactions are income-positive for all periods for both sides, why aren’t they carried to infinity? One reason is that the yields for the home purchaser fall as more homes are purchased. In general, you will not value the housing services from a second home, or the additional housing services of a home that costs twice as much, as much as you value the housing services of the home you are buying now. But this only tells us that for any given interest rate there is a volume of mortgages at which the market will clear. It doesn’t tell us which of those mortgage volume-interest rate pairs we will actually see.
The answer is on the liquidity side. Buying a house makes you less liquid — it means you have less flexibility if you decide you'd like to move elsewhere, or if you need to reduce your housing costs because of an unexpected fall in income or rise in other expenses. You also have a higher debt-income ratio, which may make it harder for you to borrow in the future. The loan also makes the bank less liquid — since its asset-capital ratio is now higher, there are more states of the world in which a fall in income would require it to sell assets or issue new liabilities to meet its scheduled commitments, which might be costly or, in a crisis, impossible. So the volume of mortgages rises until the excess of housing service value over debt service costs makes taking out a mortgage just worth the incremental illiquidity for the marginal household, and the excess of mortgage yield over funding costs makes issuing a new mortgage just worth the incremental illiquidity for the marginal bank. (Incremental illiquidity in the interbank market may — or may not — mean that funding costs rise with the volume of loans, but this is not necessary to the argument.)
Monetary policy affects the volume of these kinds of transactions by operating on the l terms. Normally, it does so by changing the quantity of liquid assets available to the financial system (and perhaps directly to the nonfinancial private sector as well). In this way the central bank makes banks (and perhaps households and businesses) more or less willing to accept the incremental illiquidity of a new loan contract. Monetary policy has nothing to do with substitution between expenditure in the present period and expenditure in some future period. Rather, it affects the terms of substitution between more and less liquid claims on income in the same future period.
Note that changing the quantity of liquid assets is not the only way the central bank can affect the liquidity premium. Banking regulation, lender of last resort operations and bailouts also change the liquidity premium, by changing the subjective costs of bank balance sheet expansion. An expansion of the reserves available to the banking system makes it cheaper for banks to acquire a cushion to protect themselves against the possibility of an unexpected fall in income. This will make them more willing to hold relatively illiquid assets like mortgages. But a belief that the Fed will take emergency action to prevent a bank from failing in the event of an unexpected fall in income also increases its willingness to hold assets like mortgages. And it does so by the same channel — reducing the liquidity premium. In this sense, there is no difference in principle between monetary policy and the central bank's role as bank supervisor and lender of last resort. This is easy to understand once you think of "the interest rate" as the price of liquidity, but impossible to see when you think of "the interest rate" as the price of time substitution.
It is not only the central bank that changes the liquidity premium. If mortgages become more liquid — for instance through the development of a regular market in securitized mortgages — that reduces the liquidity cost of mortgage lending, exactly as looser monetary policy would.
The irrelevance of the time-substitution rate i to the credit-market interest rate y_B becomes clear when you compare observed interest rates with other prices that also should incorporate i. Courtesy of commenter rsj at Worthwhile Canadian Initiative, here’s one example: the Baa bond rate vs. the land price-rent ratio for residential property.
Both of these series are the ratio of one year's payment from an asset, to the present value of all future payments. So they have an equal claim to be the "interest rate" of theory. But as we can see, none of the variation in credit-market interest rates (y_B, in my terms) shows up in the price-rent ratio. Since variation in the time-substitution rate i should affect both ratios equally, this implies that none of the variation in credit-market interest rates is driven by changes in the time-substitution interest rate. The two "interest rates" have nothing to do with each other.
(Continued here.)
EDIT: Doesn’t it seem strange that I first assert that mortgages do not incorporate the intertemporal interest rate, then use the house price-rent ratio as an example of a price that should incorporate that rate? One reason to do this is to test the counterfactual claim that interest rates do, after all, incorporate Samuelson’s interest rate i. If i were important in both series, they should move together; if they don’t, it might be important in one, or in neither.
But beyond that, I think housing purchases do have an important intertemporal component, in a way that loan contracts do not. That's because (with certain important exceptions we are all aware of) houses are not normally purchased entirely on credit. A substantial fraction of the price is paid upfront. In effect, most house purchases are two separate transactions bundled together: A credit transaction (for, say, 80 percent of the house value) in which both parties expect positive income in all periods, at the cost of less liquid balance sheets; and a conceptually separate cash transaction (for, say, 20 percent) in which the buyer foregoes present expenditure in return for a stream of housing services in the future. Because house purchases must clear both of these markets, they incorporate i in a way that loans do not. But note, i enters into house prices only to the extent that the credit-market interest rate does not. The more important the credit-market interest rate is in a given housing purchase, the less important the intertemporal interest rate is.
This is true in general, I think. Credit markets are not a means of trading off the present against the future. They are a means of avoiding tradeoffs between the present and the future.
## 5 thoughts on “The Interest Rate and the Interest Rate”
1. Data Tutashkhia says:
I haven't read all this carefully, nor do I understand any of the technical stuff, but just from looking at the graph:
your rent/price-red line there refers to the asset (land) that is scarce and is supposed to appreciate (roughly) with inflation, while your bond-blue line refers to the asset (bond's face value) that is constant.
So, the red line is normalized for inflation, while the blue line goes up and down reflecting the expectation for the inflation at any given moment. And so the blue line really is the interest rate, and the red line is, well, just rent.
Does it make sense, in the layman's world?
2. Philip,
Thanks for the response. It's interesting that Brad DeLong had the same reaction — that there is no reason to think of asset yields as incorporating separate terms for liquidity and risk. But he thinks it's all risk, whereas you think it's all liquidity! Personally, I think there are reasons to think about both. For example, future Social Security benefits carry very little risk, but they are very illiquid, since they cannot be transferred to a third party or pledged as collateral.
This does not matter for the point here, though. The point is just that the margin on which the interest rate is set is not consumption today vs. consumption tomorrow, but more dangerous vs. safer balance sheet positions. Whether we think of "danger" here as risk or liquidity or both is secondary.
One thing I do appreciate is that you describe my post as a restatement of Keynes' liquidity preference theory of interest. That's exactly right. The main problems with JMK's statement, in my opinion, are that he thinks of liquidity only in terms of the asset side of the balance sheet, and he is inconsistent in using liquidity as a generic property of assets and as a synonym for money, in ways that (again IMO) create serious contradictions in his argument and open the door for a reading in terms of what Perry Mehrling calls "monetary Walrasianism." What I'm trying to do here is develop a statement of the liquidity preference theory of interest rates that reflects the fact that liquidity is just as much a property of the liability side as of the asset side of balance sheets.
A third problem with Keynes' presentation is that he ignores the transaction demand for money, and the accelerator principle of investment, in order to produce a model with a unidirectional causal structure from financial markets to investment to output, rather than a jointly determined equilibrium. This is the standard critique of liquidity preference as JMK stated it, and the problem that ISLM "corrects." But it is not such a big problem, in my opinion.
So I agree with you that Keynes' argument is a bit of a muddle. I don't think my ideas are muddled (of course not!), but I agree the presentation here is less clear than it could have been. It's a work in progress.
1. 1) I'm not talking about the liquidity of an asset. I'm talking about the fact that liquidity preference determines interest rates. This is entirely different. Of course some assets are riskier than others. But we're asking what determines yields. That's a different question.
2) It does matter because you've included a variable that can give a quanta of risk. Once you allow risk to be objectively quantified then you can model that risk and… poof! Liquidity preference disappears.
After all, why would a rational agent need liquidity preference in a world where all risk is modeled? Rather they would just choose their preferred level of risk (0, 0.5, 0.9999… whatever). We can then start drawing indifference curves and off we go!
2. I'm not talking about the liquidity of an asset. I'm talking about the fact that liquidity preference determines interest rates. This is entirely different. Of course some assets are riskier than others. But we're asking what determines yields. That's a different question.
It looks like the same question to me. Yields, like prices, are relative. To say that liquidity preference determines interest rates, is the same as saying that the relative liquidity of (some set of) assets determines their relative yields.
It does matter because you've included a variable that can give a quanta of risk. Once you allow risk to be objectively quantified then you can model that risk and… poof! Liquidity preference disappears.
I don't agree. All that liquidity preference requires is that not all future cashflows can be mobilized to make payments today. Fundamental uncertainty is one way of motivating that, but it's not logically necessary. Again, Social Security benefits are illiquid not because of uncertainty but because of legal restrictions on their sale or hypothecation.
Of course you are right that a great deal of mainstream work (tho not all of it!) ignores uncertainty and liquidity and thinks all asset prices can be reduced to stochastic risk. That leads to a lot of silliness. But I don't see why we have to go to the opposite extreme. Certainly Keynes was willing to describe future contingencies as characterized by both risk and uncertainty. Now, it may well turn out to be the case that risk in the conventional sense is not very interesting, but the proposition that no events in the future have calculable probabilities seems hard to defend. Why is this the particular hill you want to die on?
After all, why would a rational agent need liquidity preference in a world where all risk is modeled?
I'm not understanding how you go from some risk to all risk.
https://savvycalculator.com/fuel-efficiency-calculator/
# Fuel Efficiency Calculator
## About Fuel Efficiency Calculator (Formula)
A fuel efficiency calculator is a tool used to estimate or calculate the fuel efficiency of a vehicle based on the distance traveled and the amount of fuel consumed. The formula for calculating fuel efficiency involves dividing the distance traveled by the fuel consumption.
Here is the basic formula for calculating fuel efficiency:
Fuel Efficiency = Distance Traveled / Fuel Consumed
Let’s break down the formula components:
1. Distance Traveled: This represents the total distance traveled by the vehicle and is usually measured in miles or kilometers. It can be obtained from sources such as odometer readings or trip information.
2. Fuel Consumed: Fuel consumed refers to the amount of fuel consumed by the vehicle during the specified distance traveled. It is typically measured in units such as gallons or liters and can be obtained by tracking fuel fill-ups or using data from the vehicle’s fuel consumption monitoring system.
By using the above formula and plugging in the specific values for distance traveled and fuel consumed, you can calculate the fuel efficiency of the vehicle. The result will typically be expressed as a unit of distance per unit of fuel, such as miles per gallon (MPG) or kilometers per liter (km/L).
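A minimal sketch of the calculation (the function name is ours, not from any particular library):

```python
def fuel_efficiency(distance, fuel_consumed):
    """Distance per unit of fuel, e.g. miles / gallons -> MPG."""
    if fuel_consumed <= 0:
        raise ValueError("fuel consumed must be positive")
    return distance / fuel_consumed

# 400 miles on 12.5 gallons of fuel:
print(fuel_efficiency(400, 12.5))  # 32.0
```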
It’s important to note that fuel efficiency can vary based on driving conditions, vehicle type, maintenance, and other factors. Therefore, the calculated fuel efficiency is an estimate and may not reflect the actual performance under all circumstances.
For accurate fuel efficiency calculations, it is recommended to gather data over a representative period, consider different driving scenarios, and consult the vehicle’s manufacturer or official specifications for more precise information about fuel consumption.
https://plainmath.net/7194/researchers-compare-effectiveness-softener-filtering-softener-filtering
# Researchers wanted to compare the effectiveness of a water softener used with a filtering process with a water softener used without filtering. Ninety
Researchers wanted to compare the effectiveness of a water softener used with a filtering process with a water softener used without filtering. Ninety locations were randomly divided into two groups of equal size. Group A locations used a water softener and the filtering process, while group B used only the water softener. At the end of three months, a water sample was tested at each location for its level of softness. (Water softness was measured on a scale of 1 to 5, with 5 being the softest water.) The results were as follows: Group A (water softener and filtering) ${x}_{1}=2.1$
${s}_{1}=0.7$ Group B (water softener only) ${x}_{2}=1.7$
${s}_{2}=0.4$ Determine, at the 90% confidence level, whether there is a difference between the two types of treatments.
stuth1
Group A ${x}_{1}=2.1$
${s}_{1}=0.7$
${n}_{1}=45$ Group B ${x}_{2}=1.7$
${s}_{2}=0.4$
${n}_{2}=45$ $S.E.=\sqrt{\frac{\left(0.7{\right)}^{2}}{45}+\frac{\left(0.4{\right)}^{2}}{45}}=0.120$
$d={x}_{1}-{x}_{2}=2.1-1.7=0.4$
$\therefore C.I.=d±{t}_{c}\cdot S.E.$
$=0.4±1.65\left(0.120\right)$
$=0.4±0.198$
$\therefore C.I.=\left(0.202,0.598\right)$
Since 0 does not lie in this interval, we conclude at the 90% confidence level that there is a difference between the two treatments.
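The same computation, sketched in Python (standard library only) to check the arithmetic above:

```python
import math

# Summary statistics from the problem
x1, s1, n1 = 2.1, 0.7, 45  # Group A: softener + filtering
x2, s2, n2 = 1.7, 0.4, 45  # Group B: softener only
z = 1.65                   # critical value used above for the 90% level

se = math.sqrt(s1**2 / n1 + s2**2 / n2)
d = x1 - x2
lo, hi = d - z * se, d + z * se
print(round(se, 3), round(lo, 3), round(hi, 3))  # 0.12 0.202 0.598
```

Since the whole interval lies above zero, the difference between the two treatments is significant at this level.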
www.marketsemerging.com
# BMI Calculator: What Do Your Results Mean?
Body mass index (BMI) is a metric calculated from a person's height and weight. It helps in understanding the body fatness of most people.
BMI is an easy-to-calculate metric of obesity. It is based on the ratio of a person's weight and height. It is beneficial to know BMI as it can help in identifying the disease risk to a person.
BMI is good for measuring rates of obesity in a population. It allows researchers to understand how the rates of obesity and overweight differ over time. Furthermore, as BMI is a general measure of obesity, it can help in understanding the rates between populations.
The information received by calculating the BMI of a population can help researchers with various studies, such as understanding how dietary patterns can affect the risk of obesity in many people. BMI can also help a physician understand the general risk to a person due to diseases that can be caused by obesity.
Calculating BMI and What its Results Mean
A person’s body mass index can help him/her understand if he/she has a healthy weight. The formula to calculate BMI is easy-
Weight (kg) / [Height (m)]²
Let’s take a look at an example-
Height- 165 cm (1.65 m)
Weight- 68 kg
BMI- 68 / (1.65×1.65)
BMI- 24.98
BMI Categories
• If a person has BMI less than 18.5, then he/she is underweight.
• If a person has BMI between 18.5 and 24.9, then he/she is normal weight.
• If a person has BMI between 25 and 29.9, then he/she is overweight.
• If a person has BMI above 30, then he/she is obese.
Furthermore, obesity is divided into 3 classifications-
• Class I- BMI between 30 and 34.9
• Class II- BMI between 35 and 39.9
• Class III- BMI above 40
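The formula and cutoffs above translate directly into code; here is a minimal sketch (the function names are ours):

```python
def bmi(weight_kg, height_m):
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(b):
    """Adult weight category from the cutoffs listed above."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(68, 1.65)
print(round(b, 2), bmi_category(b))  # 24.98 normal weight
```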
The well-being of a person can get affected if his/her BMI is 30 or above. He/she might face problems like stroke, gall bladder diseases and gallstones, heart disease, diabetes, etc.
Furthermore, a person might suffer from health problems if his/her BMI is between 15 and 16. Being underweight can cause surgical complication risks. Females can use a BMI calculator for women to understand their BMI and know if they might face any risk.
BMI for Kids
A child’s body changes quickly. Therefore, the standard BMI result isn’t a good option for children. Generally, doctors use another method to calculate a child’s BMI. They first take a standard BMI calculation based on the child’s weight and height. The BMI value is compared with the BMI value of other kids of the same gender and age. For this, the doctors take the help of percentiles and percentages.
The weight categories for kids based on percentiles are-
• If BMI is less than the 5th percentile, then the child is underweight.
• If BMI is above the 5th percentile but below the 85th percentile, then the child has normal weight.
• If BMI is above 85th percentile but below 95th percentile, then the child is overweight.
• If BMI is equal to or greater than 95th percentile, then the child is obese.
https://www.gamedev.net/topic/622201-aabb-aabb-collision-response-sliding/
# AABB - AABB Collision response ( sliding )
Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.
3 replies to this topic
### #1xynapse Members
Posted 21 March 2012 - 11:21 AM
Here is the situation:
• I have a room with objects inside
• Each object has its own AABB
Here is what i do:
• I check for collision between objects by checking AABB-AABB intersections
Here is what i need:
• How do i calculate the 'sliding' / response between those AABBs?
Here is what i have for AABB:
CBoundingBox::~CBoundingBox()
{
}
int CBoundingBox::classify(const CBoundingBox& rOther)
{
if( rOther.min.x >= min.x && rOther.max.x <= max.x &&
rOther.min.y >= min.y && rOther.max.y <= max.y &&
rOther.min.z >= min.z && rOther.max.z <= max.z )
{
return INSIDE;
}
if( max.x < rOther.min.x || min.x > rOther.max.x )
return OUTSIDE;
if( max.y < rOther.min.y || min.y > rOther.max.y )
return OUTSIDE;
if( max.z < rOther.min.z || min.z > rOther.max.z )
return OUTSIDE;
return INTERSECTS;
}
CVector3 CBoundingBox::closestPointOn(const CVector3& vPoint)
{
CVector3 xClosestPoint;
xClosestPoint.x = (vPoint.x < min.x)? min.x : (vPoint.x > max.x)? max.x : vPoint.x;
xClosestPoint.y = (vPoint.y < min.y)? min.y : (vPoint.y > max.y)? max.y : vPoint.y;
xClosestPoint.z = (vPoint.z < min.z)? min.z : (vPoint.z > max.z)? max.z : vPoint.z;
return xClosestPoint;
}
bool CBoundingBox::hasCollided(const CBoundingBox& rOther) const
{
if( min.x > rOther.max.x ) return false;
if( max.x < rOther.min.x ) return false;
if( min.y > rOther.max.y ) return false;
if( max.y < rOther.min.y ) return false;
if( min.z > rOther.max.z ) return false;
if( max.z < rOther.min.z ) return false;
return true;
}
bool CBoundingBox::hasCollided(const CVector3& vPosition) const
{
return vPosition.x <= max.x && vPosition.x >= min.x &&
vPosition.y <= max.y && vPosition.y >= min.y &&
vPosition.z <= max.z && vPosition.z >= min.z ;
}
CVector3 CBoundingBox::getCenter() const
{
return (min + max) * 0.5f;
}
// NOTE: the signature of the next accessor was lost in the original post;
// given the body, it is presumably a half-size ("radius") getter:
float CBoundingBox::getRadius() const
{
return getSize() * 0.5f;
}
float CBoundingBox::getSize() const
{
return (max - min).magnitude();
}
Longer description:
I am absolutely sure you guys know what the requirement is here: I want "Player" to slide on other objects' AABBs when in collision.
Can somebody please shed a bit of light on the calculations required ?
perfection.is.the.key
### #2Net Gnome Members
Posted 21 March 2012 - 11:51 AM
you need to find the axis on which the boxes overlap the least, then move them apart by that overlap along that axis's normal. This cancels only the movement component causing the collision, while the other axial movements continue, allowing you to slide against the AABB.
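A rough sketch of that idea in C++ (not from this thread; the struct and function names are made up for illustration): compute the penetration depth on each axis, then push the boxes apart only along the axis of least penetration.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Minimum translation vector that separates overlapping boxes a and b.
// Assumes the boxes already intersect on all three axes.
Vec3 mtv(const AABB& a, const AABB& b)
{
    // How far we would have to push `a` in the +/- direction on each axis.
    float dxPos = b.max.x - a.min.x, dxNeg = a.max.x - b.min.x;
    float dyPos = b.max.y - a.min.y, dyNeg = a.max.y - b.min.y;
    float dzPos = b.max.z - a.min.z, dzNeg = a.max.z - b.min.z;

    float px = (dxPos < dxNeg) ? dxPos : -dxNeg;
    float py = (dyPos < dyNeg) ? dyPos : -dyNeg;
    float pz = (dzPos < dzNeg) ? dzPos : -dzNeg;

    // Resolve only along the axis of least penetration; the untouched
    // axes keep their movement, which is what produces the sliding.
    Vec3 out = { 0.0f, 0.0f, 0.0f };
    float ax = std::fabs(px), ay = std::fabs(py), az = std::fabs(pz);
    if (ax <= ay && ax <= az)      out.x = px;
    else if (ay <= az)             out.y = py;
    else                           out.z = pz;
    return out;
}
```

Adding the returned vector to the player's position each frame cancels the blocked component of motion while the tangential components pass through.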
### #3xynapse Members
Posted 21 March 2012 - 12:05 PM
Net Gnome,
Any chance for any code snippet ?
perfection.is.the.key
### #4Net Gnome Members
Posted 21 March 2012 - 03:54 PM
my journal Conquering SAT for TileMaps has a code reference in there, for Separating Axis Theorem for tilemaps (2D), but here are some excellent resources that should supply you with a lot of food for thought:
http://www.codezealot.org/archives/55
http://www.realtimerendering.com/intersections.html
http://www.gamasutra.com/view/feature/3383/simple_intersection_tests_for_games.php
http://www.slideserve.com/paul2/an-effective-hardware-architecture-for-bump-mapping-using-angular-operation
| 1,493,606,227,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2017-17/segments/1492917126538.54/warc/CC-MAIN-20170423031206-00060-ip-10-145-167-34.ec2.internal.warc.gz
| 680,189,048
| 20,693
|
# An Effective Hardware Architecture for Bump Mapping Using Angular Operation - PowerPoint PPT Presentation
## An Effective Hardware Architecture for Bump Mapping Using Angular Operation
Seung-Gi Lee†, Woo-Chan Park, Won-Jong Lee, Tack-Don Han, and Sung-Bong Yang
Media System Lab. (National Research Lab.)
Dept. of Computer Science
Yonsei University
### Contents
• Introduction
• Background and related work
• Bump mapping algorithm
• Vector rotation
• Illumination calculation
• Hardware architecture
• Experimental results
• Conclusions
### Background: bump mapping
• Represent the bumpy parts of the object surface in detail using geometry mapping without complex modeling
• Three steps
• Fetch the height values from a 2D bump map
• Perturb the normal vector N
• Calculate the illumination with three vectors, the perturbed vector N’, the light vector L, and the halfway vector H
• A large amount of per-pixel computation is required.
### Background: reference space
• The normal vector perturbation can be preprocessed by defining the surface-independent space (reference space). [Peercy et al., Ernst et al.]
• Instead, transformations from the object space into the reference space should be provided for each pixel (or for each small polygon).
• Definition of a 3×3 matrix & a 3×3 matrix multiplication
• The normalization of the vectors for the illumination calculation is also required.
### Background: polar coordinates
• Representation of a vector P
• P = (φP, θP)
• φP is the angle between the x-axis and the vector Q (the projection of P onto the xy-plane)
• θP is the angle between P and the z-axis
• An effective approach from the viewpoint of hardware requirements [Kim et al., Ikedo et al., Kugler]
• Only two angles
• No normalization of vectors
• However, either the matrix multiplication for the transformation or a large map for the normal vector perturbation is still required.
### Previous work related with PCS
• Support bump, reflection, refraction, and texture mapping in a single LSI chip [Ikedo et al.]
• The classical straightforward method requires a large amount of logic for matrix operations
• May produce the incorrect reflection angle to calculate the intensity of specular light
• IMEM: integrate the arithmetic units and the reference tables into one dedicated memory chip [Kugler]
• Support the simple normal vector perturbation method
• Reduce the amount of computations by using the pre-computed LUTs and maps
• However, a map of large size over 3 Mbytes is required
### Previous work related with PCS
• Hardware architecture supporting the bump-mapped illumination by using the Phong illumination hardware [Kim et al.]
• Give a small reduction to the hardware requirements for the illumination calculation
### Overview of this paper
• We propose a new transformation method and present its hardware architecture.
• Direct transformation of the vectors into the reference space
• No hardware for matrix transformation; only a small amount of hardware logic for the vector rotations
• Also, we present an effective illumination calculation hardware.
• Use of “the law of cosine”
### Processing flow
• Vector rotation stage
• 3D object space → 3D reference space
• Use of the angular operation
• Use of the projection onto the plane and the proportion onto the sphere
• Bump vector fetch stage
• The perturbed normal vectors are fetched from the bump vector map.
• Illumination calculation stage
• The inner products are computed.
• The illumination is calculated by referring to the diffuse and specular tables.
### Vector rotation
The first rotation: −φN around the z-axis
The second rotation: −θN around the y-axis
• The polar coordinates of the normal vector N are used as the rotation angles.
• A corresponds to a light vector L and a halfway vector H.
• The first rotation makes it possible to rotate accurately around the y-axis.
• As a result, A in the object space is transformed into Aref in the reference space.
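Numerically, the two rotations can be sketched as follows (my reconstruction of the transform described above, not the authors' hardware datapath). Applying Rz(−φN) and then Ry(−θN) sends the surface normal N onto the +z axis, and the same pair of rotations carries any vector A (light or halfway vector) into the reference space:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def to_reference(v, phi_n, theta_n):
    # first rotate by -phi_N about z, then by -theta_N about y
    return rot_y(-theta_n) @ rot_z(-phi_n) @ v

# The normal itself, built from its polar angles, lands on (0, 0, 1):
phi_n, theta_n = 0.7, 0.4
N = np.array([np.sin(theta_n) * np.cos(phi_n),
              np.sin(theta_n) * np.sin(phi_n),
              np.cos(theta_n)])
print(to_reference(N, phi_n, theta_n))
```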
### Geometric information
• Geometric information required to find the polar coordinates (φA′, θA′) of the transformed vector A′
• The calculation of x
### Calculation of φA′
• The projection of A′ onto the xy-plane is required to find the angle φA′
### Geometric relationship for A’
• We assume two arbitrary vectors that begin at the origin and end at the points where a plane parallel to the xz-plane intersects the circles Cm and Cmyz.
• The geometric relationship of the vectors on a sphere
• When these vectors move on Cm and Cmyz under the above assumption, the ratio of the angular variation to the variation range of each vector for Cm is equal to that for Cmyz.
### Calculation of θprop
• θprop can be calculated by using the angle of the projection of Amn onto the yz-plane.
• The projection of Amn and its related components onto the yz-plane
• The y coordinate of a point at which the extension line of intersects a unit circle C0 in the yz-plane is y0/rm.
### Calculation of θA′
• Geometric information required to find the polar coordinates (φA′, θA′) of the transformed vector A′
• X, Y, Z
• The law of cosine
### Illumination calculation
• Phong illumination model
• In order to calculate the inner products of the vectors, the vectors should be transformed into the vectors in the Cartesian coordinates.
• The inner products of the transformed vectors are calculated as follows.
### Illumination calculation
• The number of multiplications: 6 → 2
• The number of cosines: 10 → 6
• However, applying the law of cosine to these equations makes it possible to reduce the amount of computations for the inner products.
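The reduction comes from what is usually called the spherical law of cosines: for unit vectors given by polar angles (φ, θ), the dot product can be evaluated directly rather than by converting both vectors to Cartesian coordinates first. A sketch (my reading of the slide's "law of cosine", with illustrative angles):

```python
import math

# For unit vectors given by polar angles (phi, theta), the dot product
# satisfies  N'.L = cos(t1)cos(t2) + sin(t1)sin(t2)cos(p1 - p2).

def dot_polar(phi1, theta1, phi2, theta2):
    return (math.cos(theta1) * math.cos(theta2)
            + math.sin(theta1) * math.sin(theta2) * math.cos(phi1 - phi2))

def dot_cartesian(phi1, theta1, phi2, theta2):
    # Reference implementation: convert to Cartesian and take the dot product.
    v = lambda p, t: (math.sin(t) * math.cos(p), math.sin(t) * math.sin(p), math.cos(t))
    return sum(a * b for a, b in zip(v(phi1, theta1), v(phi2, theta2)))

# Both give the same inner product, but the polar form needs fewer
# multiplies and trig lookups per pixel.
print(dot_polar(0.3, 1.0, 1.2, 0.7))
```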
### Proposed bump mapping hardware
• The illumination calculation unit consists of two parts that calculate the intensities of the diffuse and the specular light.
• The intensities of lights are obtained from the light tables referred to by the values of the inner products.
• This unit can be implemented with 6 cosine tables, 1 diffuse table, 1 specular table, 2 multipliers, and 13 adders.
### The calculation of the inner products
• Light table method
• The table entries are indexed by scalar values.
• The size of the table is 2^8×16 ~ 2^10×16 bits.
### Vector rotation unit
• This unit can be implemented with 6 tables, 3 multipliers, and 8 adders.
• For arbitrary values of yk’s, θprop’s are precomputed by the following equation.
• θprop’s are fetched from Tprop with the indices of yk’s.
### Experimental results
• We modified Mesa 3.0 to implement the conventional method and the proposed method.
• To differentiate the image qualities, we have performed texture- and bump-mapping using various objects with various maps.
• In the case of the wooden wall, there is little difference in quality between the two images; they cannot be distinguished by the naked eye.
Wooden wall : mapping onto a plane
The conventional method
The proposed method
Brick wall : mapping onto a cube
Map of the world : mapping onto a sphere
### Experimental results
• There is also little difference in the image quality as in the case of the previous simulation.
Wooden wall : mapping onto a plane
The conventional method
The proposed method
Brick wall : mapping onto a cube
Map of the world : mapping onto a sphere
### Experimental results
• The images do not look vivid because these mapping methods apply maps converted from 512×256 resolution to 512×512 resolution on the surface of the object.
• However, we can hardly differentiate the image quality between these two images.
Wooden wall : mapping onto a plane
The conventional method
The proposed method
Brick wall : mapping onto a cube
Map of the world : mapping onto a sphere
### Conclusions
• Bump mapping method with an effective vector rotation and illumination calculation algorithm
• Reduces a large amount of computation and hardware
• Generates nearly the same image quality as the conventional method
### Thank you!!!
• This work was supported by the NRL project of the Ministry of Science & Technology of Korea.
• NRL Project homepage
• http://msl.yonsei.ac.kr/3d/
| 2,191
| 10,038
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 2.5625
| 3
|
CC-MAIN-2017-17
|
longest
|
en
| 0.749804
|
pmdh.fp-it.ru
| 1,618,511,133,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-17/segments/1618038087714.38/warc/CC-MAIN-20210415160727-20210415190727-00048.warc.gz
| 76,080,767
| 7,056
|
# Manually calculate correlation matrix
Use the formula (z_x)_i = (x_i − x̄) / s_x and calculate a standardized value for each x_i. How to Calculate a Correlation Matrix - Definition, Formula, Example. Definition: a correlation matrix is a matrix that provides the correlation between all pairs of data sets. In so doing, many of the distortions that affect the Pearson correlation are reduced considerably. Principal Component Analysis. To find a correlation coefficient in Excel, use the CORREL or PEARSON function and get the result in a fraction of a second. Figure 3 – Partial Correlation Matrix.
The Correlation Matrix from a Data Matrix. We can calculate the correlation matrix as R = (1/n) Xs′Xs, where Xs = C X D⁻¹, with C = I_n − (1/n) 1_n 1_n′ denoting a centering matrix and D = diag(s_1, …, s_p) denoting a diagonal scaling matrix. Note that the standardized matrix Xs has entries (x_ij − x̄_j)/s_j. Input the matrix in the text field below in the same format as matrices given in the examples. The value (n − 1) indicates the degrees of freedom. Typically, you use the closing price for each day. The eigenvectors and eigenvalues are taken as the principal components and singular values and used to project the original data. How to calculate the Principal Component Analysis for reuse on more data in scikit-learn.
The default method is Pearson, but you can also compute Spearman or Kendall coefficients. In equation(B) with two variables x and y, it is called the sum of cross products. The example below defines a small 3×2 matrix, centers the data in the matrix, calculates the covariance matrix of the centered data, manually and then the eigendecomposition of the covariance matrix. Please type in the box below two or more samples.
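The "small 3×2 matrix" steps just described can be sketched in NumPy (the data values here are my own illustration):

```python
import numpy as np

# PCA by hand: center a small 3x2 matrix, compute the covariance matrix
# of the centered data, then take its eigendecomposition and project.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
M = A - A.mean(axis=0)              # center each column
C = np.cov(M.T)                     # 2x2 sample covariance matrix
values, vectors = np.linalg.eig(C)  # eigenvalues = variances along components
P = M @ vectors                     # project the data onto the components
print(C)
print(values)
```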
Example 2: Calculate the partial correlation matrix for the data in Figure 1. Please press &39;&92;&92;&39; to start a new sample. Below are the details that they have gathered. In this tutorial, you discovered the Principal Component Analysis machine learning method for dimensionality reduction. The covariance matrix of any sample matrix can be expressed in the following way: where x i is the i&39;th row of the sample matrix. There is no pca() function in NumPy, but we can easily calculate the Principal Component Analysis step-by-step using NumPy functions. · This is a convenient way to calculate a correlation between just two data sets.
The correlation coefficient, or r, always falls between -1 and 1 and assesses the linear relationship between two sets of data points such as x and y. Now that we are done with mathematical theory, let us explore how and where it can be applied in the field of data analytics. This gives you the correlation, r. Load a dataset and calculate the PCA on it and compare the results from the two methods. cormat (), for calculating and visualizing easily a correlation matrix. Correlation is commonly used to test associations between quantitative variables or categorical variables. Steps to Create a Correlation Matrix using Pandas.
How do you calculate correlation in statistics? We have all the values in the above table with n = 4. Solution:Using the formula for corr. Type the samples (comma or space separated, press &39;Enter&39; for a new sample) Name of the samples (Separate with. (Note that for this data the x-values are 3, 3, 6, and the y-values are 2, 3, 4.
Reusable Principal Component Analysis. The formula for correlation is equal to Covariance of return of asset 1 and Covariance of return of asset 2 / Standard. How do you calculate correlation coefficient in Excel? If x n and y n are unrelated, the sum of positive and negative products will tend to zero. This returns a simple correlation matrix showing the correlations between pairs of variables (devices). XYZ laboratory is conducting research on height and weight and is interested in knowing if there is any kind of relationship between these variables. A correlation matrix is used to summarize data, as an input into a more advanced analysis, and as a diagnostic for advanced analyses. The manually calculate correlation matrix partial correlation matrix in range H19:K22 is calculated using the array formula.
Correlation analysis, as a lot of analysts would know is a vital tool for feature selection and multivariate analysis in data preprocessing and exploration. What is the formula for correlation analysis? Running the example first manually calculate correlation matrix prints the origina. · In this post, we will go through how to calculate a correlation matrix in Python with NumPy and Pandas. I want to find the covariance matrix or the correlation matrix.
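A minimal sketch of the "manual" computation, checked against NumPy's built-in np.corrcoef (the data values are illustrative): standardize each column, then form R = ZᵀZ/(n − 1).

```python
import numpy as np

# Standardize each column, then R = Z'Z / (n - 1); this reproduces the
# Pearson correlation matrix that np.corrcoef computes directly.
X = np.array([[1.0, 2.0, 0.5],
              [2.0, 1.0, 1.5],
              [3.0, 4.0, 0.0],
              [4.0, 3.0, 2.0]])
n = X.shape[0]
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R_manual = Z.T @ Z / (n - 1)
R_numpy = np.corrcoef(X, rowvar=False)   # columns are the variables
print(np.allclose(R_manual, R_numpy))    # -> True
```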
Please, deselect the columns containing texts. Ask your questions in the comments below and I will. We can calculate a Principal Component Analysis on a dataset using the PCA() class in the scikit-learn library. obs adds a line to each row of the matrix reporting the number of observations used to calculate the correlation coefficient. corrcoef and Pandas DataFrame. In the above formula, n is the number of samples in the data set. The matrix depicts the correlation between all the possible pairs of values in a table.
Following is the history of Brent crude oil price and Rupee valuation both against dollars that prevailed on an average for those years per below. On the other hand, correlation is dimensionless. Calculate A=XXT 3. Click on the ‘Analyze’ button and select at least 2 variables to calculate the correlation matrix. See more results.
This is employed in feature selection before any kind of statistical modelling or data analysis. Your new data is PX, the new variables (a. This is precisely the range of the correlation values. In this example, the x variable is the height and the y variable is the weight. Solution:Using the above-mentioned formula, we need to first calculate the correlation coefficient. However, sometimes you are given a covariance matrix, but your numerical technique requires a correlation matrix. . How to calculate the Principal Component Analysis from scratch in NumPy.
This is because we divide the value of covariance by the product of standard deviations which have the same units. However, on doing the same, the value of correlation is not influenced by the change in scale of the values. First, we will read data from a CSV fil so we can, in a simple way, have a look at the numpy.
. r = ( 4 * 26,046. each value of p, the cross correlation is computed by shifting y n by pDt and calculating the average p roduct in Equation 83. In simple words, both the terms measure the relationship and the dependency between two variables.
It is a unit-free measure of the relationship between variables. This term can also be defined in the following manner: In the above formula, the numerator of the equation(A) is called the sum of squared deviations. For example, I have store a set of datas in Sasuser. Each random variable (Xi) in the table is correlated with each of the other values in the table (Xj).
If all the values of the given variable are multiplied by a constant and all the values of another variable are multiplied, by a similar or different constant, then the value of covariance also changes. PCA is an operation applied to a dataset, represented by an n x m matrix A that results in a projection of A which we will call B. A correlation matrix is a table showing correlation coefficients between variables. · How to calculate correlation coefficient in Excel To compute a correlation coefficient by hand, you&39;d have to use this lengthy formula. India a developing country wants to conduct an independent analysis whether changes in crude oil prices have affected its rupee value. Instructions: This correlation matrix calculator will provide you with a correlation matrix for a given set of samples. As we see from the formula of covariance, it assumes the units from the product of the units of the two variables.
To help you with implementation if needed, I shall be covering examples in both R and Python. While correlation coefficients lie between -1 and +1, covariance can take any value between -∞ and +∞. listwise handles missing values through listwise deletion, meaning that the entire observation is. 89)2 * (4 * 31,901. · Automatic correlation is a rule-based approach and dynamic values are identified based on the defined rules. This tutorial is divided into 3 parts; they are: 1. In simple words, you are advised to use the covariance matrix when the variable are on similar scales and the correlation matrix when the scales of the variables differ.
Determine whether the movements in crude oil affects movements in Rupee per dollar? This section lists some ideas for extending the tutorial that you may wish to explore. Conversely, is y n tends to follow x n, but with a time delay D, r xy (p) will show a peak at p = D/Dt. Step-by-step instructions for calculating the correlation coefficient (r) for sample data, to determine in there is a relationship between two variables. excel correlation Please SUBSCRIBE: add_user=mjmacarty 1 day ago · Correlation Matrix between A and B In case you want to modify the function to use it to calculate the correlation matrix the only difference is that you should subtract from the original matrices A and. Solution:Using the formul.
Principal Component Analysis, or PCA for short, is a method for reducing the dimensionality of data. It can be thought of as a projection method where data with m-columns (features) is projected into a subspace with m or fewer columns, whilst retaining the essence of the original data. The class is first fit on a dataset by calling the fit() function, and then the original dataset or other data can be projected into a subspace with the chosen number of di. · In finance, the correlation can measure the movement of a stock with that of a benchmark index. Re-run the examples with your own small contrived matrix values. button and find out the covariance matrix of a multivariate sample.
You can easily compute covariance and correlation matrices from data by using SAS software. You can also select the correlation methods (Pearson, Spearman or Kendall). The PCA method can be described and implemented using the tools of linear algebra. Correlation is a function of the covariance. · Correlation Coefficient Formula The correlation coefficient r can be calculated with the above formula where x and y are the variables which you want to test for correlation. You can choose the correlation coefficient to be computed using the method parameter. 88Correlation Coefficient will be-r manually calculate correlation matrix = 0. But Stella, which is focused on Differential equations modeling.
To do this, you need to use Excel&39;s. After gathering a sample of 5000 people for every category and came up with an average weight and average height in that particular group. The correlation matrix of any sample matrix is the quotient of the sample&39;s covariance matrix and the variance of the matrix.
### Manually calculate correlation matrix
email: ucuduce@gmail.com - phone:(815) 393-3922 x 3382
### Baixar manual para placa mae ipx1800g1 - Modulo aberta
-> Manual stata 15
-> Rastra de hoja manual
Sitemap 1
| 2,413
| 11,433
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.734375
| 4
|
CC-MAIN-2021-17
|
latest
|
en
| 0.820558
|
https://www.physicsforums.com/threads/i-am-confuse-in-finding-argumnet-of-complex-number.358327/
| 1,725,918,887,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00690.warc.gz
| 906,838,143
| 18,217
|
# I am confuse in finding Argumnet of Complex Number
• urduworld
In summary: for polar form you can use the same logic. The inverse tangent function only returns angles in the first or fourth quadrant, so if your angle is actually in the 2nd or 3rd quadrant you have to add 180 degrees to the value it gives. A 4th-quadrant result can be left negative, or you can add 360 degrees to express it as a positive angle. You should be able to work out the details for yourself.
urduworld
hi PF
consider (2+2i): then tan^-1(2/2) gives 45 degrees.
But if we consider (-2+2i), the inverse tangent gives -45 degrees, which is not the actual angle; we get the correct answer by adding or subtracting 180 or something like that. I want to know what we have to add or subtract; I am confused about this.
Also, what do we do for the third and fourth quadrants?
I want to know this for the log of a complex number and for polar form.
urduworld said:
hi PF
consider (2+2i) then Tan^-1(2/2) which will be 45 degree
if we consider (-2+2i) then it will be -45 degree but angle will not -45 degree actually we get answer by adding or deducting 180 or some like this i want to know what we have to add or subtract i am confuse about this
You seem to be confused between the reference angle and the angle as measured from the positive real axis. For 2 + 2i, the reference angle and the angle itself are both 45 degrees, or pi/4. For -2 + 2i, the reference angle is also 45 degrees (not -45 degrees), but since the angle is in the second quadrant, the actual angle is 180 - 45 = 135 degrees, or 3pi/4. If you calculate the angle using the inverse tangent function, you have tan-1(-2/2) = -45 degrees. You have to add 180 degrees to this, because your angle is in the 2nd quadrant, so you get 180 + (-45) = 135 degrees again.
The range of the inverse tangent function is (-90, 90) (in degrees), or (-pi/2, pi/2), so if your angle is not in the 1st or 4th quadrants you have to adjust the value to get the angle you need.
If your angle is in the third quadrant, as it is for -2 - 2i, you'll have tan-1(-2/(-2)) = 45 degrees. The actual angle is 180 + 45 = 225 degrees, or 5pi/4.
urduworld said:
also what to do for third and fourth quadrants
i want to know this for Log of complex number and Polar form
Last edited:
So this means I have to add 180 in every case except when the angle is in the first quadrant?
I didn't talk about a fourth quadrant angle, but maybe you can figure out what you need to do. If z = 2 - 2i, the argument would be -45 degrees. As a positive angle, what would it be?
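The quadrant bookkeeping discussed in this thread is exactly what the two-argument arctangent does automatically; for example, in Python:

```python
import math

# math.atan2(y, x) looks at the signs of both x and y, so it returns the
# argument already adjusted to the correct quadrant, in (-pi, pi].

def arg_degrees(z):
    """Argument of a complex number in degrees, in (-180, 180]."""
    return math.degrees(math.atan2(z.imag, z.real))

print(arg_degrees(2 + 2j))    # first quadrant:  45
print(arg_degrees(-2 + 2j))   # second quadrant: 135 (= 180 - 45)
print(arg_degrees(-2 - 2j))   # third quadrant: -135 (or 225 as a positive angle)
print(arg_degrees(2 - 2j))    # fourth quadrant: -45 (or 315 as a positive angle)
```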
## 1. What is the argument of a complex number?
The argument of a complex number is the angle that the complex number forms with the positive real axis on the complex plane. It is typically measured in radians or degrees and can be thought of as the direction of the vector representing the complex number.
## 2. How do you find the argument of a complex number?
The argument of a complex number can be found using the formula arctan(b/a), where a is the real part of the complex number and b is the imaginary part. This formula can also be written as arctan(y/x), where x and y are the coordinates of the complex number on the complex plane.
## 3. Can the argument of a complex number be negative?
Yes, the argument of a complex number can be negative. This occurs when the complex number falls in the third or fourth quadrant of the complex plane, where the angle is measured clockwise from the positive real axis.
## 4. How do you represent the argument of a complex number in mathematical notation?
The argument of a complex number is typically represented using the Greek letter theta (θ). This notation is usually written as arg(z), where z is the complex number in question.
## 5. What is the range of values for the argument of a complex number?
The range of values for the argument of a complex number is between -π and π radians, or -180° and 180° in degrees. This is because the complex plane is periodic, meaning that any angle greater than 180° can be represented by a smaller angle in the range of -π to π.
| 1,100
| 4,151
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.09375
| 4
|
CC-MAIN-2024-38
|
latest
|
en
| 0.920196
|
http://www.webmd.com/fitness-exercise/features/exercise-fitness-tips-improve-your-health?page=4
| 1,406,353,677,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2014-23/segments/1405997894983.24/warc/CC-MAIN-20140722025814-00196-ip-10-33-131-23.ec2.internal.warc.gz
| 1,283,376,684
| 41,591
|
# Fitness & Exercise
Font Size
## Exercise and Fitness Tips to Improve Your Health
### Q. What should my heart rate be during exercise?
Here's how to use the formula:
• Determine your Maximum Heart Rate (MHR) by subtracting your age from 220.
• Then, subtract your resting heart rate (it's best to take this when you first wake up in the morning) from your Maximum Heart Rate to find your Heart Rate Reserve (HRR).
• Multiply your HRR by the percentage of your MHR at which you wish to train (60% to 85% is the usual range for people looking to increase fitness and health), then add your resting heart rate back to get your target rate.
So, assuming an age of 27, a resting heart rate of 70 beats per minute, and a desired training range of 70%, the calculation would look like this:
220 - 27 = 193
193 - 70 = 123
123 x 0.70 ≈ 86
86 + 70 = 156
Remember, this is an estimate, not an absolute. Also keep in mind that athletes may exceed the training zone, and even the maximum heart rate, during high-intensity training.
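The Karvonen calculation above can be wrapped in a small function (a sketch; "220 minus age" is only a rough population estimate of maximum heart rate):

```python
def target_heart_rate(age, resting_hr, intensity):
    """Karvonen formula: heart rate reserve * intensity + resting heart rate."""
    max_hr = 220 - age            # estimated maximum heart rate
    hrr = max_hr - resting_hr     # heart rate reserve
    return round(hrr * intensity) + resting_hr

# The worked example: age 27, resting HR 70, training at 70%
print(target_heart_rate(27, 70, 0.70))  # -> 156
```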
### Q. My weight has hit a plateau. What do I do?
There are several reasons why your weight can hit a plateau, including:
• Losing weight too quickly. When this happens, your metabolism (the rate at which your body burns calories) can slow down because your body senses it is starving. Rapid or large amounts of weight loss can slow your metabolism by as much as 40% in six months.
• Losing muscle. When you lose weight, up to 25% can come from muscle tissue. And since muscle is the engine in your body that burns calories and helps maintain your metabolism, losing it can hinder weight loss. Weightlifting can help preserve and build muscle.
• Reaching your body's particular set point -- the weight and metabolic rate your body is genetically programmed to be. Once you reach that point, it's much harder to lose weight and even if you do, you're likely to regain it. If you're at a weight at which you've hit a plateau in the past, if your body generally seems to gravitate toward that weight, and you're within a BMI (body-mass index) range of 20 to 25, then you may be at your set point.
• Decreasing your physical activity and/or increasing your caloric intake. People lose weight all the time by reducing their caloric intake without doing any exercise, but it's almost impossible to keep weight off without exercising. Many scientists agree that physical activity is the single best predictor of whether a person will maintain a weight loss.
• Other health factors, including thyroid or adrenal gland problems; medications like antidepressants; quitting smoking; menopause; and pregnancy.
Even with any of the above factors, the bottom line to losing weight is eating fewer calories than you burn. Studies show that people almost always underestimate how many calories they're eating. So if you're struggling with weight loss, you're still exercising, and you've ruled out any of the above reasons for weight plateaus, look at your calorie intake.
As for exercise and weight plateaus, sometimes a change in routine can help. Instead of the treadmill, try the bike, or the stepper. Instead of a dance class, try a stretch and tone class. If you're not weight lifting, this would be a good time to start. If you already do aerobic exercise, try adding intervals (short bursts of higher-intensity exercise) to your aerobic workouts. And keep reminding yourself that if you maintain an active lifestyle and continue with healthy eating, you will reach your goals.
https://philosophy-question.com/library/lecture/read/42849-what-are-rational-irrational-and-real-numbers
# What are rational irrational and real numbers?
## What are rational irrational and real numbers?
If a number is terminating or repeating, it must be rational; if it is both nonterminating and nonrepeating, the number is irrational. ... The real numbers include natural numbers or counting numbers, whole numbers, integers, rational numbers (fractions and repeating or terminating decimals), and irrational numbers.
## Is 2.75 rational or irrational?
2.75 is rational: it is a terminating decimal, so it can be written as the fraction 11/4.
## What is whole number integer rational or irrational?
The following diagram shows that all whole numbers are integers, and all integers are rational numbers. Numbers that are not rational are called irrational.
## Is √ 23 a rational or irrational number?
Solution. Steps: i) √23 = √23/1 = p/q, but here p = √23 is not an integer. Hence √23 is an irrational number.
## Why is 18 a rational number?
18 is a rational number because it can be expressed as the quotient of two integers: 18 ÷ 1.
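The terminating-decimal cases above can be checked mechanically. Python's standard-library `fractions` module (used here purely as an illustration) recovers the exact p/q form of any terminating decimal:

```python
from fractions import Fraction

# Every terminating decimal is a ratio of two integers:
print(Fraction('2.75'))   # 11/4
print(Fraction('3.456'))  # 432/125
print(Fraction(18, 1))    # 18, i.e. 18 / 1
```

If `Fraction` can represent the number exactly from its decimal string, the number is rational by definition.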
## Is the number i irrational?
The number under discussion, "i", belongs to the imaginary numbers. Hence "i" does not fall into any subset of the real numbers, making it neither rational nor irrational.
## Is 0.345345345 a rational number?
Yes; 0.345345345… is a repeating decimal, so it is rational (it equals 345/999 = 115/333).
## Is 3.456 an irrational number?
No. An irrational number is any number that cannot be written as a fraction; 3.456 is a terminating decimal (3456/1000), so it is rational.
## Is 3.142 an irrational number?
No; 3.142 is a terminating decimal and therefore rational. Examples of irrational numbers are π (the ratio of a circle's circumference to its diameter) and the square roots of most positive integers, such as √2. The fraction 22/7 = 3.142857… is only a rational approximation of π.
## Is 7.787887888… an irrational number?
Yes; its digits follow a pattern that is non-repeating and non-terminating, so it cannot be written as a ratio of two integers.
## Why is a number irrational?
In mathematics, the irrational numbers are all the real numbers which are not rational numbers. That is, irrational numbers cannot be expressed as the ratio of two integers. For example, the decimal representation of π starts with 3.14159… and never terminates or repeats.
## Is negative 7 an irrational number?
No; negative 7 (-7) is a rational number, because -7 satisfies the definition of a rational number (-7 = -7/1).
https://www.investopedia.com/ask/answers/011315/what-difference-between-yield-and-dividend.asp
A:
The dividend, or dividend rate, is the total income an investor receives from a stock or other dividend-yielding asset during the fiscal year. However, stock dividends are often quoted instead, using another figure: the dividend yield. The yield is calculated by taking the total annual dividends and dividing that figure by the current share price.
Dividend rates are expressed as an actual dollar amount; for example, Company Y paid out an annual dividend rate of \$5. When this dollar amount is quoted in terms of dollar amount per share, it may also be referred to as dividend per share, or DPS. You can see the accounting history of a company's dividend payments in the investor relations portion of most websites.
There are also other kinds of dividends. Some companies choose to pay out dividends in the form of extra stock or even property. Companies may do this when they decide they want to pay out dividends but need to hold on to some extra cash for liquidity or expansion.
The dividend yield is quoted as a percentage rather than a dollar amount. You are more likely to see the dividend yield quoted than the dividend rate. The initial reason for this makes sense; a company that pays out dividends at a higher percentage of its share price is offering a greater return for its shareholders' investments. It is better to receive \$3 in dividends on a \$50 stock than \$5 in dividends on a \$100 stock, because the investor could ostensibly just purchase two of the \$50 shares and receive \$6 in dividends that way. The dividend yield tells you the most efficient way to earn a return.
Unfortunately, the calculation for dividend yield presents some problems. Dividend yields can vary wildly, so the calculated yield may actually have little bearing on what the future rate of return (ROR) will be. Additionally, dividend yields are inversely related to share price; a rise in yield may be a bad thing if it only occurs because the company's stock price is plummeting.
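The yield calculation described above can be sketched in a few lines (the numbers are the article's own illustrative example, not real quotes):

```python
def dividend_yield(annual_dividends_per_share, share_price):
    """Dividend yield = total annual dividends / current share price."""
    return annual_dividends_per_share / share_price

# $3 of dividends on a $50 stock beats $5 of dividends on a $100 stock:
print(dividend_yield(3, 50))   # 0.06, i.e. 6%
print(dividend_yield(5, 100))  # 0.05, i.e. 5%
```

The same function also shows the inverse relationship noted above: holding the dividend fixed, a falling share price mechanically raises the yield.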
http://www.mathworks.com/matlabcentral/cody/problems/661-spot-the-outlier/solutions/218739
Cody
# Problem 661. Spot the outlier
Solution 218739
Submitted on 19 Mar 2013 by Ted
### Test Suite
Test Status Code Input and Output
1 Pass
%% pts = [0 1; 0 2; 3 2; 0 3; 0 4 ]; outlier = 3; assert(isequal(spot_the_outlier(pts),outlier))
``` slope = -Inf b = NaN b = NaN NaN b = NaN NaN Inf b = Inf ```
2 Pass
%% pts = [10 -1;7 0;9.5 0.3;9 1.6;8.5 2.9]; outlier = 2; assert(isequal(spot_the_outlier(pts),outlier))
``` slope = -2.6000 b = 25 b = 25.0000 18.2000 b = 25.0000 18.2000 25.0000 b = 25 ```
3 Pass
%% pts = [-0.6 -6;-0.2 0;0 3;-0.8 -9;-2 1;-0.4 -3]; outlier = 5; assert(isequal(spot_the_outlier(pts),outlier))
``` slope = 15 b = 3 b = 3 3 b = 3 3 3 b = 3 ```
4 Pass
%% pts = [2 5;0 4;0 0;4 6;-2 3]; outlier = 3; assert(isequal(spot_the_outlier(pts),outlier))
``` slope = 0.5000 b = 4 b = 4 4 b = 4 4 0 b = 4 ```
5 Pass
%% pts = [1 0; 0 1; 1 2; 1.5 2.5; 2 3; 3 4 ]; outlier = 1; assert(isequal(spot_the_outlier(pts),outlier))
``` slope = 1 b = -1 b = -1 1 b = -1 1 1 b = 1 ```
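For readers without MATLAB, one way to solve the problem is to drop each point in turn and test whether the remaining points are collinear. The following is an independent Python sketch (not the submitted MATLAB solution); it uses a cross-product test so vertical lines need no special casing, and returns a 1-based index as the Cody tests expect:

```python
def spot_the_outlier(pts):
    """Return the 1-based index of the one point not on the common line."""
    n = len(pts)
    for i in range(n):
        rest = [p for j, p in enumerate(pts) if j != i]
        (x0, y0), (x1, y1) = rest[0], rest[1]
        # Collinearity via cross product: (x1-x0)(y-y0) - (y1-y0)(x-x0) == 0
        if all(abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) < 1e-9
               for x, y in rest):
            return i + 1
    return None

print(spot_the_outlier([(0, 1), (0, 2), (3, 2), (0, 3), (0, 4)]))  # 3
```

The small tolerance absorbs floating-point noise in test cases like the (9.5, 0.3) one above.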
http://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00701.html
Re: Solving Tanh[x]=Tanh[a]Tanh[b x + c]
• To: mathgroup at smc.vnet.net
• Subject: [mg83545] Re: Solving Tanh[x]=Tanh[a]Tanh[b x + c]
• From: dh <dh at metrohm.ch>
• Date: Thu, 22 Nov 2007 04:58:10 -0500 (EST)
• References: <fhu7h8\$79u\$1@smc.vnet.net>
Hi Yaroslav,
You can prevent the "uphill battle" by, e.g., temporarily writing
Exp[a]Exp[b] as {Exp[a],Exp[b]}, then doing what you want, and finally
eliminating the braces. E.g.:
... //.{Exp[a__+b_]->{Exp[a],Exp[b]},Exp[2x]->x,{a_,b_}->a b}
hope this helps, Daniel
Yaroslav Bulatov wrote:
> I'd like to use Mathematica to show that solution of Tanh[x] - Tanh[a]
> Tanh[b x + c]=0 can be written as
> 1/2 Log (Root[c1 x^(1+b) + c2 x^b + c3 x -1]) for certain coefficients
> c1,c2,c3 when b is a positive integer
>
> Tanh[x] - Tanh[a] Tanh[b x + c]// TrigToExp // Together // Numerator
> gives me almost what I need, except now I need to factor out Exp[2x]
> as a separate variable. What's the best way of achieving it? Using
> syntactic replacement rules like {Exp[a_+b_]->Exp[a]Exp[b],Exp[2x]->x}
> seems like an uphill battle against the evaluator which automatically
> simplifies Exp expressions
>
> Yaroslav
>
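As a cross-check of the claim (done in Python/SymPy rather than Mathematica, with placeholder symbols A = Tanh[a], C = E^(2c), u = E^(2x) and a concrete positive integer b = 3), the substitution u = E^(2x) does turn the numerator into the claimed sparse polynomial in u:

```python
import sympy as sp

# u = e^(2x), A = tanh(a), C = e^(2c) are assumed placeholder symbols.
u, A, C = sp.symbols('u A C')
b = 3  # any positive integer exponent

lhs = (u - 1) / (u + 1)                    # tanh(x) rewritten via u = e^(2x)
rhs = A * (C * u**b - 1) / (C * u**b + 1)  # tanh(a) * tanh(b x + c)

num = sp.expand(sp.numer(sp.together(lhs - rhs)))
poly = sp.Poly(num, u)
print(poly.degree())  # b + 1, matching c1*u^(1+b) + c2*u^b + c3*u + const
```

Expanding by hand gives C(1 - A)u^(b+1) - C(1 + A)u^b + (1 + A)u - (1 - A): only the four monomials in the claimed form survive (the intermediate powers of u all cancel), up to an overall scaling that normalizes the constant term to -1.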
https://www.bhaklol.com/2019/06/sbi-po-review-analysis-8-june-2019.html
# SBI PO 2019 Exam Review, Analysis & Questions Asked: 8th June 2019
SBI PO Review Prelims 2019
Dear Students, the SBI PO 2019 Prelims online examination is scheduled today, 8th June 2019 (Saturday), in four shifts. The first slot was conducted from 9 AM to 10 AM. We will share the detailed exam review, analysis and questions asked shift-wise as each shift is completed.
SBI PO Prelims Exam Analysis 2019(8th June 2019 - Shift 1Timing-9AM to 10AM):
SBI PO Prelims Exam Analysis 2019 for Reasoning Ability(8th June 2019 - Shift 1):
| Topics of Reasoning Ability | No. of Qs | Level |
| --- | --- | --- |
| Syllogism | 4 | Easy to Moderate |
| Puzzles and Seating Arrangement (Floor Puzzle, North-South, Triangle and Box Puzzles) | 19 | Moderate |
| Inequalities | | |
| Machine Input | | |
| Coding-Decoding | 3 | Easy to Moderate |
| Blood Relations | 4 | Easy to Moderate |
| Ranking | | |
| Direction Sense Test | 3 | Easy to Moderate |
| Logical Reasoning | | |
| Seating Arrangement | | |
| Alphabet Test | 2 | Easy to Moderate |
| Number Series | | |
| Total | 35 Qs | Easy to Moderate |
SBI PO Prelims Exam Analysis 2019 for Quantitative Aptitude(8th June 2019 - Shift 1):
| Topics of Quantitative Aptitude | Difficulty Level | No. of Questions |
| --- | --- | --- |
| Quadratic Equations | Easy to Moderate | 5 |
| Simplifications | | |
| Approximations | | |
| Number Series (Missing Type) | Moderate | 5 |
| Data Interpretation (2 DI: 1 bar graph, 1 table; and 1 caselet) | Moderate | 15 |
| Mensuration | | |
| Height & Distances | | |
| Problems on Ages | | |
| Problems on Number | | |
| Problems on Train | Moderate | 1 |
| Time & Work | Moderate | 1 |
| Time, Speed & Distance | | 1 |
| Average | | |
| Percentage | Moderate | 1 |
| Ratio & Proportion | | |
| Profit & Loss | Moderate | 1 |
| Partnership | Moderate | 1 |
| Pipes & Cistern | Moderate | 1 |
| Boats & Streams | Moderate | 1 |
| Mixture | | |
| Permutation & Combination | | |
| Probability | | |
| Simple Interest & Compound Interest | Moderate | 2 |
| Total | Moderate | 35 Questions |
SBI PO Prelims Exam Analysis 2019 for English Language(8th June 2019 - Shift 1):
| Topics of English Language | No. of Questions | Level |
| --- | --- | --- |
| Cloze Test | 5 | Easy to Moderate |
| RC | 5 | Moderate |
| Match the Following (sentence formation type) | 2 | Easy to Moderate |
| Fill in the Blanks (single type) | 7 | Easy to Moderate |
| Phrase Replacement | | |
| Spotting the Errors | 6 | Easy to Moderate |
| Connectors | | |
| Idioms & Phrases | | |
| Sentence Improvement | 5 | Easy to Moderate |
| Total | 30 Qs | Easy to Moderate |
Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 1Timing-9AM to 10AM):
Reasoning Ability Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 1):
will update soon!
Quantitative Aptitude Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 1):
A invests Rs. x. After 6 months, B invests Rs. (x + 400). The ratio of the total profit after 1 year to B's profit is 7:3; find the value of x. (Profits split as 12x : 6(x + 400) = 2x : (x + 400); total : B = 7 : 3 gives 2x/(x + 400) = 4/3, so x = 800.)
26,63,124,215,?
Ans-342
5,30,150,600,?
Ans-1800
15,8,9,15,32,?
Ans-82.5
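The series answers above can be verified with a quick script. The rules below are patterns inferred from the terms, stated here as assumptions:

```python
# Series 1: n^3 - 1 for n = 3, 4, 5, ...
s1 = [n**3 - 1 for n in range(3, 8)]
assert s1 == [26, 63, 124, 215, 342]

# Series 2: multiply by 6, 5, 4, 3 in turn
s2 = [5]
for m in (6, 5, 4, 3):
    s2.append(s2[-1] * m)
assert s2 == [5, 30, 150, 600, 1800]

# Series 3: multiply by 0.5, 1, 1.5, 2, 2.5 and add the same factor each time
s3 = [15.0]
f = 0.5
while len(s3) < 6:
    s3.append(s3[-1] * f + f)
    f += 0.5
assert s3 == [15, 8, 9, 15, 32, 82.5]

print(s1[-1], s2[-1], s3[-1])  # 342 1800 82.5
```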
I. x^2 = 144
II. (y + 1)^2 = 12^2
From I: x = +12 or -12.
From II: y^2 + 2y + 1 = 144, so y^2 + 2y - 143 = 0, giving y = -13 or y = 11.
Comparing the roots of I and II, no relationship can be established between x and y.
General Awareness Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 1):
will update soon!
English Language Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 1):
will update soon!
SBI PO Prelims Exam Analysis 2019(8th June 2019 - Shift 2Timing-11.30AM to 12.30PM):
SBI PO Prelims Exam Analysis 2019 for Reasoning Ability(8th June 2019 - Shift 2):
| Topics of Reasoning Ability | Difficulty Level | No. of Questions |
| --- | --- | --- |
| Syllogism | Easy to Moderate | 5 |
| Puzzles and Seating Arrangement (circular, linear, month based and floor based) | Moderate | 21 |
| Inequalities | Easy to Moderate | 4 |
| Machine Input | | |
| Coding-Decoding | | |
| Blood Relations | | |
| Ranking | | |
| Direction Sense Test | Easy to Moderate | 3 |
| Logical Reasoning | | |
| Seating Arrangement | | |
| Alphabet Test | Easy to Moderate | 2 |
| Number Series | | |
| Total | | 35 Questions |
SBI PO Prelims Exam Analysis 2019 for Quantitative Aptitude(8th June 2019 - Shift 2):
| Topics of Quantitative Aptitude | Difficulty Level | No. of Questions |
| --- | --- | --- |
| Quadratic Equations | | |
| Simplifications | | |
| Approximations | Easy to Moderate | 5 |
| Number Series (Missing Type) | Easy to Moderate | 5 |
| Data Interpretation | Moderate | 15 |
| Mensuration | Moderate | 1 |
| Height & Distances | | |
| Problems on Ages | Moderate | 1 |
| Problems on Number | | |
| Problems on Train | | |
| Time & Work | Moderate | 1 |
| Time, Speed & Distance | Moderate | 1 |
| Average | Moderate | 1 |
| Percentage | Moderate | 1 |
| Ratio & Proportion | Moderate | 1 |
| Profit & Loss | Moderate | 1 |
| Partnership | Moderate | 1 |
| Pipes & Cistern | | |
| Boats & Streams | Moderate | 1 |
| Mixture | | |
| Permutation & Combination | | |
| Probability | | |
| Simple Interest & Compound Interest | | |
| Total | | 35 Questions |
SBI PO Prelims Exam Analysis 2019 for English Language(8th June 2019 - Shift 2):
| Topics of English Language | Level of Difficulty | No. of Questions |
| --- | --- | --- |
| Cloze Test | | |
| RC | Moderate | 8 |
| Parajumbles | Easy to Moderate | 5 |
| Fill in the Blanks (single type) | Easy to Moderate | 5 |
| Phrase Replacement | Easy to Moderate | 5 |
| Spotting the Errors | Easy to Moderate | 7 |
| Connectors | | |
| Idioms & Phrases | | |
| Sentence Improvement | | |
| Total | | 30 Questions |
Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 2Timing-11.30AM to 12.30PM):
Reasoning Ability Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 2):
will update soon!
Quantitative Aptitude Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 2):
will update soon!
General Awareness Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 2):
will update soon!
English Language Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 2):
will update soon!
SBI PO Prelims Exam Analysis 2019(8th June 2019 - Shift 3Timing-2PM to 3PM):
SBI PO Prelims Exam Analysis 2019 for Reasoning Ability(8th June 2019 - Shift 3):
| Topics of Reasoning Ability | Difficulty Level | No. of Questions |
| --- | --- | --- |
| Syllogism | Easy to Moderate | 5 |
| Puzzles and Seating Arrangement | Moderate | 20 |
| Inequalities | | |
| Machine Input | | |
| Coding-Decoding | Easy to Moderate | 5 |
| Blood Relations | | |
| Ranking | | |
| Direction Sense Test | Moderate | 3 |
| Logical Reasoning | | |
| Seating Arrangement | | |
| Alphabet Test | Easy to Moderate | 2 |
| Number Series | | |
| Total | | 35 Questions |
SBI PO Prelims Exam Analysis 2019 for Quantitative Aptitude(8th June 2019 - Shift 3):
| Topics of Quantitative Aptitude | Difficulty Level | No. of Questions |
| --- | --- | --- |
| Quadratic Equations | Easy to Moderate | 5 |
| Simplifications | | |
| Approximations | Easy to Moderate | 5 |
| Number Series | | |
| Data Interpretation | Moderate | 15 |
| Mensuration | Moderate | 1 |
| Height & Distances | | |
| Problems on Ages | Easy to Moderate | 1 |
| Problems on Number | Easy to Moderate | 1 |
| Problems on Train | Moderate | 1 |
| Time & Work | Moderate | 1 |
| Time, Speed & Distance | Moderate | 1 |
| Average | Moderate | 1 |
| Percentage | Moderate | 1 |
| Ratio & Proportion | Moderate | 1 |
| Profit & Loss | Moderate | 1 |
| Partnership | | |
| Pipes & Cistern | | |
| Boats & Streams | | |
| Mixture | | |
| Permutation & Combination | | |
| Probability | | |
| Simple Interest & Compound Interest | | |
| Total | | 35 Questions |
SBI PO Prelims Exam Analysis 2019 for English Language(8th June 2019 - Shift 3):
| Topics of English Language | Level of Difficulty | No. of Questions |
| --- | --- | --- |
| Cloze Test | | |
| RC | Moderate | 10 |
| Parajumbles | Easy to Moderate | 5 |
| Fill in the Blanks (single type) | Easy to Moderate | 5 |
| Phrase Replacement | | |
| Spotting the Errors | Easy to Moderate | 5 |
| Column Matching | Moderate | 5 |
| Idioms & Phrases | | |
| Sentence Improvement | | |
| Total | | 30 Questions |
Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 3, Timing-2PM to 3PM):
Reasoning Ability Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 3):
will update soon!
Quantitative Aptitude Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 3):
will update soon!
General Awareness Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 3):
will update soon!
English Language Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 3):
will update soon!
SBI PO Prelims Exam Analysis 2019(8th June 2019 - Shift 4, Timing-4.30PM to 5.30PM):
SBI PO Prelims Exam Analysis 2019 for Reasoning Ability(8th June 2019 - Shift 4):
| Topics of Reasoning Ability | Difficulty Level | No. of Questions |
| --- | --- | --- |
| Syllogism | Easy to Moderate | 5 |
| Puzzles and Seating Arrangement | Moderate | 21 |
| Inequalities | | |
| Machine Input | | |
| Coding-Decoding | | |
| Blood Relations | | |
| Weight Based Order and Ranking | Easy to Moderate | 3 |
| Direction Sense Test | Easy to Moderate | 4 |
| Logical Reasoning | | |
| Seating Arrangement | | |
| Alphabet Test | Easy to Moderate | 2 |
| Number Series | | |
| Total | | 35 Questions |
SBI PO Prelims Exam Analysis 2019 for Quantitative Aptitude(8th June 2019 - Shift 4):
| Topics of Quantitative Aptitude | Difficulty Level | No. of Questions |
| --- | --- | --- |
| Quadratic Equations | Easy to Moderate | 5 |
| Simplifications | | |
| Data Sufficiency | Moderate | 5 |
| Number Series (Wrong Type) | Easy to Moderate | 5 |
| Data Interpretation | Moderate | 10 |
| Mensuration | Moderate | 1 |
| Height & Distances | | |
| Problems on Ages | Moderate | 1 |
| Problems on Number | Moderate | 1 |
| Problems on Train | Moderate | 1 |
| Time & Work | Moderate | 1 |
| Time, Speed & Distance | Moderate | 1 |
| Average | Moderate | 1 |
| Percentage | Moderate | 1 |
| Ratio & Proportion | Easy to Moderate | 1 |
| Profit & Loss | Moderate | 1 |
| Partnership | | |
| Pipes & Cistern | | |
| Boats & Streams | | |
| Mixture | | |
| Permutation & Combination | | |
| Probability | | |
| Simple Interest & Compound Interest | | |
| Total | | 35 Questions |
SBI PO Prelims Exam Analysis 2019 for English Language(8th June 2019 - Shift 4):
| Topics of English Language | Level of Difficulty | No. of Questions |
| --- | --- | --- |
| Cloze Test | Moderate | 7 |
| RC | Moderate | 7 |
| Parajumbles | | |
| Fill in the Blanks | | |
| Phrase Replacement | | |
| Spotting the Errors | Easy to Moderate | 12 |
| Connectors | | |
| Idioms & Phrases | | |
| Sentence Replacement | Easy to Moderate | 4 |
| Total | | 30 Questions |
Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 4, Timing-4.30PM to 5.30PM):
Reasoning Ability Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 4):
will update soon!
Quantitative Aptitude Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 4):
will update soon!
General Awareness Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 4):
will update soon!
English Language Questions Asked in SBI PO 2019 PRELIMS(8th June 2019 - Shift 4):
will update soon!
SBI PO 2019 PRELIMS Expected Cut Off:
| Category | SBI PO 2019 Prelims Expected Cut Off |
| --- | --- |
| General | 55 to 65 |
| OBC | 50 to 55 |
| SC | 40 to 45 |
| ST | 30 to 35 |
https://www.intmath.com/blog/environment/whats-that-smell-the-math-of-air-quality-986
# What’s that smell? The math of air quality
By Murray Bourne, 05 Nov 2008
This Air Contaminants Table from the U.S. Department of Labor Occupational Safety & Health Administration is interesting. Here's the first few entries:
TABLE Z-1. - LIMITS FOR AIR CONTAMINANTS
| Substance | ppm | mg/m^3 |
| --- | --- | --- |
| Acetaldehyde | 200 | 360 |
| Acetic acid | 10 | 25 |
| Acetic anhydride | 5 | 20 |
| Acetone | 1000 | 2400 |
| Acetonitrile | 40 | 70 |
| Acetylene tetrabromide | 1 | 14 |
| Acrolein | 0.1 | 0.25 |
"ppm" means "parts per million" and "mg/m^3" means "milligrams per cubic meter".
There are hundreds of airborne contaminants - what are you breathing right now?
The Air Contaminants Table could be the basis for an interesting applied math lesson. It could be part of a units topic (the meaning of concentration, mass per volume, parts per million, conversion between units, metric measure, etc).
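As one concrete conversion exercise for such a lesson: for gases, ppm and mg/m^3 are related through the molar mass, using mg/m^3 = ppm × molar mass / 24.45, where 24.45 L/mol is the molar volume of an ideal gas at the assumed conditions of 25 °C and 1 atm:

```python
def ppm_to_mg_per_m3(ppm, molar_mass_g_per_mol, molar_volume_l=24.45):
    """Convert a gas concentration from ppm to mg/m^3 (25 C, 1 atm assumed)."""
    return ppm * molar_mass_g_per_mol / molar_volume_l

# Acetone (molar mass ~58.08 g/mol) at its 1000 ppm limit:
print(ppm_to_mg_per_m3(1000, 58.08))  # ~2375 mg/m^3, near the table's rounded 2400
```

Students can check several rows of the table this way and discuss why the published limits are rounded.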
First, students could find out more about the pollutants in their local area. Where do they come from? How bad is it? What can be done about it? If possible, get access to a 'real' air quality instrument (from the local council, or department of health, maybe?) and measure around the school and the local area.
They could plot the information using Excel and graph it. What are safe distances from the sources of pollution? What times of the day is it best — and worst? Posters of the results could be displayed as part of a community awareness event.
There's nothing like real, authentic data to make math more meaningful and interesting. And a key outcome is that students feel ownership for what they are discovering and learning - not like most textbook-based learning.
Aside: Singapore's air quality is generally good and certainly better than most cities in Asia. At least you can see the horizon on most days.
However, in my apartment, the fans are on almost all the time. When stopped, you can see a filthy black dust on them. Singapore is well known for having high concentrations of particulate matter (tiny particles in the 10μm range), mostly due to diesel use.
https://newsroom.unsw.edu.au/news/science-tech/numbers-reveal-government-didn%E2%80%99t-%E2%80%98play-god%E2%80%99-vietnam-draft
# Opinion The numbers reveal the government didn’t ‘play god’ with the Vietnam draft
Our expectations of randomness are often wrong, write Daniel Little and Chris Donkin.
OPINION: Former Deputy Prime Minister Tim Fischer argued in The Age that public servants “played god” with the Vietnam draft. According to Fischer, they fudged the draw.
He provided a handful of examples to illustrate this claim, including that the geographical locations of the draftees were not uniformly distributed across Australia and that some birth dates were more likely to be drawn than others.
As quoted by David Ellery in The Age:
Only four marbles came up for men born on January 1, 1946, compared to 13 marbles for men born on June 30, 1946. This is almost beyond the standard deviation [you would expect].
As Ellery states, someone born on June 30 of that year was more than three times as likely to be conscripted as a man born on January 1.
But is this evidence that the draw was manipulated?
### Eyeing the bias
The total number of men drafted was 63,740. So is it actually unusual, if we draw 63,740 dates, to find that some days of the year have a high number of draws and other days do not?
The days of the year can be numbered from 1 to 365 (or 366 on a leap year). And since we aim to find the distribution of counts (or draws) for each date, the dates are distributed according to a multinomial distribution.
This is a generalisation of a binomial distribution, which everyone is familiar with through flipping coins. The binomial distribution tells us the probability of flipping a number of heads (or a number of tails) out of a total number of flips, given the probability of flipping a head.
The probability of flipping a head can be thought of as the bias in the coin. In most coin tosses, we’d like to think that coin is fair, and that the probability of flipping a head is the same as flipping a tail (that is, 1 out of 2).
For dates in a year, the multinomial distribution has an analogous concept, which is the probability that any one of the 365 dates is drawn. For the draw to be completely fair, the probability of drawing any one date should be 1 out of 365 (in a non-leap year).
Fischer’s claim can then be considered as follows: what’s the probability of drawing any one date three times more often than another, out of 63,740 draws with an equal probability of drawing each date?
Here is a figure of the dates ordered by their draw frequency. The first thing to note is that most samples are drawn near the same amount of times (around 174 times for 63,740 draws). In this sample, the most often drawn date (September 1, drawn 211 times) is only 1.48 times more likely than the least-drawn date (December 15, drawn 143 times).
However, these two counts are over 5 standard deviations apart. The reason for this is that the standard deviation of the multinomial distribution is the square root of n x p x (1 – p).
In this case, n is large (63,740) and p is small (1/365), so the standard deviation of the multinomial distribution is 13.2. Is it odd that the highest and lowest counts were 5 standard deviations apart? Not really, even for more familiar normally distributed values, the minimum and maximum sampled values will tend to be 4 to 6 standard deviations apart.
So these results are to be expected of a fair draw. In fact, you’d expect the most and the least drawn dates to be even further apart than one standard deviation.
### Is it random?
The difference between the most sampled and least sampled dates gets larger with smaller samples. The Age reports that the aim of the draft was to "quickly raise the strength of the army from 22,500 to 37,500 troops, by calling up 4,200 youths in the last half of 1965 and 6,900 [later raised to 8,400] every subsequent year".
If we repeat the above exercise with the first year’s draft amount of 4,200, then the most sampled date (day 365) was sampled 20 times, but the least sampled date (day 103) was only sampled twice. The least sampled date was sampled 10 times less often than the most sampled date, but the probability of sampling each date was again 1 out of 365.
Why the large discrepancy?
Again, it has to do with the standard deviation of the multinomial distribution. Due to the smaller total number of draws, the standard deviation is now equal to 3.39. So the 20 and the 2 draws are again only 5.3 standard deviations apart, which is completely expected under a uniform sampling scheme.
Tim Fischer may have more information than was reported in The Age, so we can't rule out the possibility that his claim is correct. However, the difference between 13 draws for 30 June 1946 and 4 draws for 1 Jan 1946 is not as suspect as it might seem.
However, this does serve as a good example of how we humans are not very good at identifying randomness when we see it. In fact, our inability to identify randomness is what makes it possible to use statistics to detect instances of election fraud, such as in the 2009 Iranian election.
To provide another example of how our expectations of randomness might be wrong, we can ask how likely is it that the most frequently drawn date comes up 16 (or more) times, and the least frequently drawn date comes up 4 (or fewer) times.
The following graph shows the outcome of 1,000 simulated draws of 4,200 dates, each with an equal probability of being drawn. We plot how often (the frequency) the most drawn date comes up on the left, and the frequency of the least drawn date on the right.
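A bare-bones version of that simulation (pure-Python sampling of 365 equally likely dates; the authors' exact method is not stated, so the details here are assumptions) looks like this:

```python
import random

def simulate(n_sims=200, n_draws=4200, n_dates=365, seed=1):
    """Simulate fair drafts; return the max and min per-date counts per run."""
    rng = random.Random(seed)
    maxes, mins = [], []
    for _ in range(n_sims):
        counts = [0] * n_dates
        for _ in range(n_draws):
            counts[rng.randrange(n_dates)] += 1
        maxes.append(max(counts))
        mins.append(min(counts))
    return maxes, mins

maxes, mins = simulate()
# A spread like 13 draws for one date vs 4 for another is typical of a fair draw:
print(max(maxes), min(mins))
```

With a mean of about 11.5 draws per date, essentially every fair run produces some date drawn 13 or more times and some date drawn far less, which is the article's point.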
If anything, we should expect that most and least drawn dates should be more different than they actually were. Again, our expectations for what is random do not reflect the underlying true outcomes from random sampling.
Daniel Little is a Senior Lecturer in Mathematical Psychology at University of Melbourne
Chris Donkin is a Senior Lecturer in Psychology at UNSW.
This opinion piece was first published in The Conversation.
Source: https://electronics.stackexchange.com/questions/82543/designing-4-3-v-voltage-regulator/82653
# Designing 4.3 V voltage regulator
In my design (for a hand-held device) I need a regulated power supply of 4.3 V. I have 5 V voltage regulators. How do I convert this 5 V regulated supply to 4.3 V? I understand I could use a potential divider circuit, but it would not be very accurate. Is there a better way to achieve this?
• Use an adjustable voltage regulator. Commented Sep 17, 2013 at 9:22
• Just out of curiosity, why do you need 4.3 V? Commented Sep 17, 2013 at 9:35
• How much current do you need? Is there a higher (unregulated) voltage available?
– Tut
Commented Sep 17, 2013 at 10:49
Maybe a Si diode (forward voltage ca. 0.6-0.7V) is a good enough regulator. You can use it to reduce 5V to ca. 4.3V.
simulate this circuit – Schematic created using CircuitLab
You cannot use an ordinary voltage regulator (e.g. LM317) to regulate from 5 V to 4.3 V because the difference between the input and output voltage is too low. If you want to use a voltage regulator IC, it must be a low-dropout (LDO) one.
If you need a 4.3 V output at currents up to 1.5 A, use an LDO such as the LD29150PTR.
$V_I = 5V$
$V_{REF} = 1.23V$
$V_O = 4.3V$
e.g.,
$R1 = 100k\Omega$
$R2 = ?$
$V_O = V_{REF} \cdot (1+\dfrac{R1}{R2})$
$4.3V = 1.23V \cdot (1+\dfrac{100k\Omega}{R2})$
$\dfrac{4.3V}{1.23V} = 1 + \dfrac{100k \Omega}{R2}$
$3.49 - 1 = \dfrac {100k \Omega}{R2}$
$2.49 \cdot R2 = 100k\Omega$
$R2 = \dfrac{100k \Omega}{2.49}$
$R2 = 40k \Omega$
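The same arithmetic in a few lines (a sketch; in practice the result would be rounded to the nearest standard resistor value):

```python
# Feedback divider for an adjustable regulator: Vo = Vref * (1 + R1/R2)
v_ref = 1.23       # regulator reference voltage, volts
v_o = 4.3          # desired output, volts
r1 = 100e3         # ohms, chosen freely

r2 = r1 / (v_o / v_ref - 1)   # solve Vo = Vref * (1 + R1/R2) for R2
print(round(r2 / 1e3, 1))     # ~40.1 kOhm, i.e. the ~40 kOhm above
```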
The LM317 Hemal suggests is no good. Just like most common three-leg regulators (78xx), they need a few volts of input-output difference, so you'd need something like 6.5 V in.
The solution is an LDO regulator, for Low Drop-Out. They're used the same way as the 78xx, i.e. there's input-ground-output, but they only need a few hundreds of millivolts between input and output.
If you only need low current you could also use a diode for the 0.7V drop, like a 1N4001 would give you, but that voltage will increase if you have higher current, so your output voltage may decrease to as low as 4.0V.
Sounds like you're trying to charge a lithium ion battery, or trying to run a circuit directly from a lithium ion source. The answer from Curd won't cut it (excuse the terrible attempt at a pun) - at no-load there will be some current trickling through there and slowly charging your circuit up to 5V which will definitely damage such a cell.
The easiest way to go is just to use an adjustable version of your voltage regulator, for instance a 1117-adj. This part just requires a resistive divider on the output to get any output voltage. The only catch is: your input voltage needs to be at least ~1.5V higher than your output voltage.
However, there are also specialized chips for lithium ion charging, for instance the ubiquitous MCP7383x.
Basic concept.
simulate this circuit – Schematic created using CircuitLab
The resistor is there to give the 7805 (actually, a 78L05) a nice, stable load. That creates a stiff bias point for the transistor, at Vb = 5V. The emitter will be at Vb-Vbe, and will be as stiff as the bias point (which, considering that the 7805 can deliver LOTS of current at 5V, will be pretty stiff). For the 2N3904, Vbe is about 0.6V, giving Ve = 4.4V.
Source: https://www.theflatearthsociety.org/forum/index.php?topic=91282.msg2391699
The dip of the horizon
• 41 Replies
• 3716 Views
FlatAssembler
• 681
• Not a FE-er
The dip of the horizon
« on: December 26, 2022, 12:50:43 PM »
In multiple places on the Flat Earth Wiki, it is claimed that the horizon is always at your eye level. That is demonstrably not true, and the Round Earth Theory can easily explain why. If you draw a diagram...
...you will see that the angle at which you see the horizon (the dip of the horizon) is given by the formula...
$\alpha = \arccos\!\left(\frac{R}{R+h}\right) \approx \sqrt{\frac{2h}{R}}$
And that angle can be seen in two ways that come to my mind:
1) Only qualitatively: You can see the sunset twice if you watch it sitting down and then quickly stand up. The horizon fell as you stood up but the Sun stayed at your eye level.
2) Quantitatively: With a goniometer and a gyroscope. You can use a gyroscope to see exactly where your eye level is and measure the angle between your eye level and the horizon with a goniometer, and see that that formula is correct.
How does the Flat Earth Theory explain that? I suppose it does not.
Fan of Stephen Wolfram.
This is my parody of the conspiracy theorists:
https://www.theflatearthsociety.org/forum/index.php?topic=71184.0
This is my attempt to refute the Flat-Earth theory:
Curiouser and Curiouser
• 1830
Re: The dip of the horizon
« Reply #1 on: December 27, 2022, 05:37:33 PM »
(1) Please provide evidence that your thought experiment has any basis in validity.
Using your formula, the change in angle between sitting down and standing up can be calculated. I give you a generous h = 2m (maybe you were sitting on the ground and jumped high in the air).
This gives an alpha of 3.16e-07 radians.
How fast does the sun move? 2 pi radians in 24 hours; 6.28 rad/86400 sec, so let's say you're super fast and can stand up in 0.1 sec. The sun moves 7.27e-06 radians in that time.
Your "qualitative" thought experiment fails. In perfect conditions, you cannot stand up fast enough to see the sun set twice. Even if you could stand up in half the time (50 ms) and jump 4m high.
(2) I'd like to see your procedure for "You can use a gyroscope to see exactly where your eye level is and measure the angle between your eye level and the horizon with a goniometer" with the precision necessary. Please include the model number for each piece of equipment along with the required calibration method and statistical methods for assuring accuracy.
That assumes also that you can see the horizon with sufficient clarity.
And last, you also assume that light travels in a straight line. Which it does not.
ADDED: JackBlack found the error in my (1) calculation (I used the small angle approximation where I should not have). So, theoretically this should work. Any actual observations where this is documented?
« Last Edit: December 28, 2022, 01:09:13 AM by Curiouser and Curiouser »
JackBlack
• 22194
Re: The dip of the horizon
« Reply #2 on: December 27, 2022, 06:51:25 PM »
I calculated 0.000792 radians for a height of 2 m and a radius of 6371 km.
So assuming they go from their eyes level with the ground to standing up at 2 m, that gives them ~11 seconds to rise and watch the sun set again.
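JackBlack's figures can be checked with the small-angle dip approximation α ≈ √(2h/R) (a sketch; R = 6371 km and a 2 m eye height, as in the posts above):

```python
import math

R = 6_371_000.0                       # Earth radius in metres
sun_rate = 2 * math.pi / 86_400       # sun's apparent angular speed, rad/s

def dip(h):
    """Small-angle dip of the horizon (radians) for eye height h in metres."""
    return math.sqrt(2 * h / R)

alpha = dip(2.0)                      # rising from ground level to 2 m
print(round(alpha, 6))                # ~0.000792 rad, matching the post
print(round(alpha / sun_rate, 1))     # ~10.9 s to watch the sun set again
```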
FlatAssembler
• 681
• Not a FE-er
Re: The dip of the horizon
« Reply #3 on: December 28, 2022, 12:15:33 PM »
Quote from: Curiouser and Curiouser
Please include the model number for each piece of equipment along with the required calibration method and statistical methods for assuring accuracy.
Oh, for God's sake, you can use an app such as Dioptra to do that for you, on any Android phone that has a digital gyroscope in it (which is almost all phones these days).
FlatAssembler
• 681
• Not a FE-er
Re: The dip of the horizon
« Reply #4 on: December 28, 2022, 12:17:49 PM »
Quote from: Curiouser and Curiouser
Any actual observations where this is documented?
Sure, the MinutePhysics video "10 reasons why we know the Earth is round" (or something like that).
FlatAssembler
• 681
• Not a FE-er
Re: The dip of the horizon
« Reply #5 on: December 28, 2022, 12:21:48 PM »
Here you go, the reference for "You can see the sunset twice if you watch it sitting down and then quickly stand up.":
It's at 1:45.
« Last Edit: December 28, 2022, 12:25:51 PM by FlatAssembler »
FlatAssembler
• 681
• Not a FE-er
Re: The dip of the horizon
« Reply #6 on: December 30, 2022, 06:12:54 AM »
I think we have established that the dip of the horizon is a fact which can be observed with equipment most people have at home (as almost everybody these days has a phone with camera and a gyroscope). Unless Flat Earth Theory can explain that (preferably also explaining why this formula appears correct), that is to be considered a disproof of the Flat Earth Theory.
FlatAssembler
• 681
• Not a FE-er
Re: The dip of the horizon
« Reply #7 on: January 02, 2023, 04:00:29 AM »
Will you please then correct this misinformation there?
A fact of basic perspective is that the line of the horizon is always at eye level with the observer.
That's not true. The horizon falls as you climb, as can be seen in two ways I described in the OP. The Round Earth Theory can easily explain why; in fact, with basic trigonometry it predicts that the formula is:
$\alpha = \arccos\!\left(\frac{R}{R+h}\right)$
How does the Flat Earth Theory explain that? Will you finally answer that question?
FlatAssembler
• 681
• Not a FE-er
Re: The dip of the horizon
« Reply #8 on: January 06, 2023, 12:21:26 PM »
Of course, you are going to ignore the issue. What else can be expected from you?
Slemon
• Flat Earth Researcher
• 12330
Re: The dip of the horizon
« Reply #9 on: January 06, 2023, 01:49:50 PM »
Of course, you are going to ignore the issue. What else can be expected from you?
To be fair, it's an experiment that requires very specific circumstances to set up. No FEer holds that the Earth is uniformly flat - that is, they don't deny the existence of hills and dips in the land. This kind of experiment is going to be more informed by local geography than the shape of the Earth. If I look out the window, the horizon is going to be way higher than my eye level because I'm staring at a hill, should I conclude the Earth is concave?
I'm sure there are wide, flat, open areas that allow you to do this, sea level is usually the best reference, but in general this is not the compelling case you put it forward as.
Also, I'm not convinced anyone currently active can even edit the wiki.
We all know deep in our hearts that Jane is the last face we'll see before we're choked to death!
turbonium2
• 1717
Re: The dip of the horizon
« Reply #10 on: January 06, 2023, 08:48:49 PM »
And we can debate what 'EYE LEVEL' even means, beyond that. Those of us who believe Earth IS flat and not a speeding ball in endless space, say the horizon is 'eye level' because it is seen directly across from our view, and we never have to look down to see it, at ANY altitude above Earth we are. The ball Earth bunch jump on the term 'eye level', and say it is not 'directly level' with our view, just to skew the whole argument around their way.
In essence, when we see a horizon, no matter HOW HIGH ABOVE EARTH WE ARE, it IS directly seen across from us. That is the main point here. If the Earth was a ball, the horizon would NOT be seen directly across from us at any altitude above Earth, because it would be a SPHERE.
If we were rising ABOVE a sphere, the horizon would not only show an ever more pronounced curve ACROSS the Earth, but it would also be FURTHER AND FURTHER BELOW our view from above the Earth, the higher we rise above Earth.
That is why the ball Earth bunch cannot make a computer simulation of it, without making a VISIBLE curve at altitudes planes fly LOWER than, when it is seen perfectly flat across the Earth, as it ALWAYS is seen.
That's the problem they cannot resolve. In order to show Earth as a ball, when in 'space', they must start to CURVE the horizon at lower heights. When they DO that, they prove it is NOT a ball, because the horizon remains flat at those altitudes, in the REAL world!
Anyway, I don't know if you are or are not a ball Earth believer, I assume most likely it is the latter. But you've brought up a valid point here, without any agenda blinding your thoughts, so well done!
I wish more people did the same thing, but sadly, it's not the case.
JackBlack
• 22194
Re: The dip of the horizon
« Reply #11 on: January 06, 2023, 10:28:02 PM »
And we can debate what 'EYE LEVEL' even means, beyond that. Those of us who believe Earth IS flat and not a speeding ball in endless space, say the horizon is 'eye level' because it is seen directly across from our view, and we never have to look down to see it, at ANY altitude above Earth we are.
You mean it is WITHIN your view, not seen directly across.
As for any altitude, that is only the extremely limited altitudes you have been.
Even at the cruising altitude of a plane, that is still only roughly 10 km, or more honestly expressed, roughly 0.15% of the radius of Earth.
If you had a ball 1 m in diameter, that would be like looking at it from less than 1 mm above its surface.
The ball Earth bunch jump on the term 'eye level', and say it is not 'directly level' with our view, just to skew the whole argument around their way.
Quite the opposite.
FEers take an example where you cannot tell what the angle of dip to the horizon is and use that to falsely claim this means Earth can't be round.
The REers point out that is pure garbage, and when measured, there is an angle of dip to the horizon, which varies with altitude.
In essence, when we see a horizon, no matter HOW HIGH ABOVE EARTH WE ARE
Again, you see it when you are within 0.15% of the radius of Earth away from Earth.
That is basically nothing.
If the Earth was a ball, the horizon would NOT be seen directly across from us at any altitude above Earth, because it would be a SPHERE.
Why?
Because you say so?
Again, this only works if by "directly across" you do the very thing you are accusing REers of trying to skew the argument with.
That is that it must be at an angle of elevation of 0 degrees.
If instead you were doing it in the way you are trying to portray it as, then you most certainly would expect to be able to see the horizon.
Even at an altitude of 10 km above the surface, the angle of dip to the horizon would be 3.2 degrees. Well within your FOV.
As a comparison, that would be like looking at something 5.6 cm below your eye level at a distance of 1 m.
If instead we stick to more common altitudes, dropping it down to 1 km would put it equivalent to 1.8 cm. If you were at the beach, 10 m above sea level, then it would equate to 1.8 mm. You cannot tell if that is level or not.
So no, the horizon would most certainly be visible on the RE.
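The equivalents quoted above follow from the same dip approximation (a sketch; the offset on a 1 m baseline is tan(dip) × 1 m, with R = 6371 km):

```python
import math

R = 6_371_000.0  # Earth radius in metres

def dip_rad(h):
    """Approximate dip of the horizon (radians) for eye height h in metres."""
    return math.sqrt(2 * h / R)

print(round(math.degrees(dip_rad(10_000)), 1))     # ~3.2 deg at 10 km altitude
print(round(math.tan(dip_rad(10_000)) * 100, 1))   # ~5.6 cm below eye level at 1 m
print(round(math.tan(dip_rad(1_000)) * 100, 1))    # ~1.8 cm at 1 km
print(round(math.tan(dip_rad(10)) * 1000, 1))      # ~1.8 mm at 10 m
```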
If we were rising ABOVE a sphere, the horizon would not only show an ever more pronounced curve ACROSS the Earth, but it would also be FURTHER AND FURTHER BELOW our view from above the Earth, the higher we rise above Earth.
How many times must it be explained that the horizon is a curve? It is a circle. What you are doing now is like picking up a hula hoop (or any other ring), looking at it as close as possible to being in the plane of the circle, and boldly proclaiming it isn't curved.
If you want to see it as a curve, you need a very large portion of the horizon taking up a relatively small amount of your FOV.
And it is observed to get lower with increasing altitude.
when it is seen perfectly flat across the Earth, as it ALWAYS is seen.
A flat circle, just like you would expect for a RE, and nothing like what you would expect for a FE.
That's the problem they cannot resolve. In order to show Earth as a ball, when in 'space', they must start to CURVE the horizon at lower heights. When they DO that, they prove it is NOT a ball, because the horizon remains flat at those altitudes, in the REAL world!
And more dishonest BS.
The horizon is curved. What matters is the angle you look at it from.
Anyway, I don't know if you are or are not a ball Earth believer, I assume most likely it is the latter. But you've brought up a valid point here, without any agenda blinding your thoughts, so well done!
I wish more people did the same thing, but sadly, it's not the case.
Well nothing is stopping you from trying, except your own agenda or irrationally attacking the RE.
turbonium2
• 1717
Re: The dip of the horizon
« Reply #12 on: January 07, 2023, 05:28:29 AM »
How many times must it be explained that the horizon is a curve? It is a circle. What you are doing now is like picking up a hula hoop (or any other ring), looking at it as close as possible to being in the plane of the circle, and boldly proclaiming it isn't curved.
No, your ball Earth is a sphere, which means it must have a completely curved surface, everywhere you are on that surface, it is always curved, in every direction you look at it, whether ON it, or ABOVE it, looking down to it below you.
A horizon seen on a ball Earth would always be curved. What we will see is around from one point, stationary, outward, which GOES AROUND FROM ONE POSITION, moving CIRCULAR around that one position, FROM the Earth's surface.
What ELSE do you EXPECT it would be, if looking outward from one position, around your position? A RECTANGLE, or SQUARE, maybe a SQUIGGLY SHAPE?
It is not even an ACTUAL, PHYSICAL CIRCLE, it is the movement from one point, rotating around it.
I'm talking about a REAL, PHYSICAL CURVE, of the ball Earth's surface, which WOULD exist, WOULD be seen, as an actual curved surface of Earth.
Try to draw a sphere, or look at some sort of ball, and imagine you were on it, as a speck, and imagine what you'd see on it. Obviously, as a tiny speck on a massive ball, it wouldn't appear to be a ball, viewed over it, outward from you. You cannot tell what shape it is you are on, from that one viewpoint on it. The horizon may appear flat, while it is slightly curved, but it is too slight of a curve, to see it, or identify it, or measure it, from your position on such a massive ball.
That's what we WOULD see, if Earth WERE a massive ball, viewed outward, from one position on it.
We don't actually KNOW what it would look like, what a horizon would look like from the ground, because Earth is NOT a ball, it is actually FLAT.
And these are two completely DIFFERENT surfaces, with one being a completely curved surface, while the other one is a completely FLAT surface, so they have VISUAL differences, for one thing, among many other differences as well, but the most obvious difference they have, is what would be SEEN, if it is curved, or is flat, and NOT curved at all!
And if Earth WERE a ball, and DID have an actual RATE of 'curvature', everyone on Earth would be told, and taught, in schools, and written in our textbooks, told as ANY OTHER FACT IS STATED AS A FACT, used as ALL ACTUAL RATES are used, and followed as actual rates are, because they are VERY important to use, and work with, as actual rates, with actual measurements, are used in ALL fields today.
Nobody says 'well, we can ignore that rate, it's not a big deal, anyway'. In fact, it sounds completely STUPID just to think about someone ever SAYING it, let alone an entire FIELD of professional surveyors, who DEPEND on being accurate, not idiots who ignore something which would be fundamental to their entire profession! It's completely INSANE to even suggest they would be complete morons! Not a chance in hell, it's total BS!
DataOverFlow2022
• 4248
Re: The dip of the horizon
« Reply #13 on: January 07, 2023, 06:39:30 AM »
I'm talking about a REAL, PHYSICAL CURVE, of the ball Earth's surface, which WOULD exist, WOULD be seen, as an actual curved surface of Earth.
It’s been repeatedly explained…
Quote
Do Clouds Show Evidence of Spherical Earth?
https://www.theflatearthsociety.org/forum/index.php?topic=90800.msg2374139#msg2374139
Shrugs…
Quote
Power lines over Lake Pontchartrain elegantly demonstrate the curvature of Earth
https://www.zmescience.com/science/news-science/power-lines-curvature-earth-04233/amp/
And you ignored this too.
And the Rainy Lake Experiment.
Quote
Proof of Earth Curvature: The Rainy Lake Experiment
http://walter.bislins.ch/bloge/index.asp?page=Proof+of+Earth+Curvature%3A+The+Rainy+Lake+Experiment
Both of which don’t require impossible calmness: there are always air currents, differences in temperature, biologics splashing around, the wake of boats, and tides. These are measurements at the edge of an instrument's tolerances.
And this has been cited for you too…
Quote
There is one huge towing tank of about 500m, so long that the tank has been built following the Earth curvature - as the water surface would do - and not straight to avoid vertical position offset of models under test (about 18 cm). The second towing tank is shorter (about 220m) but it can generate controlled waves to analyze hull behavior at difference sea force levels.
https://dewesoft.com/case-studies/naval-and-marine-performance-testing-and-simulation
Quote
You've seen a flat surface with a horizon, every day on Earth. THAT is what a vast flat surface looks like, there's NO CURVE IN SIGHT, ANYWHERE AT ALL.
And it’s been explained to you the earth is big enough there is no perceivable curvature for the limited capability of our eyes.
Just like this…
Is part of this concept curve curving to the right and downhill.
Quote
Do horizons look curved to you? Do you see a slight curve to horizons, anywhere? No, they are all flat, end to end, each and every one of them.
Lots of things look one way, and are something else.
Quote
So what's actually going on here? Turns out, these bizarre natural phenomena are just an elaborate optical illusion - an illusion so good, it'd be impossible to believe it without the proper equipment.
But if you get some surveying equipment or GPS markers to actually measure the difference between the 'top' of the slope and the 'bottom', you'll realise that everything is actually in reverse.
"The embankment is sloped in a way that gives you the effect that you are going uphill," materials physicist Brock Weiss from Pennsylvania State University told Discoveries and Breakthroughs in Science back in 2006.
"You are, indeed, going downhill, even though your brain gives you the impression that you're going uphill."
Again…
Experiments that prove your perspective crap doesn’t explain why the sun becomes physically blocked from view at sunset, where binoculars/zooming cannot bring it back into view.
Horizon did not block duck from view
https://www.theflatearthsociety.org/forum/index.php?topic=90722.0
Notice I post in the context of actual experimentation. You only offer opinionated BS.
Stash
• Ethical Stash
• 13398
• I am car!
Re: The dip of the horizon
« Reply #14 on: January 07, 2023, 06:41:06 AM »
A horizon seen on a ball Earth would always be curved.
What is the size of your flat earth?
Here's how it works on our globe:
turbonium2
• 1717
Re: The dip of the horizon
« Reply #15 on: January 07, 2023, 07:23:24 AM »
What height would the arc be in the middle of a 200 mile long horizon, as seen from planes?
According to YOUR rate of 'curvature', it would be quite a pronounced arc in the middle of that horizon, right?
But the horizon is entirely flat, there is NO arc at all to be seen, in the slightest.
Why is there no arc at all? It WOULD be there, if 'curvature' existed, no?
Your side has tried to show a 'simulation' of how the horizon appears flat, from the ground, but slowly, gradually appears as a curve, before it looks like a ball, in 'space'.
I freeze framed the clip, as it began to show a curve. It was lower than PLANES fly at!
That showed it was complete BS, right there. This is why NASA never showed the horizon starting to 'curve', from a rocket, too. It doesn't work at all.
One must show a curving horizon before the ball Earth in 'space' shows up, but it cannot be done. Because it is NOT a ball, and is NOT curved.
If one KEEPS a flat horizon, at greater altitudes, it must SUDDENLY start to curve, which looks even MORE ridiculous!
Look for yourself, if you wish
DataOverFlow2022
• 4248
Re: The dip of the horizon
« Reply #16 on: January 07, 2023, 07:33:06 AM »
What height would the arc be in the middle of a 200 mile long horizon, as seen from planes?
If the earth is flat, why does a relatively small change in height increase the distance to the horizon…
Quote
Distance to the Horizon
https://aty.sdsu.edu/explain/atmos_refr/horizon.html
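The geometric result behind that link: ignoring refraction, the distance to the horizon is d ≈ √(2Rh), which grows with the square root of eye height (a sketch):

```python
import math

R = 6_371_000.0  # Earth radius in metres

def horizon_distance(h):
    """Approximate geometric distance to the horizon (metres), eye height h in metres."""
    return math.sqrt(2 * R * h)

for h in (2, 100, 10_000):                          # standing, a cliff, a plane
    print(h, round(horizon_distance(h) / 1000, 1))  # distance in km: ~5, ~35.7, ~357
```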
« Last Edit: January 07, 2023, 07:38:21 AM by DataOverFlow2022 »
Stash
• Ethical Stash
• 13398
• I am car!
Re: The dip of the horizon
« Reply #17 on: January 07, 2023, 07:46:48 AM »
What height would the arc be in the middle of a 200 mile long horizon, as seen from planes?
If you truly are a truthseeker, do the math.
What is the size of your flat earth?
« Last Edit: January 07, 2023, 07:51:35 AM by Stash »
Copper Knickers
• 901
Re: The dip of the horizon
« Reply #18 on: January 07, 2023, 12:03:39 PM »
What height would the arc be in the middle of a 200 mile long horizon, as seen from planes?
According to YOUR rate of 'curvature', it would be quite a pronounced arc in the middle of that horizon, right?
But the horizon is entirely flat, there is NO arc at all to be seen, in the slightest.
Why is there no arc at all? It WOULD be there, if 'curvature' existed, no?
No, it wouldn't be there. Over water, the horizon is the same distance away in all directions and the same height in all directions. In that sense, it is indeed flat.
When viewing from an increasing height it will at some point become apparent that you are looking at the edge of a circle, but the horizon will still be the same height in all directions. There is no reason why one direction would look different from any other.
This is entirely consistent with a round earth. A flat earth wouldn't have a distinct horizon at all.
JackBlack
• 22194
Re: The dip of the horizon
« Reply #19 on: January 07, 2023, 01:56:51 PM »
No, your ball Earth is a sphere, which means it must have a completely curved surface, everywhere you are on that surface, it is always curved, in every direction you look at it, whether ON it, or ABOVE it, looking down to it below you.
It being a curved surface does not mean that you will see that curve in every direction you look the exact way you claim.
A horizon seen on a ball Earth would always be curved.
And it is, it is a circle surrounding us.
Conversely, a FE would have no horizon.
What we will see is around from one point, stationary, outward, which GOES AROUND FROM ONE POSITION, moving CIRCULAR around that one position, FROM the Earth's surface.
And that is exactly what we do see.
From standing in one direction, looking towards the horizon, we can follow it in a curve going around us in a circular fashion, to get back to where it started.
We see exactly what is expected for a RE.
I'm talking about a REAL, PHYSICAL CURVE, of the ball Earth's surface, which WOULD exist, WOULD be seen, as an actual curved surface of Earth.
Which has been seen, countless times. We even have photos of it.
Try to draw a sphere, or look at some sort of ball, and imagine you were on it, as a speck, and imagine what you'd see on it. Obviously, as a tiny speck on a massive ball, it wouldn't appear to be a ball, viewed over it, outward from you. You cannot tell what shape it is you are on, from that one viewpoint on it. The horizon may appear flat, while it is slightly curved, but it is too slight of a curve, to see it, or identify it, or measure it, from your position on such a massive ball.
That's what we WOULD see, if Earth WERE a massive ball, viewed outward, from one position on it.
i.e. what we see? There you go admitting you are entirely wrong yet again. You make big grand claims about how it should so easily see it curved, only to then admit you should not be able to see it easily due to how close you are to Earth.
But no, the horizon would be flat.
How many times must this be repeated?
A circle is flat. It is 2D, it MUST be flat.
The horizon on Earth is the intersection of Earth's curved surface with a plane. That plane makes the horizon flat.
But that doesn't mean it doesn't curve.
It is a circle, centred on a point below you. So it curves around you, being the same distance in all directions.
What changes is the angle you see the curve from.
If you look a ring or hoop or anything like that, from inside, at basically the centre, it will look like a line curving around you. There will not be any noticeable drop from the middle of your vision to the side.
In order to see that you need to get above the ring.
We don't actually KNOW what it would look like, what a horizon would look like from the ground, because Earth is NOT a ball, it is actually FLAT.
We most certainly do know, as Earth is round. Again, a FE wouldn't have a horizon other than from the very edge of that FE.
But regardless, we don't need Earth to be round to know what it looks like.
And these are two completely DIFFERENT surfaces, with one being a completely curved surface, while the other one is a completely FLAT surface, so they have VISUAL differences, for one thing, among many other differences as well
That is right, and the most relevant difference for this discussion is the curved surface produces a near horizon. It obstructs the view to more distant objects. And the angle of dip to the horizon will increase with altitude.
Conversely a flat surface would produce no horizon other than the actual edge. It would not obstruct the view to more distant parts of the surface.
So what we see is entirely consistent with a curved surface, not a flat one.
And if Earth WERE a ball, and DID have an actual RATE of 'curvature', everyone on Earth would be told, and taught, in schools, and written in our textbooks, told as ANY OTHER FACT IS STATED AS A FACT, used as ALL ACTUAL RATES are used, and followed as actual rates are, because they are VERY important to use, and work with, as actual rates, with actual measurements, are used in ALL fields today.
And more delusional BS.
We don't need to be told the RATE of curvature.
Why should people be told it as a rate (which you probably actually mean the drop over distance rather than rate of curvature), rather than just the radius?
As for how they are used in fields today, do you mean how only if they would make a significant impact are they used?
Where for example, the variation in density of air with altitude would be used for the calculations of altitude in air, as it varies significantly; but the same is not done for depth in water as the density does not vary significantly?
The real world is not about trying to do everything perfectly.
Instead, it is about doing it to the required level of accuracy and precision.
If you have something simple that will do that, use it. If it is too inaccurate or too imprecise, use something more accurate and more precise.
Even you do the same.
Look at your comparison of a flat earth vs a round Earth.
You talk about them having a completely flat surface vs a completely round surface. Yet we both know that is not the case.
It doesn't matter if you want to have a flat or round Earth, it will still have mountains and valleys and plateaus and so on.
A round Earth can still have flat portions and a flat Earth can still have round portions.
But you ignore them, because they aren't important to the overall argument.
Nobody says 'well, we can ignore that rate, it's not a big deal, anyway'. In fact, it sounds completely STUPID just to think about someone ever SAYING it.
Quite the opposite.
It is incredibly stupid to think no one would say that.
If something is so small and insignificant that it would not make an impact on the overall goal, it is often ignored.
It's completely INSANE to even suggest they would be complete morons! Not a chance in hell, it's total BS!
Yet you do repeatedly.
So does that mean you are insane and are spouting total BS?
JackBlack
• 22194
Re: The dip of the horizon
« Reply #20 on: January 07, 2023, 02:10:09 PM »
What height would the arc be in the middle of a 200 mile long horizon, as seen from planes?
You mean looking through a window which will restrict your FOV, at a small portion of the horizon?
If you are at cruising altitude of 10 km, that would be equivalent to being 0.8 mm away from a ball with a diameter of 1 m.
The horizon would be a circle of radius ~357 km a distance of ~20 km below you.
If you took a hula hoop with a 1 m diameter, that would be like looking at it with your eye in the middle from 2.8 cm above the ring, with something in front of you to restrict your view to a small portion of it.
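The numbers in this post can be reproduced with a short script (a sketch, not from the thread; it assumes a spherical Earth of radius 6371 km and ignores atmospheric refraction):

```python
import math

def horizon_geometry(h_km, R_km=6371.0):
    """Horizon-circle geometry for an observer h_km above a sphere of radius R_km.

    Returns (slant distance to the horizon, radius of the horizon circle,
    depth of the horizon circle below the observer, geometric dip in degrees).
    """
    d = math.sqrt(2 * R_km * h_km + h_km ** 2)             # slant distance to the horizon
    r_circle = R_km * d / (R_km + h_km)                    # radius of the horizon circle
    depth = (2 * R_km * h_km + h_km ** 2) / (R_km + h_km)  # how far below the observer it sits
    dip_deg = math.degrees(math.acos(R_km / (R_km + h_km)))  # dip below eye level
    return d, r_circle, depth, dip_deg

d, r, depth, dip = horizon_geometry(10.0)  # 10 km cruising altitude
# Horizon roughly 357 km away, about 20 km below the observer, dipped about 3.2 degrees.
print(f"{d:.0f} km away, circle radius {r:.0f} km, {depth:.0f} km below, dip {dip:.1f} deg")

# Scaled to a hula hoop of 0.5 m radius, the eye sits depth/r_circle * 0.5 m above its plane:
print(f"hoop model: eye {100 * 0.5 * depth / r:.1f} cm above the ring")
```

The same scaling reproduces the ball comparison: 10 km of altitude is 10/6371 of Earth's radius, i.e. about 0.8 mm on a ball of 0.5 m radius.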
According to YOUR rate of 'curvature', it would be quite a pronounced arc in the middle of that horizon, right?
No. Yet again you are appealing to the great circle of Earth. That curvature you are appealing to is hidden by the horizon.
But the horizon is entirely flat, there is NO arc at all to be seen, in the slightest.
Again, BS.
The horizon is flat, a flat circle curving around a point below you.
Why is there no arc at all? It WOULD be there, if 'curvature' existed, no?
There is an arc there. If there wasn't you wouldn't be able to follow it around you in a circle.
Your side has tried to show a 'simulation' of how the horizon appears flat, from the ground, but slowly, gradually appears as a curve, before it looks like a ball, in 'space'.
I freeze framed the clip, as it began to show a curve. It was lower than PLANES fly at!
Yet you don't even bother providing it, or any of the details of it.
Instead you just make the same pathetic claims.
If one KEEPS a flat horizon, at greater altitudes, it must SUDDENLY start to curve, which looks even MORE ridiculous!
Again, it is a circle. That is flat and it is curved.
So no, it keeps a flat horizon, all the way up.
Likewise, it is curved all the way up.
Once more, what changes is the orientation you are viewing the circle from.
When close to Earth, you are viewing the circle from quite close to the middle of the circle. Quite close to the plane of the circle.
But when far away, you are viewing it from far away, looking down at the circle.
FlatAssembler
• 681
• Not a FE-er
Re: The dip of the horizon
« Reply #21 on: January 08, 2023, 11:54:08 AM »
Yet again, I am not claiming you can see the horizon's curvature from an airplane. The horizon's curvature can only be seen from a much greater height. But, from an airplane, you can see the dip of the horizon with a gyroscope and goniometer or a device having both gyroscope and a camera (such as most mobile phones these days).
Fan of Stephen Wolfram.
This is my parody of the conspiracy theorists:
https://www.theflatearthsociety.org/forum/index.php?topic=71184.0
This is my attempt to refute the Flat-Earth theory:
turbonium2
• 1717
Re: The dip of the horizon
« Reply #22 on: January 14, 2023, 11:45:15 PM »
Yet you don't even bother providing it, or any of the details of it.
Instead you just make the same pathetic claims.
Again, it is a circle. That is flat and it is curved.
So no, it keeps a flat horizon, all the way up.
Likewise, it is curved all the way up.
For someone who claims the Earth is a ball, it's odd you've never once SEEN any simulations of ball Earth horizons from ground to 'space' before!
I found one in about 4 seconds, and this would be YOUR so-called 'evidence', not mine!
Take a look..
I'll ignore other problems here, and focus on the ALTITUDE indicated, and what the horizons look like at those altitudes indicated.
The first 10 seconds of the clip is all we need to look at, as you'll see.
On the ground, or 0 ft. altitude, we see a virtually flat horizon. They have a straight white line above, or along the horizon.
Now, look at the next 10 seconds or so, frame by frame, which allows you to see what altitude is indicated over that time.
By the 4 second mark, it indicates altitudes of about 9-10000 feet.
Look at the horizon shown at that altitude. There is already a slight CURVE over it, which is very obvious to see, or should be, anyway. The straight line put above it, shows how it's curving slightly from end to end. This horizon has an ARC, which is slight, but is visible.
Look at the horizon shown at 20-30000 feet, a second or 2 later. Now, the horizon shows MORE of a curve, MORE of an arc, than before, at half that altitude.
And now, look at the horizon at 40-50000 feet, which is normal cruising altitude for planes.
Of course, the horizon shows even more of a curve, which is even MORE noticeable to see, which is what we would expect to see, a horizon that shows more and more of a CURVE, at greater altitudes above the surface.
Here's the whole problem with it - REAL horizons do NOT have a curve, or an arc, at those altitudes indicated in that simulation. They are completely flat, throughout.
We know that for a fact, we can PROVE they are completely flat, at any time.
You've claimed they are completely flat, while trying to call them 'curved circles', to mislead and twist what they actually LOOK LIKE, which is a completely FLAT LINE, as seen from one viewpoint, side to side. We do NOT see any CIRCLE, or any CURVE, or any CURVING CIRCLE, when viewing horizons. We see them as flat, straight lines across the Earth, from any viewpoint. This 'circle' you call a 'curve', as a 'curved circle', is NEVER seen by us, at all.
The simulation has horizons with visible curves over them, at plane altitudes. We all KNOW that horizons are completely FLAT across at plane altitudes.
I've seen TWO simulations, one shown here, which completely fail to hold up. They show horizons with visible curves over them, at altitudes we KNOW, and SEE, are completely flat across.
I hope you don't try to dispute that. You've already said they ARE flat across, or 'flat and curved as a circle', because that sounds better to you, since it mentions both a CURVE, and a CIRCLE, even if we never see any curves or circles, we just see COMPLETELY FLAT horizons.
And that's why all of your ball Earth bunch, who have surely tried everything possible to simulate actual horizons magically transforming curves over them, at altitudes nobody has been, so nobody can prove it WRONG, which works like 'space claims' do, that nobody can ever prove wrong, and that's the whole POINT of it!
There WOULD be ACCURATE simulations of how horizons look, from above a ball Earth, at plane altitudes, if it WAS a ball Earth.
So look at these 'simulations' of horizons on a ball Earth, at plane altitudes, because THAT is how horizons WOULD look, in general. They'd CURVE over them.
Oh, I forgot, you believe that horizons DO look completely flat - they don't look like circles, or curved circles. Horizons always look completely FLAT, right? Yet you claim they are NOT completely flat, they are slightly CURVED, but appear flat, over such 'small' distances, as WE see them!
That is up to YOU to prove, to show a simulation of it, like that one above, but with flat horizons at those altitudes instead. You'd think some ball Earthers would've DONE that by now, to support their claims, but I've not found any, which is odd.......
JackBlack
• 22194
Re: The dip of the horizon
« Reply #23 on: January 15, 2023, 02:49:39 AM »
For someone who claims the Earth is a ball, it's odd you've never once SEEN any simulations of ball Earth horizons from ground to 'space' before!
There you go making more insane assumptions.
I have seen plenty, and even know how to make them.
I am yet to see any which fail to match reality.
Look at the horizon shown at that altitude. There is already a slight CURVE over it, which is very obvious to see, or should be, anyway. The straight line put above it, shows how it's curving slightly from end to end.
It is a very slight curve, which is only obvious to see due to the straight line above it.
It is also compressing a 65 degree FOV into a smaller area, and flattening it out, and not tilting down.
If you are 1 m away from the screen, 65 degrees would correspond to 1.2 m wide.
This will distort the view, making the curve more obvious.
The exact settings used to produce the image will also affect how significant the curve appears, just like the exact lens in a camera can.
It also has a very sharp horizon, compared to the horizon in planes which often has cloud cover, and a view through a plane window.
And a window also limits the FOV.
To get a 65 degree FOV from a plane window, such as the A330, with a width of 22.86 cm, you would need to be 17.9 cm from the outside of the aircraft.
Considering the wall & trim is typically around 15 cm thick, that is a significant ask, which not many people would do. So most photos wouldn't show the FOV used.
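Both figures in this post follow from the same bit of trigonometry (a sketch, not from the thread; the 22.86 cm window width and 65 degree FOV are the numbers quoted above — note the earlier "1.2 m" is a slight rounding-down of about 1.27 m):

```python
import math

def fov_width(distance_m, fov_deg):
    """Width spanned by a horizontal field of view of fov_deg at a given distance."""
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

def eye_distance(window_width_m, fov_deg):
    """Eye-to-window distance at which the window subtends fov_deg horizontally."""
    return window_width_m / (2 * math.tan(math.radians(fov_deg / 2)))

# 65 degrees viewed from 1 m corresponds to roughly 1.27 m of screen width.
print(f"{fov_width(1.0, 65):.2f} m")
# A 22.86 cm window spans 65 degrees only from about 17.9 cm away.
print(f"{eye_distance(0.2286, 65):.3f} m")
```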
And even then, have you bothered looking at photos/videos of the horizon from a plane?
Here is an example:
Notice the curve?
Here's the whole problem with it - REAL horizons do NOT have a curve, or an arc, at those altitudes indicated in that simulation.
Again, that is your claim. But the fact you see it circling you proves it has a curve.
They are completely flat, throughout.
Again, the horizon is a circle. This is a flat object which is curved.
We do NOT see any CIRCLE, or any CURVE, or any CURVING CIRCLE, when viewing horizons.
You don't see it because you don't want to.
There WOULD be ACCURATE simulations of how horizons look, from above a ball Earth, at plane altitudes, if it WAS a ball Earth.
And you are yet to demonstrate that there isn't.
That is up to YOU to prove
That isn't how the burden of proof works.
The burden of proof is on you, it is on you to demonstrate that the horizon appears as a straight line at all altitudes.
Providing evidence such as photos and videos showing the horizon from these altitudes, and producing a simulated image with comparable parameters.
turbonium2
• 1717
Re: The dip of the horizon
« Reply #24 on: January 15, 2023, 03:56:50 AM »
The curve is seen without having a straight line above it, simply edit it out of the frames, it shows curve without it.
Excuses don't help you here. A valid simulation works or it is not valid.
You've already claimed this simulation is completely wrong, not showing how it really is, so you don't HAVE any valid simulation at all. You would, if it was a ball, but it's not a ball. One cannot simulate something to be seen, that is NOT seen in reality.
Stash
• Ethical Stash
• 13398
• I am car!
Re: The dip of the horizon
« Reply #25 on: January 15, 2023, 04:58:59 AM »
Concorde at 60k feet, mach 2. No fisheye. If it were, the plane would be bowed upward nose to tail.
JackBlack
• 22194
Re: The dip of the horizon
« Reply #26 on: January 15, 2023, 12:53:39 PM »
The curve is seen without having a straight line above it, simply edit it out of the frames, it shows curve without it.
The curve is more noticeable with the straight line there.
But great job showing you will just ignore any evidence you are provided with.
I provided a picture from a plane, showing a curve.
But you just ignore it, because you aren't here seeking the truth.
You are here spouting delusional BS to try and prop up your delusional fantasy.
faded mike
• 2731
• I'm thinkin flat
Re: The dip of the horizon
« Reply #27 on: January 15, 2023, 04:50:04 PM »
I have seen and have pictures of stuff at distances that i am pretty sure should be impossible according to my investigations, including looking at topo maps. So i completely conclude we can see further than mainstream theory should allow for.
"Using our vast surveillance system, we've uncovered revolutionary new information..."
-them
theoretical formula for Earths curvature = 8 inches multiplied by (miles squared) = inches drop from straight forward
kids: say no to drugs
faded mike
• 2731
• I'm thinkin flat
Re: The dip of the horizon
« Reply #28 on: January 15, 2023, 06:04:32 PM »
It's interesting to note the curvature you guys are showing, and then the curvature we have supposedly seen from amateur weather balloons, not to mention the absolute lack of curvature i BELIEVE was visible in the military picture releases from the first high altitude stuff in the 50s, i think maybe early rockets, v2 or saturn. The curvature in these early releases and possibly lower amateur weather balloons is not near as curved as this concorde photo, and i believe uncannily similar to jb's airliner photo, though reportedly 3 or more times higher than the plane, i think 2 times higher than concorde... maybe quite off. plane 30 000 ft, concorde 50 000, balloon 100 000... not sure.
« Last Edit: January 15, 2023, 06:10:29 PM by faded mike »
"Using our vast surveillance system, we've uncovered revolutionary new information..."
-them
theoretical formula for Earths curvature = 8 inches multiplied by (miles squared) = inches drop from straight forward
kids: say no to drugs
bulmabriefs144
• 2684
Re: The dip of the horizon
« Reply #29 on: January 15, 2023, 11:01:16 PM »
On multiple places on the Flat Earth Wiki, it is being claimed that the horizon is always at your eye level. That is demonstrably not true, and the Round Earth Theory can easily explain why. If you draw a diagram...
I'm sorry, but this is nonsense.
The horizon from a plane:
The horizon on a mountain:
And a horizon on a plain.
In all cases except those doctored by NASA, the horizon is also totally level straight across.
The only thing that might change is range. I have already addressed this with my parabola theory.
The thing about parabola theory is that it accounts for greater range of vision at higher altitudes. RE absolutely does not. If you back up significantly from a mountain, despite being in the valley, you can see a horizon past it. But on a round Earth, any obstacles would curve about the curvature and you could not see horizon past a mountain.
Yet you can see the clouds ahead around the mountain.
In a RE, the sky by your own model curves under the mountains. This means you cannot see sky above the mountain.
https://globelivemedia.com/world/spain/national-lottery-draw-what-time-is-it-and-how-much-is-the-prize-in-the-valentines-lottery/
# National Lottery Draw: What time is it and how much is the prize in the Valentine’s Lottery?
This Sunday, February 14, the National Lottery holds the third of its extraordinary draws. After the 'El Niño' Lottery Draw held on January 6 and the Winter Lottery Draw on January 16, the big prizes return to the State Lottery and Betting Draw Room with the Extraordinary Valentine's Lottery Draw.
The issue for this draw consists of ten series of 100,000 tickets each, at 150 euros per ticket, that is, 15 euros per tenth. A total of 105 million euros in prizes will be distributed, 70% of the issue, across a total of 34,851 prizes.
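The issue figures are internally consistent, as a quick check shows (a sketch; all numbers come from the paragraph above):

```python
series = 10
tickets_per_series = 100_000
ticket_price_eur = 150

issue_value = series * tickets_per_series * ticket_price_eur  # total value issued
prize_pool = issue_value * 70 // 100                          # 70% goes back out as prizes

print(issue_value)             # total issue in euros (150 million)
print(prize_pool)              # 105 million euros in prizes, as stated
print(ticket_price_eur // 10)  # price of a tenth (decimo) in euros
```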
### When is the Extraordinary Valentine’s Lottery Draw held?
The draw will begin at 9:00 p.m. and will be carried out using the multiple-drum system. Two 2-digit extractions will be made, fifteen of 3 digits, five of 4 digits, one of 5 digits, and the first prize. All of them will be made with five drums that rotate simultaneously with numbers from 0 to 9.
### Valentine’s Day Lottery Draw: what are the prizes and how much does it play?
These are the top prizes distributed by the Valentine’s Lottery:
– An accumulated special prize of 15,000,000 euros to the tenth for a single fraction of one of the tickets
– A first prize (extraction of 5 figures) of 130,000 euros to the tenth (for the rest of the fractions)
– A second prize (extraction of 5 figures) of 25,000 euros to the tenth
– 50 prizes (five 4-digit extractions) of 375 euros to the tenth
– 1,500 prizes (fifteen 3-digit draws) of 75 euros to the tenth
– 2,000 prizes (two 2-digit draws) of 30 euros to the tenth.
### Valentine’s Day Lottery Draw: how much do you play in approximations?
These are the prizes in approximations that the Valentine’s Lottery distributes:
– Two approximations for the numbers before and after the first prize, from 2,400 euros to the tenth each
– Two approximations for the numbers before and after the second prize, of 1,532 euros to the tenth each.
### Valentine’s Day Lottery Draw: how much do you play in hundreds?
– Prizes for the 99 remaining numbers of the hundred of the first prize, of 75 euros to the tenth each
– Prizes for the 99 remaining numbers of the hundred of the second prize, of 75 euros to the tenth each.
### Valentine’s Day Lottery Draw: how much do you play in terminations and withdrawals?
– Prizes for tickets whose last three figures are the same and are equally arranged as those of the one who obtains the first prize, from 75 euros to the tenth each
– Prizes for tickets whose last two figures are the same and are equally arranged as those of the winner of the second prize, of 75 euros to the tenth each
– Refunds for tickets whose last figure is equal to the one that obtains the first prize, awarded with 15 euros to the tenth each
– Refunds for tickets whose last figure is equal to the one obtained in the first special one-digit extraction, awarded with 15 euros to the tenth each
– Refunds for tickets whose last digit is equal to the one obtained in the second special one-digit draw, awarded with 15 euros to the tenth each.
### The other Extraordinary Draws of the Lottery
In addition to the traditional Christmas Lottery Draw and the Lottery of 'El Niño', other extraordinary draws are held throughout the year. Among them is the Valentine's Lottery Draw, held this Sunday. The others are the Winter Draw, the Father's Day Draw, the Spring Draw, and the Holiday Draw.
Note: GLM is not responsible for errors or omissions that may exist. The only valid official list is the one provided by State Lotteries and Betting.
Melissa Galbraith is the World News reporter for Globe Live Media. She covers all the major events happening around the World. From Europe to Americas, from Asia to Antarctica, Melissa covers it all. Never miss another Major World Event by bookmarking her author page right here.
https://bestbtcxonyngj.netlify.app/delahunt37199do/trade-receivables-formula-364
Accounts receivable and accounts payable can significantly affect a company. If the term DSO / Days per Month in the formula above is not a whole number, the formula …

One such calculation, the accounts receivable turnover ratio, can help you determine how effective you are at extending credit and collecting debts from your customers.

The objectives of accounts receivable management include finding the optimum level of receivables. The calculation of the annual cost can be expressed as a formula.

Calculation inputs are the ending accounts receivable balance for the period and credit sales for the same period: DSO = (AR / credit sales) × number of days.

The calculation of the accounts receivable collection period establishes the average number of days needed to collect an amount due to the business.

Trade receivables are the accounting entry in the balance sheet of an entity which arises from the sale of goods and services by the entity to its customers on credit. Since this is an amount over which the entity has a legal claim, the customer is bound to pay it to the entity.
## Trade receivables arise due to credit sales. They are treated as an asset to the company and can be found on the balance sheet. Trade Receivables = Debtors
### The formula for accounts receivable days is: (Accounts receivable ÷ Annual revenue) × Number of days in the year = Accounts receivable days. For example, if a company has an average accounts receivable balance of \$200,000 and annual sales of \$1,200,000, then its accounts receivable days figure is:
Trade receivables are amounts billed by a business to its customers when it delivers goods or services to them in the ordinary course of business. These billings are typically documented on formal invoices, which are summarized in an accounts receivable aging report. This report is commonly used by the collections staff to collect overdue payments from customers.

Formula: Accounts receivable turnover is calculated by dividing net credit sales by the average accounts receivable for that period. The reason net credit sales are used instead of net sales is that cash sales don't create receivables. Only credit sales establish a receivable, so the cash sales are left out of the calculation. It is a helpful tool to evaluate the liquidity of receivables.

Two components of the formula are "net credit sales" and "average trade accounts receivable". It is clearly mentioned in the formula that the numerator should include only credit sales. But in examination questions, this information may not be given.
The calculation of this ratio involves averages of accounts receivable and net credit sales, which we will discuss later in this article.

Change in Receivables affects cash flow, not net income. Formula: Change in Accounts Receivable = End of Year Accounts Receivable − Beginning of Year Accounts Receivable.

Trade receivables turnover ratio (in days) covers trade receivables maturing up to 12 months.

The formula for Accounts Receivable Days is: (Accounts Receivable / Revenue) × Number of Days in Year.

The simplified approach for trade receivables and contract assets under AASB 15 Revenue will effectively develop an expected credit loss using this formula …

The accounts receivable definition is a current asset account on the balance sheet. Accounts receivable (A/R) is a mainstay concept in accounting.
## Accounts receivable (AR) are amounts owed by customers for goods and services. When accounts receivable goes up, this is considered a use of cash.
Average Receivables (the preferable calculation method) = Sum of the accounts receivable at the end of each working day ÷ Number of working days.

Average Receivables (if only monthly data are available) = Sum of the accounts receivable at the end of each month ÷ Number of months.

The formula for net credit sales is: Sales on credit − Sales returns − Sales allowances. Average accounts receivable is the sum of starting and ending accounts receivable over a time period (such as monthly or quarterly), divided by 2.

Accounts receivable, sometimes shortened to "receivables" or A/R, is money that is owed to a company by its customers. If a company has delivered products or services but not yet received payment, it's an account receivable.

The formula for Accounts Receivable Days is: (Accounts Receivable / Revenue) × Number of Days in Year. For the purpose of this calculation, it is usually assumed that there are 360 days in the year (4 quarters of 90 days). Accounts Receivable Days is often found on a financial statement projection model.
Definition, Explanation and Use: The trade receivables' collection period ratio represents the time lag between a credit sale and receiving payment from the customer. As trade receivables relate to credit sales, the credit sales figure should be used to calculate the ratio.

In the equation, "days" refers to the number of days in the period being measured (usually a year or half of a year). However, the bottom of the equation, receivables turnover, must also be calculated from other data. This requires measurement of net credit sales during the period and average accounts receivable.

The red boxes highlight the important information that we need to calculate Accounts Receivable to Sales, namely the company's current accounts receivable and its total sales.
Using the formula provided above, we arrive at the following figures: with an average accounts receivable balance of \$200,000 and annual sales of \$1,200,000, accounts receivable days = (200,000 ÷ 1,200,000) × 360 = 60 days.
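The two ratios described in this article can be put into a few lines (a sketch; the function names are mine, and the 360-day convention mentioned above is assumed as the default):

```python
def receivable_days(accounts_receivable, annual_credit_sales, days_in_year=360):
    """Accounts receivable days: (AR / annual credit sales) x days in the year."""
    return accounts_receivable / annual_credit_sales * days_in_year

def receivable_turnover(net_credit_sales, average_receivable):
    """Accounts receivable turnover: net credit sales / average receivables."""
    return net_credit_sales / average_receivable

# The worked example: $200,000 of receivables against $1,200,000 of annual sales.
print(receivable_days(200_000, 1_200_000))      # about 60 days outstanding
print(receivable_turnover(1_200_000, 200_000))  # receivables turn over 6 times a year
```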
https://phys.libretexts.org/Bookshelves/University_Physics/Radically_Modern_Introductory_Physics_Text_II_(Raymond)/13%3A_Newtons_Law_of_Gravitation/13.07%3A_Use_of_Conservation_Laws
13.7: Use of Conservation Laws
The gravitational force is conservative, so two point masses $$M$$ and $$m$$ separated by a distance $$r$$ have a potential energy:
$U=-\frac{G M m}{r}\label{13.12}$
It is easily verified that differentiation recovers the gravitational force.
The conservation of energy and angular momentum in planetary motions can be used to solve many practical problems involving motion under the influence of gravity. For instance, suppose a bullet is shot straight upward from the surface of the moon. One might ask what initial velocity is needed to ensure that the bullet will escape from the gravity of the moon. Since total energy $$E$$ is conserved, the sum of the initial kinetic and potential energies must equal the sum of the final kinetic and potential energies:
$E=K_{\text {initial }}+U_{\text {initial }}=K_{\text {final }}+U_{\text {final }}\label{13.13}$
For the bullet to escape the moon, its kinetic energy must remain positive no matter how far it gets from the moon. Since the potential energy is always negative, asymptoting to zero at infinite distance (i.e., $$U_{\text{final}} = 0$$), the minimum total energy consistent with this condition is zero. For zero total energy we have
$\frac{m v_{\text {initial }}^{2}}{2}=K_{\text {initial }}=-U_{\text {initial }}=+\frac{G M m}{R},\label{13.14}$
where $$m$$ is the mass of the bullet, $$M$$ is the mass of the moon, $$R$$ is the radius of the moon, and $$v_{\text{initial}}$$ is the minimum initial velocity required for the bullet to escape. Solving for $$v_{\text{initial}}$$ yields
$v_{\text {initial }}=\left(\frac{2 G M}{R}\right)^{1 / 2}\label{13.15}$
This is called the escape velocity. Notice that the escape velocity from a given radius is a factor of $$2^{1/2}$$ larger than the velocity needed for a circular orbit at that radius.
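As a numerical sketch of Equation \ref{13.15} (the lunar mass and radius below are standard reference values, not given in the text):

```python
import math

# Standard lunar values (assumed here; the text does not supply them).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.342e22   # mass of the moon, kg
R_moon = 1.7371e6   # radius of the moon, m

v_escape = math.sqrt(2 * G * M_moon / R_moon)   # Equation (13.15)
v_circular = math.sqrt(G * M_moon / R_moon)     # circular-orbit speed

print(f"escape velocity: {v_escape:.0f} m/s")   # about 2375 m/s
print(f"ratio: {v_escape / v_circular:.4f}")    # sqrt(2), about 1.4142
```

The ratio printed at the end is the factor of $$2^{1/2}$$ between escape and circular-orbit speeds mentioned above.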
An object is energetically bound to the sun if its kinetic plus potential energy is less than zero. In this case the object follows an elliptical orbit around the sun as shown by Kepler. However, if the kinetic plus potential energy is zero, the object follows a parabolic orbit, and if it is greater than zero, a hyperbolic orbit results. In the latter two cases the sun also resides at a focus of the parabola or hyperbola. Figure 13.8 shows a typical hyperbolic orbit. The impact parameter, defined in this figure, is the closest the object would have come to the center of the sun if it hadn’t been deflected by gravity.
Sometimes energy and angular momentum conservation can be used together to solve problems. For instance, suppose we know the energy and angular momentum of an asteroid of mass $$m$$ and we wish to infer the maximum and minimum distances of the asteroid from the sun, the so-called aphelion and perihelion distances. Since the asteroid is gravitationally bound to the sun, it is convenient to characterize the total energy by $$E_b = -E$$, the so-called binding energy. If $$v$$ is the orbital speed of the asteroid and $$r$$ is its distance from the sun, then the binding energy can be written in terms of the kinetic and potential energies:
$-E_{b}=\frac{m v^{2}}{2}-\frac{G M m}{r}\label{13.16}$
The magnitude of the angular momentum of the asteroid is $$L = m v_t r$$, where $$v_t$$ is the tangential component of the asteroid’s velocity. At aphelion and perihelion, the radial part of the velocity of the asteroid is zero and the speed equals the tangential component of the velocity, $$v = v_t$$. Thus, at aphelion and perihelion we can eliminate $$v$$ in favor of the angular momentum:
$-E_{b}=\frac{L^{2}}{2 m r^{2}}-\frac{G M m}{r} \quad \text { (aphelion and perihelion). }\label{13.17}$
This can be rearranged into a quadratic equation
$r^{2}-\frac{G M m}{E_{b}} r+\frac{L^{2}}{2 m E_{b}}=0\label{13.18}$
which can be solved to yield
$r=\frac{1}{2}\left[\frac{G M m}{E_{b}} \pm\left(\frac{G^{2} M^{2} m^{2}}{E_{b}^{2}}-\frac{2 L^{2}}{m E_{b}}\right)^{1 / 2}\right]\label{13.19}$
The larger of the two solutions yields the aphelion value of the radius while the smaller yields the perihelion.
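As a numerical check of Equation \ref{13.19} (the orbital values below are illustrative, not from the text), both roots can be verified against the energy-angular momentum relation:

```python
import math

# Illustrative values (not from the text): a sun-like M, an asteroid m,
# and its distance, speed, and tangential speed at one instant.
G, M, m = 6.674e-11, 1.989e30, 1.0e12
r0, v0, vt0 = 3.0e11, 2.0e4, 1.5e4

Eb = G * M * m / r0 - m * v0**2 / 2   # binding energy (positive: bound orbit)
L = m * vt0 * r0                      # angular momentum magnitude

disc = (G * M * m / Eb) ** 2 - 2 * L**2 / (m * Eb)
r_ap = 0.5 * (G * M * m / Eb + math.sqrt(disc))    # aphelion (larger root)
r_per = 0.5 * (G * M * m / Eb - math.sqrt(disc))   # perihelion (smaller root)

# At both radii, L^2/(2 m r^2) - GMm/r must equal -Eb, as in Equation (13.17).
for r in (r_ap, r_per):
    assert math.isclose(L**2 / (2 * m * r**2) - G * M * m / r, -Eb, rel_tol=1e-9)
```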
Equation \ref{13.19} tells us something else interesting. The quantity inside the square root cannot be negative, which means that we must have
$L^{2} \leq \frac{G^{2} M^{2} m^{3}}{2 E_{b}}\label{13.20}$
In other words, for a given value of the binding energy $$E_b$$ there is a maximum value for the angular momentum. This maximum value makes the square root zero, which means that the aphelion and the perihelion are the same — i. e., the orbit is circular. Thus, among all orbits with a given binding energy, the circular orbit has the maximum angular momentum.
13.7: Use of Conservation Laws is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
https://community.clickteam.com/threads/106284-why-this-OR-expression-wo-nt-work?s=9e99a8833f8538c187d49037813e76a8
# Thread: why this OR expression won't work
1. ## why this OR expression won't work
attached is a screenshot of the example
just doesn't work right why?
2. Hi, for line 2 you can have if EditValue is different from 3. Otherwise you could separate it like this:Screenshot (26).jpg
3. using OR in the expression editor is different to the OR condition in the event editor (the working solution @Lukiester posted)
in the expression editor, OR uses logic operators, a special way of performing simple logic checks
the OR operator checks to see if two values are at least a 1, that is:
0 OR 0 = 0
1 OR 0 = 1
0 OR 1 = 1
1 OR 1 = 1
as far as I can tell, Fusion treats numbers >1 the same as a 1 in logic operations - meaning "Edit Value == 0 OR 1 OR 2" is the same as checking "Edit Value == 1"
there are other operators as well, such as AND (which gives you a 1 if both numbers are a 1) and XOR (which gives you a 1 if one but *not* both are 1)
eg.
1 AND 1 = 1
0 AND 1 = 0
1 AND 0 = 0
0 AND 0 = 0
0 XOR 0 = 0
1 XOR 0 = 1
0 XOR 1 = 1
1 XOR 1 = 0
it's a bit confusing at first if you haven't come across them before, but they're very useful for comparing two flags in a single line in the expression editor. hope this provides a bit of an explanation
4. ## thanx Lukiester and Marbenx
thanx Lukiester for the idea but i think ill try and shortcut it a bit and remove the ors
at the end of day I've come up with this
better.jpg
don't know if right because it plays up when putting in each entry into a list with "Only one action when event loops" for each event
betternot.jpg
5. As marbenx says, it's not that kind of "or". OR in an expression is a Bitwise OR (you can also do AND, XOR & I think NOT also). It just compares 2 values at a bit level to see if the bits at the corresponding positions of either value are set & returns the result.
The best way to do what it looks like you're trying to do is to have the "Edit value ("edit box") = 1/2/4/whatever" each as separate conditions in one event, & use "or (logical)" between each one. It's in the same right click menu as the "negated" option.
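In Python terms (an analogy; Fusion's expression language is its own thing), the same trap looks like this: `|` combines the numbers bitwise before the comparison happens.

```python
value = 2

# Bitwise: 1 | 2 | 4 evaluates to 7, so this really asks "value == 7".
print(value == 1 | 2 | 4)                        # False

# What was intended: separate comparisons joined with a logical or.
print(value == 1 or value == 2 or value == 4)    # True
```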
https://www.broadstreethockey.com/2021/2/25/22300672/how-good-has-brian-elliott-been-for-his-age-flyers-goalies-stats-halak-hart-backup
# How good has Brian Elliott been for his age?
The Moose has been loose this season.
Anyone watching the Flyers this season has probably been singing the praises of Brian Elliott at one point or another. The current holder of the NHL’s “most likely to be mistaken for a high school gym teacher” award has been stellar in relief of Carter Hart this year, posting a sparkling triple slash of 2.37 GAA, .922 SV% and one shutout in five starts. Only a few seasons removed from a year when Dave Hakstol rode the Moose into the ground, it’s pretty remarkable to see this kind of standout performance, especially at his age. To find out how particularly good Elliott’s been by the numbers, I did a little digging. Here’s what I found.
## How are we comparing Elliott to others?
In order to keep this simple, I’ll really just be looking at one stat: goals saved above expectation, or GSAx. For this adventure, we’ll be using the Evolving-Hockey.com expected goals model, so keep that in mind if you want to go looking for this data yourself (fair warning that the content is behind a paywall).
What exactly is GSAx? Well, the core concept makes sense. Expected goals models attempt to quantify the average value of a shot by taking a variety of data, weighing it, and using it to determine the probability of an unblocked shot attempt becoming a goal. Most models chiefly care about shot location and distance, which makes logical sense; on average, a shot from the slot will have a better chance to go in than a shot from the point. GSAx takes this stat and compares the expected results of a goalie with the tangible ones. If the shots a goalie faces in a given game have a cumulative xG value of 2.35 and the goalie allows two goals, they’ve had a positive GSAx game. Essentially the stat quantifies if a goalie performed well in spite of their team or if a goalie had a rough night. While not the end-all, be-all (no one number is or ever will be), it’s certainly a useful tool to look at leaguewide goalie performance, and I’ll be using it today. Now, let’s have a look at how Elliott stacks up:
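The bookkeeping in that example can be sketched in a few lines (a toy illustration of the stat's arithmetic only, not Evolving-Hockey's actual model; the per-shot xG values are made up):

```python
def gsax(shot_xg_values, goals_allowed):
    """Goals saved above expectation: total xG faced minus goals allowed."""
    return sum(shot_xg_values) - goals_allowed

# The example above: shots worth a cumulative 2.35 xG, two goals against.
shots = [0.45, 0.90, 0.20, 0.80]     # hypothetical per-shot xG values
print(round(gsax(shots, 2), 2))      # 0.35 -> a positive-GSAx game
```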
## Among his contemporaries:
Using the aforementioned GSAx to help account for team success, Elliott ranks 16th across the league in the 2020-2021 season, ahead of notable names like John Gibson, Tuukka Rask and David Rittich. Among backup goalies (fewer than half of their team’s starts) Elliott is 4th in this particular stat, and legitimately he should be 3rd if we eliminate injured starter Petr Mrazek from the list. In terms of his results at even strength, Elliott actually grades out below expectation, but his work on special teams has buoyed overall performance.
## Among his comparables:
Since the start of the 2015-2016 NHL season, there have been 42 single-season occurrences of a goalie over/at the age of 35 making at least one start. That may sound like a sizable group, but when you consider that a lot of those are repeat seasons from a small collection of players (Rinne, Lundqvist) and compare that to the overall total of goalies who played at least one game over that span (535), it's really a select few. Elliott only has one year that qualifies for this group, but that's been done on purpose in an attempt to isolate his performance in the 2020-2021 season and compare it to those of a similar age.
Among all goalie seasons in that group of 42, Elliott’s 2020-2021 already ranks 15th by GSAx, well above average. Among netminders in the sample that were backups (played fewer than half the games for their team), the veteran backstop is 8th. In terms of goalies this year who have accomplished the same feat at a similar age, Elliott is only in the company of Marc-Andre Fleury (aged 36, Vegas’s starter) and Jaroslav Halak (aged 35, Boston’s backup).
In a smaller role and sample size, Elliott is putting up numbers comparable to 2015-2016 Roberto Luongo (went 35-19-6 with a 2.35 GAA, .922 SV% and four shutouts). Even with all of those caveats, that kind of performance is incredible stuff; this generation of goalies has had more success in old age than most before them (it’s not fair to compare to Terry Sawchuk’s bonkers 1966-67 season).
## Conclusions
If the elder statesman of the orange & black continues to see the ice in short, well-rested stints, there’s little reason to think he’ll regress too heavily. It’s not often that the Flyers have a fantastic backup goalie, and it gives the team a bit of relief while young Carter Hart figures out how to adjust technically to the ever-adapting scorers of the NHL. The next time the Moose steps on the ice for Philly, take a second to appreciate how historic his performance has been to start the year, and say a quick prayer to the hockey gods for his continued success.
https://mail.python.org/pipermail/python-list/2014-November/680726.html
# Understanding "help" command description syntax - explanation needed
Steven D'Aprano steve+comp.lang.python at pearwood.info
Fri Nov 7 06:47:44 CET 2014
```
Chris Angelico wrote:
> On Wed, Nov 5, 2014 at 11:31 PM, Ivan Evstegneev
> <webmailgroups at gmail.com> wrote:
>>>> That's what I'm talking about (asking actually), where do you know it
>>>> from?
>>
>>>>I know it because I've been a programmer for 39 years.
>>
>> I didn't intend to offence anyone here. Just asked a questions ^_^
>
> Don't worry about offending people. Even if you do annoy one or two,
> there'll be plenty of us who know to be patient :) And I don't think
> Larry was actually offended; it's just that some questions don't
> really have answers like that. It's like asking a
> mathematician "But how do you KNOW that 2 + 2 is 4? Where's it written
> down?"... all he can say is "It is".
An ordinary mathematician will say: "Hold up two fingers. Count them, and
you get one, two. Now hold up another two fingers. Count them, and you will
get two again. Hold them together, count the lot, and you get one, two,
three, four. Therefore, 2+2 = 4."
A good mathematician might start with the empty set, ∅ = {}. [Aside: if the
symbol looks like a small box, try changing your font -- it is supposed to
be a circle with a slash through it. Lucinda Typewriter has the glyph
for '\N{EMPTY SET}'.] That empty set represents zero. Take the set of all
empty sets, {∅} = {{}}, which represents one. Now we know how to count:
after any number, represented by some set, the *next* number is represented
by the simplest set containing the previous set.
Having defined counting, the good mathematician can define addition, and go
on to prove that 2+2 = 4. This is, essentially, a proof of Peano Arithmetic
(PA), which one can take as effectively the basic arithmetic of counting
fingers, sheep or sticks.
But a *great* mathematician will say, "Hmmm, actually, we don't *know* that
2+2 equals 4, because we cannot prove that arithmetic is absolutely
consistent. If arithmetic is not consistent, then we might simultaneously
prove that 2+2 = 4 and 2+2 ≠ 4, which is unlikely but not inconceivable."
Fields medallist Vladimir Voevodsky is a great mathematician, and he
apparently believes that the consistency of Peano Arithmetic is still an
open question.
http://m-phi.blogspot.com.au/2011/05/voevodsky-consistency-of-pa-is-open.html
Another way to look at this, not necessarily Voevodsky's approach, is to
note that the existing proofs of PA's consistency are *relative* proofs of
PA. E.g. they rely on the consistency of some other formal system, such as
the Zermelo-Frankel axioms (ZF). If ZF is consistent, so is PA, but we
don't know that ZF is consistent...
--
Steven
```
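The counting scheme described in the message can be sketched directly (a rough illustration using Python frozensets; the recursive `add` rule is the standard Peano definition, an assumption not spelled out in the email):

```python
# Zermelo-style numerals: each number is the singleton set of its predecessor.
zero = frozenset()

def succ(n):
    return frozenset({n})

def to_int(n):
    """Count how deeply nested the set is."""
    count = 0
    while n:          # the empty frozenset is falsy
        (n,) = n      # unwrap the single element
        count += 1
    return count

def add(a, b):
    """a + 0 = a;  a + succ(b) = succ(a + b)."""
    if not b:
        return a
    (pred,) = b
    return succ(add(a, pred))

two = succ(succ(zero))
four = succ(succ(two))
assert add(two, two) == four
print(to_int(add(two, two)))   # 4
```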
https://math.stackexchange.com/questions/450986/prove-midpoints-collinear
# Prove midpoints collinear
Let $ABCD$ be a convex quadrilateral and let $E$ and $F$ be the points of intersections of the lines $AB, CD$ and $AD,BC$ , respectively. Prove that the midpoints of the segments $AC$, $BD$, and $EF$ are collinear.
I tried to solve this question
assuming the opposite edges aren't parallel Let G,H, and I are midpoint of BD, AC, EF, respectively. Thus we have $[AGB]+[CGD]=\frac{1}{2}([ABD]+[BCD])=\frac{1}{2}[ABCD]$ similarity, $[AHB]+[CHD]=\frac{1}{2}([ABC]+[ACD])=\frac{1}{2}[ABCD]$ I can only do until here. I can't prove $G,H,I$ collinear. Could you help me continue my works? Thank you :D
• You're assuming the opposite edges aren't parallel. What have you tried? Jul 24 '13 at 12:45
• Another problem from this poster with no source, no motivation, and no indication of the slightest bit of effort. That's not what this website is here for. Jul 24 '13 at 12:54
• i'm sorry, Mr. Gerry Myerson. I'm a new user here and I don't know anything about the rules here. If i make a mistake, i'm so sorry. Okay, i'll add my works in this question.. Jul 24 '13 at 13:05
• This can also be solved by simple POP Aug 10 '20 at 2:36
Hint: Call $\overrightarrow{AB} = \vec x$ and $\overrightarrow{AD} = \vec y$. Find expressions for $\overrightarrow{AG}$, $\overrightarrow{AH}$, and $\overrightarrow{AI}$ as linear combinations of $\vec x$ and $\vec y$. You will need to use all the geometry in the problem, for example, writing $\overrightarrow{AE} = s\overrightarrow{AB}$ for some scalar $s$, etc.
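A quick numerical check of the claim (using illustrative coordinates and exact rational arithmetic; this verifies the result for one quadrilateral, it is not a proof):

```python
from fractions import Fraction as Fr

def line_intersect(p, q, r, s):
    # Intersection of line pq with line rs (assumed non-parallel).
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p, q, r, s
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def collinear(p, q, r):
    # Zero cross product of (q - p) and (r - p).
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

A, B, C, D = (Fr(0), Fr(0)), (Fr(6), Fr(0)), (Fr(7), Fr(4)), (Fr(2), Fr(5))
E = line_intersect(A, B, C, D)    # E = AB ∩ CD
F = line_intersect(A, D, B, C)    # F = AD ∩ BC
G, H, I = midpoint(B, D), midpoint(A, C), midpoint(E, F)
print(collinear(G, H, I))         # True
```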
https://hextobinary.com/unit/acceleration/from/mms2/to/mihmin/600
# 600 Millimeter/Second Squared in Mile/Hour/Minute
600 Millimeter/Second Squared = 80.53 Mile/Hour/Minute
## How many Mile/Hour/Minute are in 600 Millimeter/Second Squared?
The answer is 600 Millimeter/Second Squared is equal to 80.53 Mile/Hour/Minute and that means we can also write it as 600 Millimeter/Second Squared = 80.53 Mile/Hour/Minute. Feel free to use our online unit conversion calculator to convert the unit from Millimeter/Second Squared to Mile/Hour/Minute. Just simply enter value 600 in Millimeter/Second Squared and see the result in Mile/Hour/Minute.
## How to Convert 600 Millimeter/Second Squared to Mile/Hour/Minute (600 mm/s2 to mi/h/min)
By using our Millimeter/Second Squared to Mile/Hour/Minute conversion tool, you know that one Millimeter/Second Squared is equivalent to 0.13421617752326 Mile/Hour/Minute. Hence, to convert Millimeter/Second Squared to Mile/Hour/Minute, we just need to multiply the number by 0.13421617752326. We are going to use a very simple Millimeter/Second Squared to Mile/Hour/Minute conversion formula for that. Please see the calculation example given below.
$$\text{1 Millimeter/Second Squared} = \text{0.13421617752326 Mile/Hour/Minute}$$
$$\text{600 Millimeter/Second Squared} = 600 \times 0.13421617752326 = \text{80.53 Mile/Hour/Minute}$$
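The factor itself can be re-derived from first principles (this sketch assumes the international mile of 1609.344 m; it shows where the number comes from, and is not the site's own code):

```python
# 1 mi/h/min means the speed grows by one mile-per-hour every minute.
MILE_M = 1609.344                     # international mile, in meters
MPS_PER_MPH = MILE_M / 3600           # 1 mi/h expressed in m/s
MI_H_MIN_IN_MPS2 = MPS_PER_MPH / 60   # 1 mi/h/min expressed in m/s^2

factor = 0.001 / MI_H_MIN_IN_MPS2     # 1 mm/s^2 expressed in mi/h/min
print(round(factor, 14))              # 0.13421617752326
print(round(600 * factor, 2))         # 80.53
```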
## What is Millimeter/Second Squared Unit of Measure?
Millimeter/Second Squared or Millimeter per Second Squared is a unit of measurement for acceleration. If an object accelerates at the rate of 1 millimeter/second squared, that means its speed is increased by 1 millimeter per second every second.
## What is the symbol of Millimeter/Second Squared?
The symbol of Millimeter/Second Squared is mm/s2. This means you can also write one Millimeter/Second Squared as 1 mm/s2.
## What is Mile/Hour/Minute Unit of Measure?
Mile/Hour/Minute or Mile per Hour per Minute is a unit of measurement for acceleration. If an object accelerates at the rate of 1 mile/hour/minute, that means its speed is increased by 1 mile per hour every minute.
## What is the symbol of Mile/Hour/Minute?
The symbol of Mile/Hour/Minute is mi/h/min. This means you can also write one Mile/Hour/Minute as 1 mi/h/min.
## Millimeter/Second Squared to Mile/Hour/Minute Conversion Table (600-609)
| Millimeter/Second Squared [mm/s2] | Mile/Hour/Minute [mi/h/min] |
|---|---|
| 600 | 80.53 |
| 601 | 80.66 |
| 602 | 80.8 |
| 603 | 80.93 |
| 604 | 81.07 |
| 605 | 81.2 |
| 606 | 81.34 |
| 607 | 81.47 |
| 608 | 81.6 |
| 609 | 81.74 |
## Millimeter/Second Squared to Other Units Conversion Table
| Millimeter/Second Squared [mm/s2] | Output |
|---|---|
| 600 millimeter/second squared in meter/second squared | 0.6 |
| 600 millimeter/second squared in attometer/second squared | 600000000000000000 |
| 600 millimeter/second squared in centimeter/second squared | 60 |
| 600 millimeter/second squared in decimeter/second squared | 6 |
| 600 millimeter/second squared in dekameter/second squared | 0.06 |
| 600 millimeter/second squared in femtometer/second squared | 600000000000000 |
| 600 millimeter/second squared in hectometer/second squared | 0.006 |
| 600 millimeter/second squared in kilometer/second squared | 0.0006 |
| 600 millimeter/second squared in micrometer/second squared | 600000 |
| 600 millimeter/second squared in nanometer/second squared | 600000000 |
| 600 millimeter/second squared in picometer/second squared | 600000000000 |
| 600 millimeter/second squared in meter/hour squared | 7776000 |
| 600 millimeter/second squared in millimeter/hour squared | 7776000000 |
| 600 millimeter/second squared in centimeter/hour squared | 777600000 |
| 600 millimeter/second squared in kilometer/hour squared | 7776 |
| 600 millimeter/second squared in meter/minute squared | 2160 |
| 600 millimeter/second squared in millimeter/minute squared | 2160000 |
| 600 millimeter/second squared in centimeter/minute squared | 216000 |
| 600 millimeter/second squared in kilometer/minute squared | 2.16 |
| 600 millimeter/second squared in kilometer/hour/second | 2.16 |
| 600 millimeter/second squared in inch/hour/minute | 5102362.2 |
| 600 millimeter/second squared in inch/hour/second | 85039.37 |
| 600 millimeter/second squared in inch/minute/second | 1417.32 |
| 600 millimeter/second squared in inch/hour squared | 306141732.28 |
| 600 millimeter/second squared in inch/minute squared | 85039.37 |
| 600 millimeter/second squared in inch/second squared | 23.62 |
| 600 millimeter/second squared in feet/hour/minute | 425196.85 |
| 600 millimeter/second squared in feet/hour/second | 7086.61 |
| 600 millimeter/second squared in feet/minute/second | 118.11 |
| 600 millimeter/second squared in feet/hour squared | 25511811.02 |
| 600 millimeter/second squared in feet/minute squared | 7086.61 |
| 600 millimeter/second squared in feet/second squared | 1.97 |
| 600 millimeter/second squared in knot/hour | 4198.7 |
| 600 millimeter/second squared in knot/minute | 69.98 |
| 600 millimeter/second squared in knot/second | 1.17 |
| 600 millimeter/second squared in knot/millisecond | 0.0011663067 |
| 600 millimeter/second squared in mile/hour/minute | 80.53 |
| 600 millimeter/second squared in mile/hour/second | 1.34 |
| 600 millimeter/second squared in mile/hour squared | 4831.78 |
| 600 millimeter/second squared in mile/minute squared | 1.34 |
| 600 millimeter/second squared in mile/second squared | 0.0003728227153424 |
| 600 millimeter/second squared in yard/second squared | 0.65616797900262 |
| 600 millimeter/second squared in gal | 60 |
| 600 millimeter/second squared in galileo | 60 |
| 600 millimeter/second squared in centigal | 6000 |
| 600 millimeter/second squared in decigal | 600 |
| 600 millimeter/second squared in g-unit | 0.061182972778676 |
| 600 millimeter/second squared in gn | 0.061182972778676 |
| 600 millimeter/second squared in gravity | 0.061182972778676 |
| 600 millimeter/second squared in milligal | 60000 |
| 600 millimeter/second squared in kilogal | 0.06 |
Disclaimer: We make a great effort in making sure that conversion is as accurate as possible, but we cannot guarantee that. Before using any of the conversion tools or data, you must validate its correctness with an authority.
https://stat.ethz.ch/pipermail/r-help/2007-February/125006.html
# [R] glm gamma scale parameter
WILLIE, JILL JILWIL at SAFECO.com
Tue Feb 6 22:13:56 CET 2007
Thank you. You are correct, the shape parameter is what I need to
change & I think I see how to use the MASS package to do it...or if not,
at least I have enough now to figure it out.
A question to reconcile terminology which will speed me up, if you have
time to help me a bit more: phi = 'scale parameter' vs. 'dispersion
parameter' vs. 'shape parameter'? Excerpt below from the R intro.
manual defining phi & the stats complement discussion.
R intro:
distribution of y is of the form
f_Y(y; mu, phi) =
exp((A/phi) * (y lambda(mu) - gamma(lambda(mu))) + tau(y, phi))
where phi is a scale parameter (possibly known), and is constant for all
observations, A represents a prior weight, assumed known but possibly
varying with the observations, and $\mu$ is the mean of y. So it is
assumed that the distribution of y is determined by its mean and
possibly a scale parameter as well.
Statistics Complements to Modern Applied Statistics with S, Fourth
edition By W. N. Venables and B. D. Ripley Springer:
7.6 Gamma models
The role of dispersion parameter for the Gamma family is rather
different. This is a parametric family which can be fitted by maximum
likelihood, including its shape parameter
Jill Willie
Open Seas
Safeco Insurance
jilwil at safeco.com
-----Original Message-----
From: Prof Brian Ripley [mailto:ripley at stats.ox.ac.uk]
Sent: Tuesday, February 06, 2007 12:25 PM
To: WILLIE, JILL
Cc: r-help at stat.math.ethz.ch
Subject: Re: [R] glm gamma scale parameter
On Tue, 6 Feb 2007, Prof Brian Ripley wrote:
> I think you mean 'shape parameter'. If so, see the MASS package and
> ?gamma.shape.
Also http://www.stats.ox.ac.uk/pub/MASS4/#Complements
leads to several pages of discussion.
>
> glm() _is_ providing you with the MLE of the scale parameter, but
really no
> estimate of the shape (although summary.glm makes use of one).
>
>
> On Tue, 6 Feb 2007, WILLIE, JILL wrote:
>
>> I would like the option to specify alternative scale parameters when
>> using the gamma family, log link glm. In particular I would like the
>> option to specify any of the following:
>>
>> 1. maximum likelihood estimate
>> 2. moment estimator/Pearson's
>> 3. total deviance estimator
>>
>> Is this easy? Possible?
>>
>> In addition, I would like to know what estimation process (maximum
>> likelihood?) R is using to estimate the parameter if somebody knows
that
>> off the top of their head or can point me to something to read?
>>
>> I did read the help & search the archives but I'm a bit confused
trying
>> to reconcile the terminology I'm used to w/R terminology as we're
>> transitioning to R, so if I missed an obvious way to do this, or
stated
>> this question in a way that's incomprehensible, my apologies.
>>
>> Jill Willie
>> Open Seas
>> Safeco Insurance
>> jilwil at safeco.com
>>
>> ______________________________________________
>> R-help at stat.math.ethz.ch mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
https://math.answers.com/basic-math/What_is_7_fifths_in_a_percentage
# What is 7 fifths in a percentage?
Updated: 4/28/2022
Wiki User
10y ago
7 fifths in a percentage = 140%
= 7/5 * 100%
= 1.4 * 100%
= 140%
Wiki User
10y ago
| 69
| 167
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.078125
| 3
|
CC-MAIN-2024-10
|
latest
|
en
| 0.908697
|
https://blog.plover.com/2015/08/
# The Universe of Disco
Fri, 21 Aug 2015
This is page 6 of the Cosmic Call message. An explanation follows.
The 10 digits again:
0 1 2 3 4 5 6 7 8 9
Page 6 discusses fundamental particles of matter, the structure of the hydrogen and helium atoms, and defines glyphs for the most important chemical elements.
Depicted at top left is the hydrogen atom, with a proton in the center and an electron circulating around the outside. This diagram is equated to the glyph for hydrogen.
The diagram for helium is similar but has two electrons, and its nucleus has two protons and also two neutrons.
Proton Neutron Electron
The illustrations may puzzle the aliens, depending on how they think of atoms. (Feynman once said that this idea of atoms as little solar systems, with the electrons traveling around the nucleus like planets, was a hundred years old and out of date.) But the accompanying mass and charge data should help clear things up. The first formula says
!!M_p = 1836\cdot M_e!!
the mass of the proton is 1836 times the mass of the electron, and that 1836, independent of the units used and believed to be a universal and fundamental constant, ought to be a dead giveaway about what is being discussed here.
If you want to communicate fundamental constants, you have a bit of a problem. You can't tell the aliens that the speed of light is !!1.8\cdot10^{12}!! furlongs per fortnight without first explaining furlongs and fortnights (as is actually done on a later page). But the proton-electron mass ratio is dimensionless; it's 1836 in every system of units. (Although the value is actually known to be 1836.15267; I don't know why a more accurate value wasn't given.)
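Since the ratio is dimensionless, anyone can check it in whatever units they like; a quick sketch in Python (the CODATA mass values in kilograms are my addition, not part of the message):

```python
# CODATA values in kilograms -- any consistent unit gives the same ratio,
# because the ratio is dimensionless
PROTON_MASS = 1.67262192369e-27
ELECTRON_MASS = 9.1093837015e-31

ratio = PROTON_MASS / ELECTRON_MASS
print(round(ratio))       # 1836, the integer sent in the message
print(round(ratio, 4))    # close to the more accurate 1836.1527
```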
This is the first use of subscripts in the document. It also takes care of introducing the symbol for mass. The following formula does the same for charge: !!Q_p = -Q_e!!.
The next two formulas, accompanying the illustration of the helium atom, describe the mass (1.00138 protons) and charge (zero) of the neutron. I wonder why the authors went for the number 1.00138 here instead of writing the neutron-electron mass ratio of 1838 for consistency with the previous ratio. I also worry that this won't be enough for the aliens to be sure about the meaning of . The 1836 is as clear as anything can be, but the 0 and -1 of the corresponding charge ratios could in principle be a lot of other things. Will the context be enough to make clear what is being discussed? I suppose it has to; charge, unlike mass, comes in discrete units and there is nothing like the 1836.
The second half of the page reiterates the symbols for hydrogen and helium and defines symbols for eight other chemical elements. Some of these appear in organic compounds that will be discussed later; others are important constituents of the Earth. It also introduces symbol for “union” or “and”: . For example, sodium is described as having 11 protons and 12 neutrons.
Hydrogen Helium Carbon Nitrogen Oxygen Aluminium Silicon Iron Sodium Chlorine
Most of these new glyphs are not especially mnemonic, except for hydrogen—and aluminium, which is spectacular.
The blog is going on hiatus until early September. When it returns, the next article will discuss page 7, shown at right. It has three errors. Can you find them? (Click to enlarge.)
Wed, 19 Aug 2015
This is page 5 of the Cosmic Call message. An explanation follows.
The 10 digits again:
0 1 2 3 4 5 6 7 8 9
Page 5 discusses two basic notions of geometry. The top half concerns circles and introduces !!\pi!!. There is a large circle with its radius labeled :
The outer diameter is then which is !!2\cdot r!!.
The perimeter is twice times the radius , and the area is times the square of the radius . What is ? It's !!\pi!! of course, as the next line explains, giving !!\pi = 3.1415926535897932…365698614212904!!, which gives enough digits on the front to make clear what is being communicated. The trailing digits are around the 51-billionth place and communicate part of the state of our knowledge of !!\pi!!. I almost wish the authors had included a sequence of fifteen random digits at this point, just to keep the aliens wondering.
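A double-precision float carries only about 16 significant digits, but that is enough to confirm the front of the quoted value (a quick sanity check of my own, not part of the original post):

```python
import math

# The leading digits of pi as transmitted in the message
leading = "3.1415926535897932"

# repr() of an IEEE double gives pi to 16 significant digits,
# which is a prefix of the string above
assert leading.startswith(repr(math.pi))
print("message prefix agrees with math.pi:", repr(math.pi))
```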
The bottom half of the page is about the Pythagorean theorem. Here there's a rather strange feature. Instead of using the three variables from the previous page, , the authors changed the second one and used instead. This new glyph does not appear anywhere else. A mistake, or did they do it on purpose?
In any case, the Pythagorean formula is repeated twice, once with exponents and once without, as both !!z^2=x^2+b^2!! and !!z\cdot z = x\cdot x + b\cdot b!!. I think they threw this in just in case the exponentiation on the previous pages wasn't sufficiently clear. I don't know why the authors chose to use an isosceles right triangle; why not a 3–4–5 or some other scalene triangle, for maximum generality? (What if the aliens think we think the Pythagorean theorem applies only for isosceles triangles?) But perhaps they were worried about accurately representing any funny angles on their pixel grid. I wanted to see if it would fit, and it does. You have to make the diagram smaller, but I think it's still clear:
(I made it smaller than it needed to be and then didn't want to redo it.)
I hope this section will be sufficiently unmistakable that the aliens will see past the oddities.
The next article will discuss page 6, shown at right. (Click to enlarge.) Try to figure it out before then.
Mon, 17 Aug 2015
This is page 4 of the Cosmic Call message. An explanation follows.
Reminder: page 1 explained the ten digits:
0 1 2 3 4 5 6 7 8 9
And the equal sign . Page 2 explained the four basic arithmetic operations and some associated notions:
addition subtraction multiplication division negation ellipsis (…) decimal point indeterminate
This page, headed with the glyph for “mathematics” , describes the solution of simple algebraic equations and defines glyphs for three variables, which we may as well call !!x,y,!! and !!z!!:
x y z
Each equation is introduced by the locution which means “solve for !!x!!”. This somewhat peculiar “solve” glyph will not appear again until page 23.
For example the second equation is !!x+4=10!!:
Solve for !!x!!: !!x+4=10!!
The solution, 6, is given over on the right:
!!x=6!!
After the fourth line, the equations to be solved change from simple numerical equations in one variable to more abstract algebraic relations between three variables. For example, if
Solve for !!x!!: !!x\cdot y=z!!
then
!!x=z\div y!!.
The next-to-last line uses a decimal fraction in the exponent, !!0.5!!: . On the previous page, the rational fraction !!1\div 2!! was used. Had the same style been followed, it would have looked like this: .
Finally, the last line defines !!y=x^3!! and then, instead of an algebraic solution, gives a graph of the resulting relation, with axes labeled. The scale on the axes is not the same; the !!x!!-coordinate increases from 0 to 20 pixels, but the !!y!!-coordinate increases from 0 to 8000 pixels because !!20^3 = 8000!!. If the axes were drawn to the same scale, the curve would go up by 8,000 pixels. Notice that the curve does not peek above the !!x!!-axis until around !!x=8, y=512!! or so. The authors could have stated that this was the graph of !!y=x^3\div 400!!, but chose not to.
I also wonder what the aliens will make of the arrows on the axes. I think the authors want to show that our coordinates increase going up and to the right, but this seems like a strange and opaque way to do that. A better choice would have been to use a function with an asymmetric graph, such as !!y=2^x!!.
(After I wrote that I learned that similar concerns were voiced about the use of a directional arrow in the Pioneer plaque.
(Wikipedia says: “An article in Scientific American criticized the use of an arrow because arrows are an artifact of hunter-gatherer societies like those on Earth; finders with a different cultural heritage may find the arrow symbol meaningless.”)
The next article will discuss page 5, shown at right. (Click to enlarge.) Try to figure it out before then.
Sun, 16 Aug 2015
My overall SE posting volume was down this month, and not only did I post relatively few interesting items, I've already written a whole article about the most interesting one. So this will be a short report.
• I already wrote up Building a box from smaller boxes on the blog here. But maybe I have a couple of extra remarks. First, the other guy's proposed solution is awful. It's long and complicated, which is forgivable if it had answered the question, but it doesn't. And the key point is “blah blah blah therefore code a solver which visits all configurations of the search space”. Well heck, if this post had just been one sentence that ended with “code a solver which visits all configurations of the search space” I would not have any complaints about that.
As an undergraduate I once gave a talk on this topic. One of my examples was the problem of packing 31 dominoes into a chessboard from which two squares have been deleted. There is a simple combinatorial argument why this is impossible if the two deleted squares are the same color, say if they are opposite corners: each domino must cover one square of each color. But if you don't take time to think about the combinatorial argument you could waste a lot of time on computer search learning that there is no solution in that case, and completely miss the deeper understanding that it brings you. So this has been on my mind for a long time.
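The coloring argument for the mutilated chessboard can be made concrete in a few lines (a sketch of my own, not the talk's actual material): every domino covers one square of each color, so a tiling can exist only if the two colors survive in equal numbers.

```python
def color_counts(removed):
    """Count the two square colors of an 8x8 board after deleting the
    positions in `removed`, a set of (row, col) pairs."""
    counts = [0, 0]
    for r in range(8):
        for c in range(8):
            if (r, c) not in removed:
                counts[(r + c) % 2] += 1
    return counts

# Opposite corners have the same color, so the counts come out unequal
# and no tiling by 31 dominoes is possible:
print(color_counts({(0, 0), (7, 7)}))   # [30, 32]

# Two adjacent squares have different colors; the parity obstruction
# vanishes (and in fact a tiling exists):
print(color_counts({(0, 0), (0, 1)}))   # [31, 31]
```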
• I wrote a few posts this month where I thought I gave good hints. In How to scale an unit vector !!u!! in such way that !!a u\cdot u=1!! where !!a!! is a scalar I think I did a good job identifying the original author's confusion; he was conflating his original unit vector !!u!! and the scaled one, leading him to write !!au\cdot u=1!!. This is sure to lead to confusion. So I led him to the point of writing !!a(bv)\cdot(bv)=1!! and let him take it from there. The other proposed solution is much more rote and mechanical. (“Divide this by that…”)
In Find numbers !!\overline{abcd}!! so that !!\overline{abcd}+\overline{bcd}+\overline{cd}+d+1=\overline{dcba}!! the OP got stuck partway through and I specifically addressed the stuckness; other people solved the problem from the beginning. I think that's the way to go, if the original proposal was never going to work, especially if you stop and say why it was never going to work, but this time OP's original suggestion was perfectly good and she just didn't know how to get to the next step. By the way, the notation !!\overline{abcd}!! here means the number !!1000a+100b+10c+d!!.
In Help finding the limit of this series !!\frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \cdots!! it would have been really easy to say “use the formula” or to analyze the series de novo, but I think I almost hit the nail on the head here: it's just like !!1+\frac12 + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \cdots!!, which I bet OP already knows, except a little different. But I pointed out the wrong difference: I observed that the first sequence is one-fourth the second one (which it is) but it would have been simpler to observe that it's just the second one without the !!1+\frac12!!. I had to review it just now to give the simpler explanation, but I sure wish I'd thought of it at the time. Nobody else pointed it out either. Best of all would have been to mention both methods. If you can notice both of them you can solve the problem without the advance knowledge of the value of !!1+\frac12+\frac14+\ldots!!, because you have !!4S = 1+\frac12 + S!! and then solve for !!S!!.
In Visualization of Rhombus made of Radii and Chords it seemed that OP just needed to see a diagram (“I really really don't see how two circles can form a rhombus?”), so I drew one.
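As a footnote to the geometric-series item above: all three ways of seeing that !!\frac14+\frac18+\frac1{16}+\cdots = \frac12!! are easy to confirm numerically (my own sketch, not from the original answers):

```python
# Partial sum of 1/4 + 1/8 + 1/16 + ... out to 2**-59
s = sum(1 / 2**k for k in range(2, 60))

# View 1: it is one-fourth of 1 + 1/2 + 1/4 + ... = 2
assert abs(s - 2 / 4) < 1e-12

# View 2: it is that same series with the leading 1 + 1/2 removed
assert abs(s - (2 - 1 - 1/2)) < 1e-12

# View 3: no prior knowledge needed: from 4S = 1 + 1/2 + S,
# solve for S directly
S = (1 + 1/2) / 3
assert S == 0.5
print(s, S)
```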
Fri, 14 Aug 2015
Earlier articles: Introduction Common features Page 1 (numerals) Page 2 (arithmetic)
This is page 3 of the Cosmic Call message. An explanation follows.
Reminder: page 1 explained the ten digits:
0 1 2 3 4 5 6 7 8 9
And the equal sign . Page 2 explained the four basic arithmetic operations and some associated notions:
addition subtraction multiplication division negation ellipsis (…) decimal point indeterminate
This page, headed with the glyph for “mathematics” , explains notations for exponentiation and scientific notation. (This notation was first used on page 1 in the Mersenne prime .)
Exponentiation could be represented by an operator, but instead the authors have chosen to represent it by a superscripted position on the page, as is done in conventional mathematical notation. This saves space.
The top section of the page has small examples of exponentiation, including for example !!5^3=125!!:
!!5^3=125!!
There is a section that follows with powers of 10: !!10^1=10, 10^2=100, 10^3=1000, !! and more interestingly !!10^{{}^-2} = 0.01!!:
!!10^{{}^-2} = 0.01!!
This is a lead-in to the next section, which expresses various quantities in scientific notation, which will recur frequently later on. For example, !!0.45!! can be written as !!45\times 10^{{}^-2}!!:
!!45\times10^{{}^-2} = 0.45!!
Finally, there is an offhand remark about the approximate value of the square root of 2:
!!2^{1\div 2} = 1.41421356…!!
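All three of these lines check out with ordinary float arithmetic (my own quick verification, not part of the message):

```python
import math

assert math.isclose(10 ** -2, 0.01)                  # negative exponent
assert math.isclose(45 * 10 ** -2, 0.45)             # scientific notation
assert repr(math.sqrt(2)).startswith("1.41421356")   # the quoted digits
print("all three check out")
```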
The next article will discuss page 4, shown at right. (Click to enlarge.) Try to figure it out before then.
Wed, 12 Aug 2015
On Tuesday I discussed an interesting solution to the problem of turning this:
no X X on
A --------------- C
into this:
no X X off X on
A ------ B ------ C
Dave Du Cros has suggested an alternative solution: Make the changes required to turn off feature X, and commit them as B, as in my solution:
no X X on X off
A ------ C ------ B
Then use git-revert to revert the changes, making a new C commit in the right place:
no X X on X off X on
A ------ C ------ B ------ C'
C' and C have identical trees.
Then use git-rebase to squash together C and B:
no X X off X on
A --------------- B ------ C'
This has the benefit of not requiring anything strange. I think my solution is more general, but it's also weird, and it's not clear that the increased generality is useful.
However, what if there were a git-reorder-commits command? Then my solution would seem much less weird. It would look like this: create B, as before, and do:
git reorder-commits 0 1
This last command would mean that the previous two commits, normally HEAD~1 and HEAD~0, should switch places. This might be a useful standard tool. Or similarly to turn
B -- 3 -- 2 -- 1 -- 0
into
B -- 2 -- 0 -- 3 -- 1
one would use
git reorder-commits 2 0 3 1
I think git-reorder-commits would be easy to implement, as a loop atop git-commit-tree, as in the previous article.
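The intended meaning of the arguments (each a HEAD~n index, listed in the desired oldest-to-newest order) can be modeled as a pure list operation; this is a sketch of the semantics only, not of the git plumbing:

```python
def reorder(history, spec):
    """history: commits from oldest to newest; spec: HEAD~n indices in
    the desired oldest-to-newest order. Only the newest len(spec)
    commits are rearranged; HEAD~0 is the last element of history."""
    fixed = history[:len(history) - len(spec)]
    return fixed + [history[len(history) - 1 - n] for n in spec]

# "git reorder-commits 0 1" swaps the two most recent commits:
print(reorder(["A", "C", "B"], [0, 1]))              # ['A', 'B', 'C']

# The four-commit example:
print(reorder(["B", "3", "2", "1", "0"], [2, 0, 3, 1]))
# ['B', '2', '0', '3', '1']
```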
[ Addendum 20200531: Curtis Dunham suggested a much better interface to this functionality than my git-reorder-commits proposal. ]
Earlier articles: Introduction Common features
This is page 1 of the Cosmic Call message. An explanation follows.
This page, headed with the glyph for “mathematics” , explains the numeral symbols that will be used throughout the rest of the document. I should warn you that these first few pages are a little dull, establishing basic mathematical notions. The good stuff comes a little later.
The page is in three sections. The first section explains the individual digit symbols. A typical portion looks like this:
•••• ••• = 0111 = 7
Here the number 7 is written in three ways: first, as seven dots, probably unmistakable. Second, as a 4-bit binary number, using the same bit symbols that are used in the page numbers. And third, as the glyph that will stand for 7 throughout the message. The three forms are separated by the glyph , which means “equals”. The ten digits, in order from 0 to 9, are represented by the glyphs
0 1 2 3 4 5 6 7 8 9
The authors did a great job selecting glyphs that resemble the numerals they represent. All have some resemblance except for 4, which has 4 horizontal strokes. Watch out for 4; it's easy to confuse with 3.
The second section serves two purposes. It confirms the meaning of the ten digits, and it also informs the aliens that the rest of the message will write numerals in base ten. For example, the number 14:
••••• ••••• •••• = 14
Again, there are 14 dots, an equal sign, and the numeral 14, this time written with the two glyphs (1) and (4). The base-2 version is omitted this time, to save space. The aliens know from this that we are using base 10; had it been, say, base 8, the glyphs would have been .
People often ask why the numbers are written in base 10, rather than say in base 2. One good answer is: why not? We write numbers in base 10; is there a reason to hide that from the aliens? The whole point of the message is to tell the aliens a little bit about ourselves, so why disguise the fact that we use base-10 numerals? Another reason is that base-10 numbers are easier to proofread for the humans sending the message.
The third section of the page is a list of prime numbers from 2 to 89:
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89
and finally the number !!2^{3021377}-1!!,
which was the largest prime number known to humans at the time. (The minus sign and exponentiation notation are explained on later pages.) Why? Again, to tell the aliens about ourselves: here's a glimpse of the limits of our mathematical knowledge.
I often wonder what the aliens will think of the !!2^{3021377}-1!!. Will they laugh at how cute we are, boasting about the sweet little prime number we found? Or will they be astounded and wonder why we think we know that such a big number is prime?
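The list of small primes at the top of the section is easy to regenerate (my own sketch, not part of the message):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

print(primes_up_to(89))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
#  53, 59, 61, 67, 71, 73, 79, 83, 89]
```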
The next article, to appear 2015-08-12, will discuss page 2, shown at right. (Click to enlarge.) Try to figure it out before then.
Earlier articles: Introduction Common features Page 1 (numerals)
This is page 2 of the Cosmic Call message. An explanation follows.
Reminder: the previous page explained the ten digits:
0 1 2 3 4 5 6 7 8 9
The page is in five sections, three on top and two below.
The first four sections explain addition , subtraction , multiplication , and division . Each is explained with a series of five typical arithmetic equalities. For example, !!4\times 3= 12!!:
The subtraction sign actually appeared back on page 1 in the Mersenne prime !!2^{3021377}-1!! .
The negative sign is introduced in connection with subtraction, since !!1-2={}^-1!!:
Note that the negative-number sign is not the same as the subtraction sign.
The decimal point is introduced in connection with division. For example, !!3\div 2 = 1.5!!:
There is also an attempt to divide by zero:
It's not clear what the authors mean by this; the mysterious glyph does not appear anywhere else in the document. What did they think it meant? Infinity? Indeterminate? Well, I found out later they published a cheat sheet, which assigns the meaning “undetermined” to this glyph. Not a great choice, in my opinion, because !!1÷0!! is not numerically equal to anything.
For some reason, perhaps because of space limitations, the authors have stuck the equation !!0-1 = {}^-1!! at the bottom of the division section.
The fifth section, at lower right, displays some nonterminating decimal fractions and introduces the ellipsis or ‘…’ symbol. For example, !!1\div 9 = 0.1111\ldots!!:
I would have put !!2÷27 = 0.0740…!! here instead of !!2\div 3!!, which I think is too similar to the other examples.
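The repetition is just what long division produces; a small sketch (my own illustration, not part of the message):

```python
def decimal_digits(num, den, count):
    """First `count` digits after the decimal point of num/den,
    computed by ordinary long division."""
    digits = []
    r = num % den
    for _ in range(count):
        r *= 10
        digits.append(r // den)
        r %= den
    return digits

print(decimal_digits(1, 9, 8))    # [1, 1, 1, 1, 1, 1, 1, 1]
print(decimal_digits(2, 3, 8))    # [6, 6, 6, 6, 6, 6, 6, 6]
print(decimal_digits(2, 27, 9))   # [0, 7, 4, 0, 7, 4, 0, 7, 4]
```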
The next article, to appear 2015-08-14, will discuss page 3, shown at right. (Click to enlarge.) Try to figure it out before then.
Tue, 11 Aug 2015
I know, you want to say “Why didn't you just use git-rebase?” Because git-rebase wouldn't work here, that's why. Let me back up.
Say I have commit A, in which feature X does not exist yet. Then in commit C, I implement feature X.
But I realize what I really wanted was to have A, then B, in which feature X was implemented but disabled, and then C in which feature X was enabled. The C I want is just like the C that I have, but I don't have the intervening B.
I have:
no X X on
A --------------- C
I want:
no X X off X on
A ------ B ------ C
One way to do this is to use git-rebase in edit mode to split C into B and C. To do this I would pause while rebasing C, edit C to disable feature X, commit the result, which is B, then undo the previous edits to re-enable X, and continue the rebase, creating C. That's two sets of edits. I could back up the files before the first edit and then copy them back for the second edit, but that's the SVN way, so I'm not going to do that.
Now someone wants me to use git-rebase to “reorder the commits”. Their idea is: I have C. Edit C to disable feature X and commit the result as B':
no X X on X off
A ------ C ------ B'
Now use interactive git-rebase to reorder B' and C. But this will not work. git-rebase will construct a patch for turning C into B' and will try to apply it to A. This will fail completely, because a patch for turning C into B' is a patch for turning off feature X once it is implemented. Feature X is not in A and you can't turn something off that isn't there. So the rebase will fail to apply the patch.
What I did instead was rather bizarre, using a plumbing command, but worked well. I wrote the code to disable X, and committed it as B, obtaining this:
no X X on X off
A ------ C ------ B
Now B and C have the files I want in them, but their parents are wrong. That is, the history is in the wrong order, but if the parent of C was B and the parent of B was A, everything would be perfect.
But we can't just change the parents; we have to create a new commit, say B', which has the same files as B but whose parent is A instead of C, and we have to create a new commit C' which has the same files as C but whose parent is B' instead of A.
This is what git-commit-tree does. You give it a tree object containing the files you want, a list of parents, and a commit message, and it creates the commit you asked for and prints its SHA1.
When we use git-commit, it first turns the index into a tree, with git-write-tree, then creates the commit, with git-commit-tree, and then moves the current head ref up to the new commit. Here we will use git-commit-tree directly.
So I did:
% git checkout -b XX A
Switched to a new branch 'XX'
% git commit-tree -p HEAD B^{tree}
[SHA of the new commit, printed here]
% git reset --hard [that SHA]
% git commit-tree -p HEAD C^{tree}
ce46beb90d4aa4e2c9fe0e2e3d22eea256edceac
% git reset --hard ce46beb90d4aa4e2c9fe0e2e3d22eea256edceac
The first git-commit-tree
% git commit-tree -p HEAD B^{tree}
says to make a commit whose tree is the same as B's, and whose parent is the current HEAD, which is A. (B^{tree} is a special notation that means to get the tree from commit B.) Git pauses here to read the commit message from standard input (not shown), and prints the SHA of the new commit on the terminal. I then use git-reset to move the current head ref, XX, up to the new commit. Normally git-commit would do this for us, but we're not using git-commit today.
Then I do the same thing with C:
% git commit-tree -p HEAD C^{tree}
makes a new commit whose tree is the same as C's, and whose parent is the current head, which looks just like B. Again it reads a commit message from standard input, and prints the SHA of the new commit on the terminal, and again I use git-reset to move XX up to the new commit.
Now I have what I want and I only had to edit the files once. To complete the task I just reset the head of my working branch to wherever XX is now, discarding the old A-C-B branch in favor of the new A-B-C branch. If there's an easier way to do this, I don't know it.
It seems to me that there have been a number of times in the past when I wanted to do something like reordering commits, and git-rebase did not do what I wanted because it reorders patches and not commits. I should keep my eyes open, and see if this comes up again, and if it is worth automating.
[ Thanks to Jeremy Leader for suggesting I write this up and to Jeremy Leader and Rik Signes for advance editing. ]
[ Addendum 20150813: a followup article ]
[ Addendum 20200531: a better way to accomplish the same thing ]
Sun, 09 Aug 2015
Earlier articles: Introduction
(At left is page 1 of the Cosmic Call message. For an enlarged version of the image, click it.)
First, some notes about the general format of each page. The Cosmic Call message was structured as 23 pages, each a 127×127 bitmap. The entire message was therefore 127×127×23 bits, and this would hopefully be suggestive to the aliens: they could try decoding it as 127 pages of 127×23-bit images, which would produce garbage, or as 23 pages of 127×127-bit images, which is correct. Or they might try decoding it as a single 127×2921-bit image, which would also work. But the choices are quite limited and it shouldn't take long to figure out which one makes sense.
To assist in the framing, each page of the message is surrounded by a border of black pixels and then a second smaller border of white pixels. If the recipient misinterpreted the framing of the bit sequence, say by supposing that the message was made of lines of 23 pixels, it would be immediately apparent that something was wrong, as at right. At the very least the regular appearance of the black border pixels every 127 positions, and the fact that the message began with 128 black pixels, would suggest that there was something significant about that number. If the aliens fourier-transform the message, there should be a nice big spike at the 127 mark.
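The arithmetic behind “the choices are quite limited” is just that 23 and 127 are both prime, so the bit stream factors into rectangles in essentially one way (a quick check of my own):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

total = 127 * 127 * 23
print(total)    # 370967 bits in the whole message
assert is_prime(127) and is_prime(23)

# The only rectangular framings built from these factors:
# 23 pages of 127x127, 127 pages of 127x23, or one 127x2921 image
assert 2921 == 127 * 23
assert 127 * 2921 == total
```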
Most of the message is encoded as a series of 5×7-pixel glyphs. The glyphs were generated at random and then subject to later filtering: candidate glyphs were discarded unless they differed from previous glyphs in enough bit positions. This was to help the recipients reconstruct the glyphs if some of the bits were corrupted in transmission, as is likely.
The experimenters then eyeballed the glyphs and tried to match glyphs with their meanings in a way that would be easy for humans to remember, to aid in proofreading. For example, the glyph they chose to represent the digit 7 was .
People frequently ask why the message uses strange glyphs instead of standard hindu-arabic numerals. This is explained by the need to have the glyphs be as different as possible. Communication with other stars is very lossy. Imagine trying to see a tiny flickering light against the background of a star at a distance of several light years. In between you and the flickering light are variable dust and gas clouds. Many of the pixels are likely to be corrupted in transmission. The message needs to survive this corruption. So glyphs are 35 bits each. Each one differs from the other glyphs in many positions, whereas a change of only a few pixels could change a standard 6 into an 8 or vice versa. A glyph is spread across multiple lines of the image, which makes it more resistant to burst errors: even if an entire line of pixels is destroyed in transit, no entire glyph will be lost.
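The generate-and-filter procedure described here is easy to sketch (the minimum Hamming distance of 10 is an arbitrary value of mine; I don't know the threshold the authors actually used):

```python
import random

GLYPH_BITS = 5 * 7      # each glyph is a 5x7 bitmap
MIN_DISTANCE = 10       # assumed threshold, for illustration only

def hamming(a, b):
    """Number of bit positions where glyphs a and b differ."""
    return bin(a ^ b).count("1")

def make_glyphs(count, rng):
    """Generate random 35-bit glyphs, discarding candidates that are
    too close to any glyph already accepted."""
    glyphs = []
    while len(glyphs) < count:
        candidate = rng.getrandbits(GLYPH_BITS)
        if all(hamming(candidate, g) >= MIN_DISTANCE for g in glyphs):
            glyphs.append(candidate)
    return glyphs

glyphs = make_glyphs(10, random.Random(0))
assert all(hamming(a, b) >= MIN_DISTANCE
           for i, a in enumerate(glyphs) for b in glyphs[i + 1:])
```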
At the top left and top right of each page are page numbers. For example, page number 1: The page numbers are written in binary, most significant bit first, with representing a 1 bit and representing a 0 bit. These bit shapes were chosen to be resistant to data corruption; you can change any 4 of the 9 pixels in either shape and the recipient can still recover the entire bit unambiguously. There is an ambiguity about whether the numerals are written right to left or left to right—is the number 1 or the number 16?—but the aliens should be able to figure it out by comparing page numbers on consecutive pages; this in turn will help them when time comes for them to figure out the digit symbols.
Every page has a topic header, in this case , which roughly means “mathematics”. The topics of the following pages are something like:
• 1–5 Mathematics
• 6–11,21 Physics
• 12–14,19–20 The Earth
• 15–18 Human anatomy and biochemistry
• 22 Cosmology
• 23 Questions
In the next article I'll explain the contents of page 1. Each following article will appear two or three days later and will explain another page.
Zip file of all 23 pages
Thu, 06 Aug 2015
A message to the aliens (introduction)
In 1999, two Canadian astrophysicists, Stéphane Dumas and Yvan Dutil, composed and sent a message into space. The message was composed of twenty-three pages of bitmapped data, and was sent from the RT-70 radio telescope in Yevpatoria, Ukraine, as part of a set of messages called Cosmic Call.
The message images themselves are extremely compelling. I saw the first one in the book Beyond Contact by Brian McConnell:
I didn't think much of the rest of the book, but the image was arresting. After staring at it for a while, and convincing myself I understood the basic idea, I found the full set of images on Mike Matessa's web site, printed them out, and spent a happy couple of hours at the kitchen table deciphering them.
Sometimes when I gave conference talks, I would put this image on the screen during break, to give people something to think about before the class started up again. I like to say that it's fun to see if you're as smart as an alien, or at least as smart as the Canadian astrophysicists thought the aliens would be.
I invite you to try to understand what is going on in the first image, above. In a day or two I will post a full explanation, along with the second image. Over the next few weeks I hope to write a series of blog articles about the 23 pages, explaining the details of each.
If you can't wait that long for the full set, you can browse them here, or download a zip file.
Addendum 20151223: The series is now complete. The full set of articles is:
Tue, 04 Aug 2015
A few months ago I wrote an article about using Haskell's list monad to do exhaustive search, with the running example of solving this cryptarithm puzzle:
S E N D
+ M O R E
-----------
M O N E Y
(This means that we want to map the letters S, E, N, D, M, O, R, Y to distinct digits 0 through 9 to produce a five-digit and two four-digit numerals which, when added in the indicated way, produce the indicated sum.)
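Before the monadic machinery, it's worth noting that the puzzle also falls to a plain brute-force search over digit assignments (my own sketch, not from the original article):

```python
from itertools import permutations

def solve():
    # Try every assignment of distinct digits to S, E, N, D, M, O, R, Y
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue                    # no leading zeroes
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return send, more, money

print(solve())   # (9567, 1085, 10652)
```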
At the end, I said:
It would be an interesting and pleasant exercise to try to implement the same underlying machinery in another language. I tried this in Perl once, and I found that although it worked perfectly well, between the lack of the do-notation's syntactic sugar and Perl's clumsy notation for lambda functions (sub { my ($s) = @_; … } instead of \s -> …) the result was completely unreadable and therefore unusable. However, I suspect it would be even worse in Python because of semantic limitations of that language. I would be interested to hear about this if anyone tries it.

I was specifically worried about Python's peculiar local variable binding. But I did receive the following quite clear solution from Peter De Wachter, who has kindly allowed me to reprint it:

    digits = set(range(10))

    def to_number(*digits):
        n = 0
        for d in digits:
            n = n * 10 + d
        return n

    def let(x, f):
        return f(x)

    def unit(x):
        return [x]

    def bind(xs, f):
        ys = []
        for x in xs:
            ys += f(x)
        return ys

    def guard(b, f):
        return f() if b else []

after which the complete solution looks like:

    def solutions():
        return bind(digits - {0}, lambda s:
               bind(digits - {s}, lambda e:
               bind(digits - {s,e}, lambda n:
               bind(digits - {s,e,n}, lambda d:
               let(to_number(s,e,n,d), lambda send:
               bind(digits - {0,s,e,n,d}, lambda m:
               bind(digits - {s,e,n,d,m}, lambda o:
               bind(digits - {s,e,n,d,m,o}, lambda r:
               let(to_number(m,o,r,e), lambda more:
               bind(digits - {s,e,n,d,m,o,r}, lambda y:
               let(to_number(m,o,n,e,y), lambda money:
               guard(send + more == money, lambda:
               unit((send, more, money))))))))))))))

    print(solutions())

I think this shows that my fears were unfounded. This code produces the correct answer in about 1.8 seconds on my laptop.

Thus inspired, I tried doing it again in Perl, and it was not as bad as I remembered:

    sub bd { my ($ls, $f) = @_;
      [ map @{$f->($_)}, @$ls ]    # Yow
    }

    sub guard { $_[0] ? [undef] : [] }

I opted to omit unit/return since an idiomatic solution doesn't really need it. We can't name the bind function bind, because that is reserved for a built-in function; I named it bd instead. We could use Perl's operator overloading to represent binding with the >> operator, but that would require turning all the lists into objects, and it didn't seem worth doing.

We don't need to_number, because Perl does it implicitly, but we do need a set subtraction function, because Perl has no built-in set operators:

    sub remove { my ($b, $a) = @_;
      my %h = map { $_ => 1 } @$a;
      delete $h{$_} for @$b;
      return [ keys %h ];
    }
After which the solution, although cluttered by Perl's verbose notation for lambda functions, is not too bad:
    my $digits = [0..9];
    my $solutions =
      bd remove([0], $digits) => sub { my ($s) = @_;
      bd remove([$s], $digits) => sub { my ($e) = @_;
      bd remove([$s,$e], $digits) => sub { my ($n) = @_;
      bd remove([$s,$e,$n], $digits) => sub { my ($d) = @_;
      my $send = "$s$e$n$d";
      bd remove([0,$s,$e,$n,$d], $digits) => sub { my ($m) = @_;
      bd remove([$s,$e,$n,$d,$m], $digits) => sub { my ($o) = @_;
      bd remove([$s,$e,$n,$d,$m,$o], $digits) => sub { my ($r) = @_;
      my $more = "$m$o$r$e";
      bd remove([$s,$e,$n,$d,$m,$o,$r], $digits) => sub { my ($y) = @_;
      my $money = "$m$o$n$e$y";
      bd guard($send + $more == $money) => sub { [[$send, $more, $money]] }}}}}}}}};

    for my $s (@$solutions) {
        print "@$s\n";
    }
This runs in about 5.5 seconds on my laptop. I guess, but am not sure, that remove is mainly at fault for this poor performance.
An earlier version of this article claimed, incorrectly, that the Python version had lazy semantics. It does not; it is strict.
[ Addendum: Aaron Crane has done some benchmarking of the Perl version. A better implementation of remove (using an array instead of a hash) does speed up the calculation somewhat, but contrary to my guess, the largest part of the run time is bd itself, apparently because Perl function calls are relatively slow.
HN user masklinn tried a translation of the Python code into a version that returns a lazy iterator; I gather the changes were minor. ]
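One way such a lazy variant can be written (my own sketch, not masklinn's actual code) is with a generator expression, where each nested for clause plays the role of bind and the final if clause plays the role of guard:

```python
def to_number(*digits):
    n = 0
    for d in digits:
        n = n * 10 + d
    return n

def solutions():
    digits = set(range(10))
    # A generator expression is evaluated lazily, unlike the list-based
    # bind above; `for send in [...]` plays the role of `let`.
    return (
        (send, more, money)
        for s in digits - {0}
        for e in digits - {s}
        for n in digits - {s, e}
        for d in digits - {s, e, n}
        for send in [to_number(s, e, n, d)]
        for m in digits - {0, s, e, n, d}
        for o in digits - {s, e, n, d, m}
        for r in digits - {s, e, n, d, m, o}
        for more in [to_number(m, o, r, e)]
        for y in digits - {s, e, n, d, m, o, r}
        for money in [to_number(m, o, n, e, y)]
        if send + more == money
    )

print(next(solutions()))  # (9567, 1085, 10652)
```

Because the result is an iterator, asking for only the first solution stops the search as soon as one is found.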
# nth root
casevh casevh at gmail.com
Mon Feb 2 08:01:58 CET 2009
On Feb 1, 10:02 pm, Mensanator <mensana... at aol.com> wrote:
> On Feb 1, 8:20 pm, casevh <cas... at gmail.com> wrote:
>
>
>
> > On Feb 1, 1:04 pm, Mensanator <mensana... at aol.com> wrote:
>
> > > On Feb 1, 2:27 am, casevh <cas... at gmail.com> wrote:
>
> > > > On Jan 31, 9:36 pm, "Tim Roberts" <t.robe... at cqu.edu.au> wrote:
>
> > > > > Actually, all I'm interested in is whether the 100 digit numbers have an exact integral root, or not. At the moment, because of accuracy concerns, I'm doing something like
>
> > > > > for root in powersp:
> > > > > nroot = round(bignum**(1.0/root))
> > > > > if bignum==long(nroot)**root:
> > > > > .........
> > > > > which is probably very inefficient, but I can't see anything better.....
>
> > > > > Tim
>
> > > > Take a look at gmpy and the is_power function. I think it will do
> > > > exactly what you want.
>
> > > And the root function will give you the root AND tell you whether
> > > it was an integral root:
>
> > > >>> gmpy.root(a,13)
>
> > > (mpz(3221), 0)
>
> > > In this case, it wasn't.
>
> > I think the original poster wants to know if a large number has an
> > exact integral root for any exponent. is_power will give you an answer
> > to that question but won't tell you what the root or exponent is. Once
> > you know that the number is a perfect power, you can root to find the
> > root.
>
> But how do you know what exponent to use?
That's the gotcha. :) You still need to test all prime exponents until
you find the correct one. But it is much faster to use is_power to
check whether or not a number has representation as a**b and then try
all the possible exponents than to just try all the possible exponents
on all the numbers.
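A self-contained sketch (my addition, using only the standard library) of the exact-root test discussed in this thread. gmpy's root() and is_power() do the same job much faster for 100-digit numbers, but this shows the underlying idea:

```python
def iroot(n, k):
    """Return (r, exact): r = floor(n ** (1/k)) computed with integer
    Newton iteration, and exact is True iff r ** k == n."""
    if n < 0 or k < 1:
        raise ValueError("need n >= 0 and k >= 1")
    if n in (0, 1) or k == 1:
        return n, True
    # Start from a power of two guaranteed to be >= the true root.
    r = 1 << -(-n.bit_length() // k)
    while True:
        nxt = ((k - 1) * r + n // r ** (k - 1)) // k
        if nxt >= r:
            break
        r = nxt
    return r, r ** k == n

def perfect_power(n):
    """Return (root, exponent) if n == root ** exponent for some
    exponent >= 2, else None.  As the thread notes, trying only prime
    exponents would suffice; for simplicity we try them all."""
    for k in range(2, n.bit_length() + 1):
        r, exact = iroot(n, k)
        if exact and r > 1:
            return r, k
    return None
```

For example, iroot(3221 ** 13, 13) returns (3221, True), and adding 1 to the argument flips the exact flag to False.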
Business Analysis Questions on Costing, Correlation, and Software Programs
## Question 1
Peter, Operations manager of Borehamwood Farms Limited (BF Ltd), lacks strong grounding in both economics and mathematics for realistic business decision-making. However, Peter has been able to use the cost diagram below to illustrate and influence costing decisions in the company, BF Ltd:
The company BF Ltd has the following records for its product:
Budgeted annual output is 120000 units; fixed cost amount to £40,000; variable cost is £0.50 per unit and the sales price is £1.00 per unit.
a) Develop a mathematical model using the cost information above.
b) Calculate the profit or loss for BF Ltd using the above information.
c) Draw a graph with a spreadsheet for a five-year projection from January 2021, factoring in an expected annual increase of 10% in variable cost and a 5% increase in both the sales quantity and price (N.B. the changes take effect from January 2021). Critically analyse and comment on the costing behaviour of the projections for the total revenue and total cost relationship.
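A minimal sketch of parts (a) to (c), using the figures given (Python here standing in for the spreadsheet model):

```python
# Cost model for BF Ltd: fixed costs £40,000, variable cost £0.50/unit,
# sales price £1.00/unit, budgeted output 120,000 units.
FIXED = 40_000
VARIABLE = 0.50
PRICE = 1.00
OUTPUT = 120_000

def total_cost(q):
    return FIXED + VARIABLE * q

def profit(q):
    return PRICE * q - total_cost(q)

print(profit(OUTPUT))  # 20000.0 -> a £20,000 profit at budgeted output

# Part (c): five-year projection with variable cost rising 10% a year
# and both sales quantity and price rising 5% a year from January 2021.
for year in range(2021, 2026):
    t = year - 2021
    q = OUTPUT * 1.05 ** t
    revenue = PRICE * 1.05 ** t * q
    cost = FIXED + VARIABLE * 1.10 ** t * q
    print(year, round(revenue), round(cost))
```

Because revenue compounds at roughly 10.25% a year (5% quantity times 5% price) while variable cost per unit compounds at 10%, the gap between total revenue and total cost widens over the projection.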
The figures below represent forecast expenditure for advertising and expected sales revenue of footwear designer company, Summer Land Limited:
| Year | Advertising expenditure (X) (£000s) | Sales revenue (Y) (£000s) |
|------|-------------------------------------|---------------------------|
| 2021 | 2                                   | 60                        |
| 2022 | 5                                   | 100                       |
| 2023 | 4                                   | 70                        |
| 2024 | 6                                   | 90                        |
| 2025 | 3                                   | 80                        |
Assuming you are a newly employed graduate business analyst, you are required to:
b) Plot a scatter diagram of the data and discuss the pattern of the relationship between the variables.
Critically analyse the impact of advertising expenditure on sales and advise the marketing manager on how the company can gain a competitive advantage in the footwear industry by adopting other relevant marketing tactics.
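As a sketch of the analysis (my addition; a spreadsheet's CORREL function gives the same result), the Pearson coefficient for the figures above works out to 0.8:

```python
from math import sqrt

# Advertising expenditure (X) and sales revenue (Y), both in £000s.
x = [2, 5, 4, 6, 3]
y = [60, 100, 70, 90, 80]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var_x = sum((a - mx) ** 2 for a in xs)
    var_y = sum((b - my) ** 2 for b in ys)
    return cov / sqrt(var_x * var_y)

print(pearson_r(x, y))  # 0.8 -> a fairly strong positive correlation
```

An r of 0.8 suggests a strong positive association, though with only five observations the result should be interpreted cautiously.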
BAX Plc produces and sells specialised computer software programs. Estimated unit data for next year are as follows:
| Per unit           | £   | £   |
|--------------------|-----|-----|
| Selling price      |     | 600 |
| Variable costs:    |     |     |
| Labour             | 200 |     |
| Materials          | 40  |     |
| Variable overheads | 10  | 250 |
| Profit             |     | 350 |
Anticipated fixed costs for the year are £80,000 for administration and £60,000 for selling and distribution. Estimated sales for the year are 640 software programs.
a) Determine:
i) The breakeven point in terms of the number of software programs sold.
ii) The margin of safety as a percentage of estimated sales.
b) The company's target profit for the year is £56,000.
i) Discuss whether the estimated sales volume will be sufficient to achieve this target.
ii) Evaluate the estimated sales volume that will neither cause profit to exceed nor fall short of the target profit.
c) Prepare a breakeven chart for BAX Plc, showing clearly the breakeven point and the margin of safety.
e) Critically analyse the benefits and limitations of the breakeven model, its application in marginal costing, and its use as a business strategy for BAX Plc.
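A quick arithmetic sketch (my addition) of parts (a) and (b): contribution per unit is £600 - £250 = £350, so:

```python
# Breakeven arithmetic for BAX Plc from the unit data above.
SELLING_PRICE = 600
VARIABLE_COST = 200 + 40 + 10    # labour + materials + variable overheads
FIXED_COSTS = 80_000 + 60_000    # administration + selling/distribution
ESTIMATED_SALES = 640
TARGET_PROFIT = 56_000

contribution = SELLING_PRICE - VARIABLE_COST            # £350 per unit
breakeven_units = FIXED_COSTS // contribution           # 400 programs
margin_of_safety = (ESTIMATED_SALES - breakeven_units) / ESTIMATED_SALES

# Sales volume needed to hit the target profit exactly.
target_units = (FIXED_COSTS + TARGET_PROFIT) // contribution

print(breakeven_units, margin_of_safety, target_units)  # 400 0.375 560
```

So the breakeven point is 400 programs, the margin of safety is 37.5% of estimated sales, and the 640 estimated sales comfortably exceed the 560 programs needed for the £56,000 target profit.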
As part of an interview for a position as a Graduate Business Analyst at KADLex Plc, you are required to demonstrate your research and business analytical skills. As a result, you are tasked to:
A. (i) Research the monthly digest of statistics or the national statistical archives for any two economic variables which you think may be correlated (e.g. retail price index/employment, inflation/money supply, GDP/employment, etc.) and
(ii) Critically analyse and present your results in writing to the interview panel.
Bear in mind the dangers of spurious correlation, and seek other factors which may be influencing your data, including outliers. Interpretation may be simpler if you select cross-sectional data taken at the same point in time (e.g. regional figures) rather than longitudinal data taken over a ten-year period.
B. From the company website, Mrs. Smart, the Chief Business Analyst, has stated: "Using insights gained from our intuitive tools like Search Engine Optimisation (SEO), we adjust our approach alongside continued analysis of your market, competitors, and customers. This means you are always ahead of industry changes."
Critically analyse how SEO can be used to the advantage of KADLex in the European marketing industry.