# change current
1. ### Change the current from a voltage source?
From what I understand so far: if I want to draw 16 A from a voltage source (230 V RMS, which is 325.269 V in amplitude, i.e. peak), can I simply add a resistor in parallel to achieve that? R = 325.269 V / 16 A = 20.329 ohm. Do you guys think I am on the right track? So if I simply want to...
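A quick sanity check of the arithmetic (a Python sketch, not part of the thread): note that if the 16 A target is an RMS current, Ohm's law should be applied with the RMS voltage, not the peak value.

```python
# Ohm's law: R = V / I
V_rms = 230.0          # source voltage, RMS
V_peak = 325.269       # = 230 * sqrt(2), peak amplitude
I_target = 16.0        # desired current in amperes

R_from_peak = V_peak / I_target   # the value computed in the post
R_from_rms = V_rms / I_target     # consistent RMS calculation

print(round(R_from_peak, 3))  # 20.329
print(round(R_from_rms, 3))   # 14.375
```

The two results differ by a factor of sqrt(2); for an RMS current target, the 14.375-ohm figure is the consistent one.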
# How to find eigenfunctions of an operator

Operators act on eigenfunctions in a way identical to multiplying the eigenfunction by a constant number. In general, applying an operator to a function produces a different function; however, in certain cases the outcome of the operation is the same function multiplied by a constant. Functions of this kind are called 'eigenfunctions' of the operator: if $\hat{A} f(x) = A\, f(x)$ for a given constant $A \in \mathbb{C}$, then $f(x)$ is an eigenfunction of the operator $\hat{A}$, and $A$ is the corresponding eigenvalue. It is easy to show that if $\hat{A}$ is a linear operator with an eigenfunction $f$, then any multiple of $f$ is also an eigenfunction of $\hat{A}$, with the same eigenvalue.

An example of an eigenfunction of $d/dx$ is $f(x) = e^{3x}$ (nothing special about the three here); any $e^{kx}$ works, where $k$ is a constant called the eigenvalue. Going to the operator $d^2/dx^2$, again any $e^{kx}$ is an eigenfunction, with the eigenvalue now $k^2$. Note that the same eigenvalue $k^2$ corresponds to the two eigenfunctions $e^{kx}$ and $e^{-kx}$. Another example: $\psi_1 = A e^{ik(x-a)}$ is an eigenfunction of the momentum operator $\hat{p}_x$, with eigenvalue $\hbar k$. These solutions do not go to zero at infinity, so they are not normalizable to one particle.

To find the eigenvalues $E$ of an operator represented by a matrix $H$, we set the determinant of the matrix $(H - EI)$ equal to zero and solve for $E$. The eigenfunctions of a Hermitian operator form a complete basis.
where $\hat{H}$, the Hamiltonian, is a second-order differential operator and $\psi$, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue $E$, interpreted as its energy.

In quantum mechanics, an operator does not change the 'direction' of its eigenvectors: applying it does not change the state of its eigenvectors ('eigenstates', 'eigenfunctions', 'eigenkets'). Not every function is an eigenfunction of a given operator; differentiation of $\sin x$, for instance, gives $\cos x$, which is not a constant multiple of $\sin x$. The eigenvalue $k^2$ found above is degenerate, belonging to both $e^{kx}$ and $e^{-kx}$.

Eigenfunctions of Hermitian operators are orthogonal. If the eigenvalues of two eigenfunctions are the same, the functions are said to be degenerate, and linear combinations of the degenerate functions can be formed that will be orthogonal to each other. Because the eigenfunctions of a Hermitian operator form a complete basis, any function (or vector, if we are working in a vector space) can be represented as a linear combination of eigenfunctions (eigenvectors) of that operator.

Typical exercises of this kind include: determine whether or not given functions are eigenfunctions of the operator $d/dx$; find the eigenvalue and eigenfunction of the operator $(x + d/dx)$; prove that if two operators $A$ and $B$ commute, eigenfunctions of $A$ must also be eigenfunctions of $B$; and, given the matrix of a Hermitian operator $H$ in some basis, find its eigenvalues and eigenvectors.
We can write such an equation in operator form by defining the differential operator $L = a_2(x)\,\frac{d^2}{dx^2} + a_1(x)\,\frac{d}{dx} + a_0(x)$; when put in self-adjoint form, such an operator is called a Sturm–Liouville operator. In fact, $L^2$ is equivalent to $\nabla^2$ on the spherical surface, so the $Y^m_l$ are the eigenfunctions of the operator $\nabla^2$.

The eigenfunctions of the position operator are Dirac delta functions, because the eigenvalue equation $\hat{x}\,\psi(x) = x_0\,\psi(x)$ (where $x_0$ is a constant) is satisfied by the delta function $\delta(x - x_0)$. This lack of normalizability is a common problem for this type of state. We will use the terms eigenvectors and eigenfunctions interchangeably, because functions are a type of vectors.

As a concrete example, we may seek the eigenvalues and corresponding orthonormal eigenfunctions of the Bessel differential equation of order $m$ [Sturm–Liouville type with $p(x) = x$, $q(x) = -m^2/x$, $w(x) = x$] over the interval $I = \{x \mid 0 < x < b\}$. Similarly, if we find the eigenfunctions of the parity operator, we also find some of the eigenfunctions of the Hamiltonian.

The fact that the variance is zero implies that every measurement of a quantity in one of its eigenstates is bound to yield the same result. Thus, an eigenstate is a state associated with a unique value of the dynamical variable corresponding to the operator; this unique value is simply the associated eigenvalue.
The eigenvalues of a Hermitian operator are all real, and it is easily demonstrated that eigenfunctions of a Hermitian operator with different eigenvalues are orthogonal. In the case of degeneracy (more than one eigenfunction with the same eigenvalue), a linear combination of degenerate eigenfunctions is again an eigenfunction with the same eigenvalue; while degenerate eigenfunctions may not be orthogonal as given, we can therefore always construct orthogonal eigenfunctions from them. Just as a symmetric matrix has orthogonal eigenvectors, a (self-adjoint) Sturm–Liouville operator has orthogonal eigenfunctions. A nice consequence is that the eigenfunctions form what we technically call a complete set, and the set of eigenfunctions of the operators of a given physical system contains the measurable information about the system.

Given two operators $A$ and $B$, and given that they commute, we can always find a common set of eigenfunctions of both. The eigenstates of the momentum operator are $e^{ikx}$, with $k$ allowed to be positive or negative.

As an exercise, one can use Mathematica to find the eigenvalues and eigenvectors (eigenfunctions) of the second-derivative operator $L$ defined on $x \in [-1, 1]$.
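A small numerical check of these claims (a Python/NumPy sketch, not from the original page): for a Hermitian matrix, `numpy.linalg.eigh` returns real eigenvalues and orthonormal eigenvectors.

```python
import numpy as np

# A 2x2 Hermitian matrix (equal to its conjugate transpose)
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(H, H.conj().T)

# eigh is specialized for Hermitian matrices
evals, evecs = np.linalg.eigh(H)

# Eigenvalues of a Hermitian operator are all real
print(np.allclose(evals.imag, 0.0))                    # True

# Eigenvectors belonging to different eigenvalues are orthonormal
print(np.allclose(evecs.conj().T @ evecs, np.eye(2)))  # True

# Each column satisfies H v = lambda v
for lam, v in zip(evals, evecs.T):
    print(np.allclose(H @ v, lam * v))                 # True, True
```

The same check works for any Hermitian matrix; for a degenerate eigenvalue, `eigh` still returns an orthonormal set, illustrating that orthogonal combinations can always be constructed.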
Which statement best describes the zeros of the function h(x) = (x + 9)(x² − 10x + 25)?
A. The function has three complex zeros.
B. The function has three distinct real zeros.
C. The function has one real zero and two complex zeros.
D. The function has two distinct real zeros.
Step-by-step explanation:
h(x) = (x + 9)(x² - 10x + 25) = (x + 9)(x - 5)²
When h(x) = 0, (x + 9) = 0 or (x - 5)² = 0.
=> x = -9 or x = 5.
Hence h(x) has two distinct real zeros, x = -9 and x = 5 (a double root), so option D is correct.
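The factorization can be checked numerically (a short Python sketch added for illustration):

```python
# h(x) = (x + 9)(x^2 - 10x + 25) = (x + 9)(x - 5)^2
def h(x):
    return (x + 9) * (x**2 - 10*x + 25)

# Both candidate zeros give h(x) = 0
print(h(-9))   # 0
print(h(5))    # 0

# x = 5 is a double root: the factor (x - 5)^2 vanishes to second order,
# so h has three real zeros counting multiplicity, but only two distinct ones.
distinct_zeros = {-9, 5}
print(len(distinct_zeros))  # 2
```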
GNU Octave Manual Version 3, by John W. Eaton, David Bateman, Søren Hauberg. Paperback (6"×9"), 568 pages. ISBN 095461206X. RRP £24.95 ($39.95).
## 26.5 Polynomial Interpolation
Octave comes with good support for various kinds of interpolation, most of which are described in section 27 Interpolation. One simple alternative to the functions described in that chapter is to fit a single polynomial to some given data points. To avoid a highly fluctuating polynomial, one most often wants to fit a low-order polynomial to the data. This usually means that the polynomial must be fitted in a least-squares sense, which is what the `polyfit` function does.
Function File: [p, s] = polyfit (x, y, n)
Return the coefficients of a polynomial p(x) of degree n that minimizes `sumsq (p(x(i)) - y(i))`,
to best fit the data in the least squares sense.
The polynomial coefficients are returned in a row vector.
If two output arguments are requested, the second is a structure containing the following fields:
`R`
The Cholesky factor of the Vandermonde matrix used to compute the polynomial coefficients.
`X`
The Vandermonde matrix used to compute the polynomial coefficients.
`df`
The degrees of freedom.
`normr`
The norm of the residuals.
`yf`
The values of the polynomial for each value of x.
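The same least-squares idea can be sketched outside Octave (a NumPy illustration, not part of the manual; the data below is made up): `numpy.polyfit` mirrors Octave's `polyfit` and likewise returns coefficients from highest to lowest degree.

```python
import numpy as np

# Noisy samples of roughly y = 2x + 1 (made-up data for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.9, 9.1])

# Fit a degree-1 polynomial in the least-squares sense
p = np.polyfit(x, y, 1)          # coefficients [slope, intercept]

# Evaluate the fitted polynomial and compute the residual norm,
# analogous to the `yf` and `normr` fields described above
yf = np.polyval(p, x)
normr = float(np.sqrt(np.sum((yf - y) ** 2)))

print(p.round(2).tolist())   # [2.0, 1.04]
```

As with Octave's `polyfit`, the fit minimizes the sum of squared residuals; the residual norm quantifies how well the low-order polynomial matches the data.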
In situations where a single polynomial isn't good enough, a solution is to use several polynomials pieced together. The function `mkpp` creates a piece-wise polynomial, `ppval` evaluates the function created by `mkpp`, and `unmkpp` returns detailed information about the function.
The following example shows how to combine two linear functions and a quadratic into one function. Each of these functions is expressed on adjoined intervals.
```
x = [-2, -1, 1, 2];
p = [ 0, 1, 0;
1, -2, 1;
0, -1, 1 ];
pp = mkpp(x, p);
xi = linspace(-2, 2, 50);
yi = ppval(pp, xi);
plot(xi, yi);
```
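The piecewise evaluation can also be sketched in plain Python (an illustrative re-implementation, not from the manual): like Octave's `pp` structure, each row of coefficients is interpreted in the local variable `xi - breaks[i]`, highest power first.

```python
def ppval(breaks, coefs, xi):
    """Evaluate a piece-wise polynomial at a single point xi.

    breaks -- interval endpoints, e.g. [-2, -1, 1, 2]
    coefs  -- one row of coefficients per interval, highest power first,
              expressed in the local variable (xi - breaks[i])
    """
    # Find the interval containing xi (clamping to the outer intervals)
    i = 0
    while i < len(breaks) - 2 and xi >= breaks[i + 1]:
        i += 1
    t = xi - breaks[i]          # local coordinate within the interval
    y = 0.0
    for c in coefs[i]:          # Horner's rule
        y = y * t + c
    return y

# Same data as the Octave example: two linear pieces and a quadratic
x = [-2, -1, 1, 2]
p = [[0, 1, 0],    # y = t           on [-2, -1], with t = xi + 2
     [1, -2, 1],   # y = t^2 - 2t + 1 on [-1, 1], with t = xi + 1
     [0, -1, 1]]   # y = 1 - t       on [1, 2],  with t = xi - 1

print(ppval(x, p, -1.0))  # 1.0 (end of the first linear piece)
print(ppval(x, p, 0.0))   # 0.0 (minimum of the quadratic piece)
print(ppval(x, p, 2.0))   # 0.0 (the last piece, 1 - t, at t = 1)
```

The pieces join continuously at the breakpoints, which is exactly what the adjoined-interval construction above is designed to achieve.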
Function File: yi = ppval (pp, xi)
Evaluate the piece-wise polynomial pp at the points xi. If `pp.d` is a scalar greater than 1, or an array, then the returned value yi will be an array of size `[d1, d2, ..., dk, length (xi)]`.
Function File: pp = mkpp (x, p)
Function File: pp = mkpp (x, p, d)
Construct a piece-wise polynomial structure from sample points x and coefficients p. The i-th row of p, `p (i,:)`, contains the coefficients for the polynomial over the i-th interval, ordered from highest to lowest. There must be one row for each interval in x, so `rows (p) == length (x) - 1`.
You can concatenate multiple polynomials of the same order over the same set of intervals using `p = [ p1; p2; ...; pd ]`. In this case, `rows (p) == d * (length (x) - 1)`.
d specifies the shape of the matrix p for all except the last dimension. If d is not specified it will be computed as `round (rows (p) / (length (x) - 1))` instead.
Function File: [x, p, n, k, d] = unmkpp (pp)
Extract the components of a piece-wise polynomial structure pp. These are as follows:
`x`
Sample points.
`p`
Polynomial coefficients for the sample intervals. `p (i, :)` contains the coefficients for the polynomial over interval i, ordered from highest to lowest. If `d > 1`, `p (r, i, :)` contains the coefficients for the r-th polynomial defined on interval i. However, this is stored as a 2-D array such that `c = reshape (p (:, j), n, d)` gives `c (i, r)` as the j-th coefficient of the r-th polynomial over the i-th interval.
`n`
Number of polynomial pieces.
`k`
Order of the polynomial plus 1.
`d`
Number of polynomials defined for each interval.
# You have $5,400 to deposit. If you deposit the money in a savings account at your local bank, you will earn 1.49% annual interest and will be able to make ATM withdrawals at your bank's ATMs. If you deposit the money in an online savings account, you will earn 4.44% annual interest, but you will be charged $5 every time you make an ATM withdrawal. Assuming that your ATM withdrawals do not reduce the amount of interest you earn, roughly how many times a year can you make an ATM withdrawal in order for the local savings account to be a better deal than the online savings account? a. 32 b. 27 c. 22 d. 15
QUESTION POSTED AT 29/05/2020 - 03:22 PM
Interest at the local bank: $5,400 × 0.0149 = $80.46
Interest at the online bank: $5,400 × 0.0444 = $239.76
Extra interest earned online: $239.76 − $80.46 = $159.30
Break-even point: $159.30 ÷ ($5/withdrawal) = 31.86 withdrawals

At 32 or more withdrawals a year, the $5 fees exceed the extra interest, so the local savings account is the better deal. The answer is a. 32.
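The same arithmetic, scripted (a short Python sketch for illustration):

```python
balance = 5400.00
local_rate, online_rate = 0.0149, 0.0444
fee = 5.00   # per ATM withdrawal at the online bank

local_interest = balance * local_rate       # 80.46
online_interest = balance * online_rate     # 239.76
extra = online_interest - local_interest    # 159.30

breakeven = extra / fee                     # withdrawals per year
print(round(breakeven, 2))        # 31.86

# Once yearly withdrawals reach 32, the fees wipe out the extra
# interest, so the local account becomes the better deal (answer a).
print(32 * fee > extra)           # True
```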
https://ru.scribd.com/document/87233962/Transformer | 1,566,693,036,000,000,000 | text/html | crawl-data/CC-MAIN-2019-35/segments/1566027322160.92/warc/CC-MAIN-20190825000550-20190825022550-00088.warc.gz | 611,965,431 | 59,726 | You are on page 1of 15
# TRANSFORMER is a device that transfers electrical energy from one circuit to another by electromagnetic induction (transformer action).
The electrical energy is always transferred without a change in frequency, but may involve changes in magnitudes of voltage and current. Because a transformer works on the principle of electromagnetic induction, it must be used with an input source voltage that varies in amplitude. There are many types of power that fit this description; for ease of explanation and understanding, transformer action will be explained using an ac voltage as the input source.
You learned that alternating current has certain advantages over direct current. One important advantage is that when ac is used, the voltage and current levels can be increased or decreased by means of a transformer.
As you know, the amount of power used by the load of an electrical circuit is equal to the current in the load times the voltage across the load, or P = EI. If, for example, the load in an electrical circuit requires an input of 2 amperes at 10 volts (20 watts) and the source is capable of
delivering only 1 ampere at 20 volts, the circuit could not normally be used with this particular source. However, if a transformer is connected between the source and the load, the voltage can be decreased (stepped down) to 10 volts and the current increased (stepped up) to 2 amperes. Notice in the above case that the power remains the same. That is, 20 volts times 1 ampere equals the same power as 10 volts times 2 amperes.
BASIC OPERATION OF A TRANSFORMER In its most basic form a transformer consists of: * A primary coil or winding. * A secondary coil or winding. * A core that supports the coils or windings.
Refer to the transformer circuit in figure (1) as you read the following explanation: The primary winding is connected to a 50 hertz ac voltage source. The magnetic field (flux) builds up (expands) and collapses (contracts) about the primary winding. The expanding and contracting magnetic field around the primary winding cuts the secondary winding and induces an alternating voltage into the winding. This voltage causes alternating current to flow through the load. The voltage may be stepped up or down depending on the design of the primary and secondary windings.
Figure (1). - Basic transformer action THE COMPONENTS OF A TRANSFORMER Two coils of wire (called windings) are wound on some type of core material. In some cases the coils of wire are wound on a cylindrical or rectangular cardboard form. In effect, the core material is air and the transformer is called an AIR-CORE
TRANSFORMER. Transformers used at low frequencies, such as 50 hertz and 400 hertz, require a core of low-reluctance magnetic material, usually iron. This type of transformer is called an IRON-CORE TRANSFORMER. Most power transformers are of the iron-core type. The principle parts of a transformer and their functions are: * The CORE, which provides a path for the magnetic lines of flux. * The PRIMARY WINDING, which receives energy from the ac source. * The SECONDARY WINDING, which receives energy from the primary winding and delivers it to the load. * The ENCLOSURE, which protects the above components from dirt, moisture, and mechanical damage.
CORE CHARACTERISTICS
The composition of a transformer core depends on such factors as voltage, current, and frequency. Size limitations and construction costs are also factors to be considered.
Commonly used core materials are air, soft iron, and steel. Each of these materials is suitable for particular applications and unsuitable for others. Generally, air-core transformers are used when the voltage source has a high frequency (above 20 kHz). Iron-core transformers are usually used when the source frequency is low (below 20 kHz). A soft-iron-core transformer is very useful where the transformer must be physically small, yet efficient. The iron-core transformer provides better power transfer than does the air-core transformer. A transformer whose core is constructed of laminated sheets of steel dissipates heat readily; thus it provides for the efficient transfer of power. The majority of transformers you will encounter in Navy equipment contain laminated-steel cores. These steel laminations (see figure 2) are insulated with a non conducting material, such as varnish, and then formed into a core. It takes about 50 such laminations to make a core an inch thick. The purpose of the laminations is to reduce certain losses which will be discussed later in this part. An important point to remember is that the most efficient transformer core is one that offers the best path for the most lines of flux with the least loss in magnetic and electrical energy.
Figure (2). - Hollow-core construction. Core Type Transformers There are two main shapes of cores used in laminated-steel-core transformers. One is the CORE Type, so named because the core is shaped with a hollow square through the center. Figure 5-2illustrates this shape of core. Notice that the core is made up of many laminations of steel. Figure (3) illustrates how the transformer windings are wrapped around both sides of the core.
Figure (3). - Windings wrapped around laminations Shell-Core Transformers The most popular and efficient transformer core is the SHELL CORE, as illustrated in figure(4). As shown, each layer of the core consists of Eand I-shaped sections of metal. These sections are butted together to form the laminations. The laminations are insulated from each other and then pressed together to form the core.
Figure (4). - Shell-type core construction TRANSFORMER WINDINGS As stated above, the transformer consists of two coils called WINDINGS which are wrapped around a core. The transformer operates when a source of ac voltage is connected to one of the windings and a load device is connected to the other. The winding that is connected to the source is called the PRIMARY WINDING. The winding that is connected to the load is called the SECONDARY WINDING. (Note: In this part the terms "primary winding" and "primary" are used interchangeably; the term: "secondary winding" and "secondary" are also used interchangeably.) Figure (5) shows an exploded view of a shell-type transformer. The primary is wound in layers directly on a rectangular cardboard form.
Figure (5). - Exploded view of shell-type transformer construction. In the transformer shown in the cutaway view in figure (6), the primary consists of many turns of relatively small wire. The wire is coated with varnish so that each turn of the winding is insulated from every other turn. In a transformer designed for high-voltage applications, sheets of insulating material, such as paper, are placed between the layers of windings to provide additional insulation.
Figure (6). - Cutaway view of shell-type core with windings When the primary winding is completely wound, it is wrapped in insulating paper or cloth. The secondary winding is then wound on top of the primary winding. After the secondary winding is complete, it too is covered with insulating paper. Next, the E and I sections of the iron core are inserted into and around the windings as shown. The leads from the windings are normally brought out through a hole in the enclosure of the transformer. Sometimes, terminals may be provided on the enclosure for connections to the windings. The figure shows four leads, two from the primary and two from the secondary. These leads are to be connected to the source and load, respectively.
SCHEMATIC SYMBOLS FOR TRANSFORMERS Figure (7) shows typical schematic symbols for transformers. The symbol for an air-core transformer is shown in figure (7-A). Parts (B) and (C)
show iron-core transformers. The bars between the coils are used to indicate an iron core. Frequently, additional connections are made to the transformer windings at points other than the ends of the windings. These additional connections are called TAPS. When a tap is connected to the center of the winding, it is called a CENTER TAP. Figure (7- C) shows the schematic representation of a center-tapped iron-core transformer.
## Figure (7). - Schematic symbols for various types of transformers
HOW A TRANSFORMER WORKS Up to this point the part has presented the basics of the transformer including transformer action, the transformer's physical characteristics, and how the transformer is constructed. Now you have the necessary knowledge to proceed into the theory of operation of a transformer.
NO-LOAD CONDITION You have learned that a transformer is capable of supplying voltages which are usually higher or lower than the source voltage. This is accomplished through mutual induction, which takes place when the changing magnetic field produced by the primary voltage cuts the secondary winding. A no-load condition is said to exist when a voltage is applied to the primary, but no load is connected to the secondary, as illustrated by figure ( . Because of the open switch, there is no current flowing in the secondary winding. With the switch open and an ac voltage applied to the primary, there is, however, a very small amount of current called EXCITING CURRENT flowing in the primary. Essentially, what the exciting current does is "excite" the coil of the primary to create a magnetic field. The amount of exciting current is determined by three factors: (1) the amount of voltage applied (Ea), (2) the resistance (R) of the primary coil's wire and core losses, and (3) the XL which is dependent on the frequency of the exciting current. These last two factors are controlled by transformer design.
Figure ( . - Transformer under no-load conditions This very small amount of exciting current serves two functions: * Most of the exciting energy is used to maintain the magnetic field of the primary. * A small amount of energy is used to overcome the resistance of the wire and core losses which are dissipated in the form of heat (power loss).
Exciting current will flow in the primary winding at all times to maintain this magnetic field, but no transfer of energy will take place as long as the secondary circuit is open.
PRODUCING A COUNTER EMF When an alternating current flows through a primary winding, a magnetic field is established around the winding. As the lines of flux expand outward, relative motion is present, and a counter emf is induced in the winding. This is the same counter emf that you learned about in the part on inductors. Flux leaves the primary at the north pole and enters the primary at the south pole. The counter emf induced in the primary has a polarity that opposes the applied voltage, thus opposing the flow of current in the primary. It is the counter emf that limits exciting current to a very low value.
INDUCING A VOLTAGE IN THE SECONDARY To visualize how a voltage is induced into the secondary winding of a transformer, again refer to figure ( . As the exciting current flows through the primary, magnetic lines of force are generated. During the time current is increasing in
the primary, magnetic lines of force expand outward from the primary and cut the secondary. As you remember, a voltage is induced into a coil when magnetic lines cut across it. Therefore, the voltage across the primary causes a voltage to be induced across the secondary.
## PRIMARY AND SECONDARY PHASE RELATIONSHIP
The secondary voltage of a simple transformer may be either in phase or out of phase with the primary voltage. This depends on the direction in which the windings are wound and the arrangement of the connections to the external circuit (load). Simply, this means that the two voltages may rise and fall together or one may rise while the other is falling. Transformers in which the secondary voltage is in phase with the primary are referred to as LIKE-WOUND transformers, while those in which the voltages are 180 degrees out of phase are called UNLIKE-WOUND transformers. Dots are used to indicate points on a transformer schematic symbol that have the same instantaneous polarity (points that are in phase). The use of phase-indicating dots is illustrated in figure (9). In part (A) of the figure, both the primary and secondary windings are wound from top to bottom in a clockwise direction, as viewed from above the windings. When constructed in this manner, the top lead of the primary and the top lead of the secondary have the SAME polarity. This is indicated by the dots on the transformer symbol. A lack of phasing dots indicates a reversal of polarity.
Figure (9). - Instantaneous polarity depends on direction of winding Part (B) of the figure illustrates a transformer in which the primary and secondary are wound in opposite directions. As viewed from above the windings, the primary is wound in a clockwise direction from top to bottom, while the secondary is wound in a counterclockwise direction. Notice that the top leads of the primary and secondary have OPPOSITE polarities. This is indicated by the dots being placed on opposite ends of the transformer symbol. Thus, the polarity of the voltage at the terminals of the secondary of a transformer depends on the direction in which the secondary is wound with respect to the primary.
COEFFICIENT OF COUPLING
The COEFFICIENT OF COUPLING of a transformer is dependent on the portion of the total flux lines that cuts both primary and secondary windings. Ideally, all the flux lines generated by the primary should cut the secondary, and all the lines of the flux generated by the secondary should cut the primary. The coefficient of coupling would then be one (unity), and maximum energy would be transferred from the primary to the secondary. Practical power transformers use high-permeability silicon steel cores and close spacing between the windings to provide a high coefficient of coupling. Lines of flux generated by one winding which do not link with the other winding are called LEAKAGE FLUX. Since leakage flux generated by the primary does not cut the secondary, it cannot induce a voltage into the secondary. The voltage induced into the secondary is therefore less than it would be if the leakage flux did not exist. Since the effect of leakage flux is to lower the voltage induced into the secondary, the effect can be duplicated by assuming an inductor to be connected in series with the primary. This series LEAKAGE INDUCTANCE is assumed to drop part of the applied voltage, leaving less voltage across the primary. | 2,950 | 14,511 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.65625 | 4 | CC-MAIN-2019-35 | latest | en | 0.924159 |
http://www.talkstats.com/showthread.php/70421-Tires-Probability-Question | 1,511,381,495,000,000,000 | text/html | crawl-data/CC-MAIN-2017-47/segments/1510934806660.82/warc/CC-MAIN-20171122194844-20171122214844-00640.warc.gz | 511,411,314 | 11,123 | 1. ## Tires Probability Question
Question:
You recently bought a new set of four tires from a manufacturer who just announced a recall because 2% of that particular brand of tires are defective. What is the probability that at least one of your tires is defective? You may assume that the tires are defective independently of one another.
Since a tire cannot have one tire defective and two tires defective at the same time, I considered the events to be disjoint. I used the addition rule:
P(1 tire is defective) + P(2 tires are defective) + P(3 tires are defective) + P(4 tires are defective)
To find the probability of two tires being defective, I multiplied .02*.02. To find the probability that three tires being defective, I multiplied .02*.02*.02. And, so on for the four tires.. This is because the problem stated that they are independent.
2. ## Re: Tires Probability Question
.02*.02 gives you the probability that the first two tires you look at are both defective. You haven't specified what happened with the other two tires - they could be good it they could be bad. But if you're calculating the probability that exactly two are bad then you can't allow the other two to be bad. You also don't Account for how many ways there could be two bad tires. It could be the front two. It could be the back two... And so on. Hopefully this helps you understand some of the shortcomings of your answer
Tweet | 310 | 1,419 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.921875 | 4 | CC-MAIN-2017-47 | latest | en | 0.977033 |
https://discourse.julialang.org/t/performance-issue-with-multithreaded-computation-with-matrix-operations-at-its-heart-threads-threads-vs-blas-threads/106043 | 1,701,520,282,000,000,000 | text/html | crawl-data/CC-MAIN-2023-50/segments/1700679100399.81/warc/CC-MAIN-20231202105028-20231202135028-00406.warc.gz | 244,994,413 | 7,935 | While trying to speed up a long computation with chunking more threads on it, I ran into an interesting problem. I found several related threads here, but I couldn’t find a convincing answer, nor a solution to my issue.
Here’s an MVP of my issue. Consider the following definitions:
``````julia> using LinearAlgebra, BenchmarkTools
julia> ts = 1:10; xs = rand(1000, 10);
julia> function fit(x, y, degree)
return qr([x_n ^ k for x_n in x, k in 0:degree]) \ y
end
``````
And then I run the following measurements, with `JULIA_NUM_THREADS=8`:
``````julia> @benchmark (for row in \$(collect(eachrow(xs))); fit(ts, row, 3); end)
BenchmarkTools.Trial: 1156 samples with 1 evaluation.
Range (min … max): 4.181 ms … 6.373 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 4.228 ms ┊ GC (median): 0.00%
Time (mean ± σ): 4.326 ms ± 284.965 μs ┊ GC (mean ± σ): 0.76% ± 3.36%
▄██▃▁ ▁
█████▆▆▅▆▅▅▅▄▅▅▄▄▄▅▁▆██▆▆▅▆█▅▄▄▁▅▁▄▅▁▄▄▁▅▄▄▅▅▇▇▆▆▅▄▅▁▅▁▅▁▄▅ █
4.18 ms Histogram: log(frequency) by time 5.48 ms <
Memory estimate: 1.50 MiB, allocs estimate: 8000.
julia> @benchmark @threads (for row in \$(collect(eachrow(xs))); fit(ts, row, 3); end)
BenchmarkTools.Trial: 314 samples with 1 evaluation.
Range (min … max): 12.338 ms … 69.561 ms ┊ GC (min … max): 0.00% … 77.62%
Time (median): 15.782 ms ┊ GC (median): 0.00%
Time (mean ± σ): 15.939 ms ± 3.073 ms ┊ GC (mean ± σ): 1.08% ± 4.38%
▂▂▃█ ▁▄▃▆▄
▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▂▁▁▃▃▂▁▂▃▁▃▃▅▃▆██████████▅▂▃▁▁▂▁▁▁▁▂ ▃
12.3 ms Histogram: frequency by time 17.1 ms <
Memory estimate: 1.50 MiB, allocs estimate: 8049.
``````
Notice how the execution time went up, even though the computation runs on 8 threads instead of 1. If I observe CPU usage in `htop`, it is obvious that most of the resources are wasted in system calls, most of the bars are red. I found suggestions in various threads that calling `BLAS.set_num_threads(1)` could improve the situation, but in my case, it has no visible effect, I get identical results.
I’m guessing that the Julia threads compete for the LAPACK library calls, which thus constitutes a bottleneck, but I don’t know how to get around this issue. Ideally, the (more elaborate) computation would be running on a 64 core CPU, which is currently sitting mostly idle, because I can only run this on a single thread.
Any ideas?
Okay, I found something interesting. If I remove the `qr` decomposition from `fit`, the single-threaded solution slows down by a factor of 2, but the multi-threaded solution suddenly becomes a lot more efficient:
``````julia> function fit(x, y, degree)
return [x_n ^ k for x_n in x, k in 0:degree] \ y
end
julia> @benchmark (for row in \$(collect(eachrow(xs))); fit(ts, row, 3); end)
BenchmarkTools.Trial: 568 samples with 1 evaluation.
Range (min … max): 7.684 ms … 11.128 ms ┊ GC (min … max): 15.37% … 14.02%
Time (median): 8.694 ms ┊ GC (median): 15.73%
Time (mean ± σ): 8.803 ms ± 628.737 μs ┊ GC (mean ± σ): 20.52% ± 5.92%
▂▄█▆▅▄▆█▆▄ ▂ ▁▃▄▆▅▄ ▄ ▂
▄▃▄▄▄▅████████████▄▄▆▃▄█▆▆▇██████▇███▄▅▅▄▄▂▆▄▄▄▂▂▂▁▁▂▁▂▁▁▃▃ ▄
7.68 ms Histogram: frequency by time 10.6 ms <
Memory estimate: 69.03 MiB, allocs estimate: 45000.
julia> @benchmark @threads (for row in \$(collect(eachrow(xs))); fit(ts, row, 3); end)
BenchmarkTools.Trial: 2620 samples with 1 evaluation.
Range (min … max): 1.028 ms … 24.744 ms ┊ GC (min … max): 0.00% … 11.35%
Time (median): 1.375 ms ┊ GC (median): 0.00%
Time (mean ± σ): 1.903 ms ± 1.430 ms ┊ GC (mean ± σ): 25.81% ± 25.75%
▂▄▅▅██▆▂▁▁ ▁▁ ▁▄▅▃ ▁
▇██████████▇▇▇▆▇▇▆▅▄▃▃▁▁▄▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▅█████████▇▆▅ █
1.03 ms Histogram: log(frequency) by time 4.3 ms <
Memory estimate: 69.03 MiB, allocs estimate: 45049.
``````
So what is going on with `qr` that makes it particularly inefficient when invoked from multiple threads in parallel?
I have several hypotheses here:
• maybe it’s the additional allocations triggered by `qr` which render multithreading inefficient
• maybe it’s linked to you benchmarking with globals
• maybe it’s the macros `@benchmark` and `@threads` which don’t play nice together (unlikely)
You’re matrices are (too) tiny.
``````function serial(mat, vec)
for i in 1:1000
qr(mat) \ vec
end
end
qr(mat) \ vec
end
end
mat = rand(10,4)
vec = rand(10)
@btime serial(\$mat, \$vec) # 6.052 ms (7000 allocations: 1.10 MiB)
@btime threaded(\$mat, \$vec) # 17.643 ms (7031 allocations: 1.10 MiB)
mat2 = rand(1000,100)
vec2 = rand(1000)
@btime serial(\$mat2, \$vec2) # 2.804 s (10000 allocations: 827.50 MiB)
@btime threaded(\$mat2, \$vec2) # 1.024 s (10031 allocations: 827.50 MiB)
``````
1 Like
That may be so, but what can I do if I need to least square fit 3rd degree polynomials on 10 points. These are the matrices I have to work with.
As a workaround, I can remove the qr transformation, yielding the same result.
Use StaticArrays.jl?
1 Like
I don’t see how that would help here, or even how it would work. For one thing, the size of the Vandermonde matrix above isn’t known at compile time, since the number of points to fit and the degree of the desired polynomial is only known at runtime. Also, the StaticArrays.jl manual suggests to use static arrays only below 100 items, and in this case, I can easily have more than that. The 3 and 10 are just specific examples.
My bad, when you said “these are the matrices I have to work with” I thought they would always look like that | 2,098 | 5,562 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.515625 | 3 | CC-MAIN-2023-50 | latest | en | 0.736477 |
http://michaelnielsen.org/polymath1/index.php?title=Controlling_A%2BB/B_0&diff=prev&oldid=10323 | 1,537,806,154,000,000,000 | text/html | crawl-data/CC-MAIN-2018-39/segments/1537267160568.87/warc/CC-MAIN-20180924145620-20180924170020-00352.warc.gz | 163,513,712 | 11,705 | # Difference between revisions of "Controlling A+B/B 0"
Some numerical data on $|A+B/B_0|$ source and also $\mathrm{Re} \frac{A+B}{B_0}$ source, using a step size of 1 for $x$, suggesting that this ratio tends to oscillate roughly between 0.5 and 3 for medium values of $x$:
| range of $x$ | minimum value | max value | average value | standard deviation | min real part | max real part |
|---|---|---|---|---|---|---|
| 0-1000 | 0.179 | 4.074 | 1.219 | 0.782 | -0.09 | 4.06 |
| 1000-2000 | 0.352 | 4.403 | 1.164 | 0.712 | 0.02 | 4.43 |
| 2000-3000 | 0.352 | 4.050 | 1.145 | 0.671 | 0.15 | 3.99 |
| 3000-4000 | 0.338 | 4.174 | 1.134 | 0.640 | 0.34 | 4.48 |
| 4000-5000 | 0.386 | 4.491 | 1.128 | 0.615 | 0.33 | 4.33 |
| 5000-6000 | 0.377 | 4.327 | 1.120 | 0.599 | 0.377 | 4.327 |
| $1-10^5$ | 0.179 | 4.491 | 1.077 | 0.455 | -0.09 | 4.48 |
| $10^5-2 \times 10^5$ | 0.488 | 3.339 | 1.053 | 0.361 | 0.48 | 3.32 |
| $2 \times 10^5-3 \times 10^5$ | 0.508 | 3.049 | 1.047 | 0.335 | 0.50 | 3.00 |
| $3 \times 10^5-4 \times 10^5$ | 0.517 | 2.989 | 1.043 | 0.321 | 0.52 | 2.97 |
| $4 \times 10^5-5 \times 10^5$ | 0.535 | 2.826 | 1.041 | 0.310 | 0.53 | 2.82 |
| $5 \times 10^5-6 \times 10^5$ | 0.529 | 2.757 | 1.039 | 0.303 | 0.53 | 2.75 |
| $6 \times 10^5-7 \times 10^5$ | 0.548 | 2.728 | 1.038 | 0.296 | 0.55 | 2.72 |
Here is a computation on the magnitude $|\frac{d}{dx}(B'/B'_0)|$ of the derivative of $B'/B'_0$, sampled at steps of 1 in $x$ source, together with a crude upper bound coming from the triangle inequality source, to give some indication of the oscillation:
range of $T=x/2$ max value average value standard deviation triangle inequality bound
0-1000 1.04 0.33 0.19
1000-2000 1.25 0.39 0.24
2000-3000 1.31 0.39 0.25
3000-4000 1.39 0.38 0.27
4000-5000 1.64 0.37 0.26
5000-6000 1.60 0.36 0.27
6000-7000 1.61 0.36 0.26
7000-8000 1.55 0.36 0.27
8000-9000 1.65 0.34 0.26
9000-10000 1.47 0.34 0.26
$1-10^5$ 1.78 0.28 0.23 2.341
$10^5-2 \times 10^5$ 1.66 0.22 0.18 2.299
$2 \times 10^5-3 \times 10^5$ 1.55 0.20 0.17 2.195
$3 \times 10^5-4 \times 10^5$ 1.53 0.19 0.16 2.109
$4 \times 10^5-5 \times 10^5$ 1.31 0.18 0.15 2.039
$5 \times 10^5-6 \times 10^5$ 1.34 0.18 0.14
$6 \times 10^5-7 \times 10^5$ 1.33 0.17 0.14
In the toy case, we have
$\frac{|A^{toy}+B^{toy}|}{|B^{toy}_0|} \geq |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}|$
where $b_n := \exp( \frac{t}{4} \log^2 n)$, $a_n := (n/N)^{y} b_n$, and $s := \frac{1+y+ix}{2} + \frac{t}{2} \log N + \frac{\pi i t}{8}$. For the effective approximation one can write
$\frac{A^{eff}+B^{eff}}{B^{eff}_0} = \sum_{n=1}^N \frac{b_n}{n^{s_B}} + \lambda \sum_{n=1}^N \frac{b_n}{n^{s_A}}$
where now $b_n := \exp( \frac{t}{4} \log^2 n)$, $s_B := \frac{1+y-ix}{2} + \frac{t}{2} \alpha_1(\frac{1+y-ix}{2})$, $s_A := \frac{1-y+ix}{2} + \frac{t}{2} \alpha_1(\frac{1-y+ix}{2})$, and
$\lambda := \frac{\exp( \frac{t}{4} \alpha_1(\frac{1-y+ix}{2})^2 ) H_{0,1}( \frac{1-y+ix}{2} )}{ \overline{\exp( \frac{t}{4} \alpha_1(\frac{1+y+ix}{2})^2 ) H_{0,1}( \frac{1+y+ix}{2} )} }.$
In particular one has
$\frac{|A^{eff}+B^{eff}|}{|B^{eff}_0|} \geq |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \quad (2.1)$
where $b_n := \exp( \frac{t}{4} \log^2 n)$, $s := \frac{1+y+ix}{2} + \frac{t}{2} \alpha_1(\frac{1+y+ix}{2})$, and
$a_n := |\lambda| n^{y - \frac{t}{2} \alpha_1(\frac{1-y+ix}{2}) + \frac{t}{2} \alpha_1(\frac{1+y+ix}{2})} b_n.$
It is thus of interest to obtain lower bounds for expressions of the form
$|\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}|$
in situations where $b_1=1$ is expected to be a dominant term.
From the triangle inequality one obtains the lower bound
$|\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \geq 1 - |a_1| - \sum_{n=2}^N \frac{|a_n|+|b_n|}{n^\sigma}$
where $\sigma := \frac{1+y}{2} + \frac{t}{2} \log N$ is the real part of $s$. There is a refinement:
Lemma 1 If $a_n,b_n$ are real coefficients with $b_1 = 1$ and $0 \leq a_1 \lt 1$ we have
$|\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \geq 1 - a_1 - \sum_{n=2}^N \frac{\max( |b_n-a_n|, \frac{1-a_1}{1+a_1} |b_n+a_n|)}{n^\sigma}.$
Proof By a continuity argument we may assume without loss of generality that the left-hand side is positive, then we may write it as
$|\sum_{n=1}^N \frac{b_n - e^{i\theta} a_n}{n^s}|$
for some phase $\theta$. By the triangle inequality, this is at least
$|1 - e^{i\theta} a_1| - \sum_{n=2}^N \frac{|b_n - e^{i\theta} a_n|}{n^\sigma}.$
We factor out $|1 - e^{i\theta} a_1|$, which is at least $1-a_1$, to obtain the lower bound
$(1-a_1) (1 - \sum_{n=2}^N \frac{|b_n - e^{i\theta} a_n| / |1 - e^{i\theta} a_1|}{n^\sigma}).$
By the cosine rule, we have
$(|b_n - e^{i\theta} a_n| / |1 - e^{i\theta} a_1|)^2 = \frac{b_n^2 + a_n^2 - 2 a_n b_n \cos \theta}{1 + a_1^2 -2 a_1 \cos \theta}.$
This is a fractional linear function of $\cos \theta$ with no poles in the range $[-1,1]$ of $\cos \theta$. Thus this function is monotone on this range and attains its maximum at either $\cos \theta=+1$ or $\cos \theta = -1$. We conclude that
$\frac{|b_n - e^{i\theta} a_n|}{|1 - e^{i\theta} a_1|} \leq \max( \frac{|b_n-a_n|}{1-a_1}, \frac{|b_n+a_n|}{1+a_1} )$
and the claim follows.
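As a sanity check (a sketch, not part of the argument), the bound of Lemma 1 can be tested against randomly drawn real coefficients. Following the continuity argument in the proof, the bound is applied in the regime where it is nonnegative, and the check below restricts to that regime:

```python
import random

# Randomized spot-check of Lemma 1: for real coefficients with b_1 = 1
# and 0 <= a_1 < 1, whenever the stated lower bound is nonnegative it
# should not exceed |sum b_n/n^s| - |sum a_n/n^s|.
random.seed(0)

def check_lemma1(N=8, trials=2000):
    checked = 0
    for _ in range(trials):
        b = [1.0] + [random.uniform(-1, 1) for _ in range(N - 1)]
        a = [random.uniform(0.0, 0.99)] + [random.uniform(-1, 1) for _ in range(N - 1)]
        sigma = random.uniform(1.0, 3.0)
        s = complex(sigma, random.uniform(-50.0, 50.0))
        lhs = abs(sum(b[n] * (n + 1) ** (-s) for n in range(N))) \
            - abs(sum(a[n] * (n + 1) ** (-s) for n in range(N)))
        bound = 1 - a[0] - sum(
            max(abs(b[n] - a[n]),
                (1 - a[0]) / (1 + a[0]) * abs(b[n] + a[n])) / (n + 1) ** sigma
            for n in range(1, N))
        if bound >= 0:                 # the regime in which the lemma is used
            assert lhs >= bound - 1e-9, (lhs, bound)
            checked += 1
    return checked

print(check_lemma1())  # number of draws that landed in the bound >= 0 regime
```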
We can also mollify the $a_n,b_n$:
Lemma 2 If $\lambda_1,\dots,\lambda_D$ are complex numbers, then
$|\sum_{d=1}^D \frac{\lambda_d}{d^s}| (|\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}|) = ( |\sum_{n=1}^{DN} \frac{\tilde b_n}{n^s}| - |\sum_{n=1}^{DN} \frac{\tilde a_n}{n^s}| )$
where
$\tilde a_n := \sum_{d=1}^D 1_{n \leq dN} 1_{d|n} \lambda_d a_{n/d}$
$\tilde b_n := \sum_{d=1}^D 1_{n \leq dN} 1_{d|n} \lambda_d b_{n/d}$
Proof This is immediate from the Dirichlet convolution identities
$(\sum_{d=1}^D \frac{\lambda_d}{d^s}) \sum_{n=1}^N \frac{a_n}{n^s} = \sum_{n=1}^{DN} \frac{\tilde a_n}{n^s}$
and
$(\sum_{d=1}^D \frac{\lambda_d}{d^s}) \sum_{n=1}^N \frac{b_n}{n^s} = \sum_{n=1}^{DN} \frac{\tilde b_n}{n^s}.$
$\Box$
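The convolution identity in the proof can be checked numerically for random data; a small sketch, with arbitrary sample values of $N$, $D$ and $s$:

```python
import random

# Direct numerical check of the Dirichlet-convolution identity behind
# Lemma 2, with random coefficients and an arbitrary sample value of s.
random.seed(1)
N, D = 12, 4
b   = {n: random.uniform(-1, 1) for n in range(1, N + 1)}
lam = {d: random.uniform(-1, 1) for d in range(1, D + 1)}
s = complex(1.3, 7.0)

lhs = sum(lam[d] * d ** (-s) for d in lam) * sum(b[n] * n ** (-s) for n in b)

# tilde b_n = sum_{d <= D} 1_{n <= dN} 1_{d | n} lambda_d b_{n/d}
tilde = {n: sum(lam[d] * b[n // d]
                for d in range(1, D + 1)
                if n % d == 0 and n <= d * N)
         for n in range(1, D * N + 1)}
rhs = sum(tilde[n] * n ** (-s) for n in tilde)

print(abs(lhs - rhs) < 1e-9)  # True
```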
Combining the two lemmas, we see for instance that we can show $|\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \gt 0$ whenever we can find real $\lambda_1,\dots,\lambda_D$ with $\lambda_1=1$ and

$\sum_{n=2}^{DN} \frac{\max( \frac{|\tilde b_n-\tilde a_n|}{1-a_1}, \frac{|\tilde b_n+ \tilde a_n|}{1+a_1})}{n^\sigma} \lt 1.$
A usable choice of mollifier seems to be the Euler products
$\sum_{d=1}^D \frac{\lambda_d}{d^s} := \prod_{p \leq P} (1 - \frac{b_p}{p^s})$
which are designed to kill off the first few $\tilde b_n$ coefficients.
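To illustrate the cancellation, one can expand this Euler product into its Dirichlet coefficients $\lambda_d$ and observe that the mollified coefficients $\tilde b_n$ vanish for $n=2,3$ when $P=3$; a sketch, with $t = 0.4$ an assumed sample value:

```python
import math

# Expanding prod_{p <= P} (1 - b_p p^{-s}) into Dirichlet coefficients
# lambda_d, and checking that the mollified tilde b_n vanish for n = 2, 3
# when P = 3: the "kill off" effect.  t = 0.4 is an assumed sample value.
t = 0.4

def b(n):
    return math.exp((t / 4) * math.log(n) ** 2)

lam = {1: 1.0}
for p in (2, 3):                    # the primes p <= P with P = 3
    lam = {d * q: c * w for d, c in lam.items()
           for q, w in ((1, 1.0), (p, -b(p)))}
D = max(lam)                        # here D = 6

def tilde_b(n, N):
    return sum(c * b(n // d) for d, c in lam.items()
               if n % d == 0 and n <= d * N)

N = 100
print([tilde_b(n, N) for n in (2, 3)])  # [0.0, 0.0]
print(tilde_b(4, N))  # b_4 - b_2^2, nonzero since b_n is not multiplicative
```

The cancellation is exact only for the first few $n$: since $b_n = \exp(\frac{t}{4}\log^2 n)$ is not multiplicative, coefficients such as $\tilde b_4 = b_4 - b_2^2$ are merely small rather than zero.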
### Analysing the toy model
With regard to the toy problem of showing that $A^{toy}+B^{toy}$ does not vanish, here are the least values of $N$ for which this method works source source source source source:
| $P$ in Euler product | $N$ using triangle inequality | $N$ using Lemma 1 | $N$ using Lemma 1 and a custom mollifier |
|---|---|---|---|
| 1 | 1391 | 1080 | |
| 2 | 478 | 341 | 336 |
| 3 | 322 | 220 | 212 |
| 5 | 282 | 192 | 182 |
| 7 | | 180 | |
| 11 | | 176 | |
Dropping the $\lambda_6$ term from the $P=3$ Euler factor worsens the 220 threshold slightly to 235 source. However, other custom mollifiers do work (see above table).
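The triangle-inequality column above can be probed directly from the bound $1 - a_1 - \sum_{n=2}^N \frac{a_n+b_n}{n^\sigma}$; a minimal sketch in Python, assuming the sample parameter choice $t = y = 0.4$ (this section does not fix $t$ and $y$):

```python
import math

# Toy-model triangle-inequality criterion.  t = y = 0.4 is an ASSUMED
# sample parameter choice, not fixed in this section.
t, y = 0.4, 0.4

def triangle_margin(N):
    """1 - a_1 - sum_{n=2}^N (a_n + b_n)/n^sigma for the toy model."""
    sigma = (1 + y) / 2 + (t / 2) * math.log(N)
    a1 = N ** (-y)                      # a_1 = (1/N)^y, and b_1 = 1
    total = 0.0
    for n in range(2, N + 1):
        b = math.exp((t / 4) * math.log(n) ** 2)
        a = (n / N) ** y * b
        total += (a + b) / n ** sigma
    return 1 - a1 - total

# Negative margin: the criterion fails; positive margin: A^toy + B^toy is
# certified nonzero.  (Per the table above, the least N for which the
# triangle inequality works is 1391.)
print(triangle_margin(500) < 0, triangle_margin(2000) > 0)
```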
### Analysing the effective model
The differences between the toy model and the effective model are:
• The real part $\sigma$ of $s$ is now $\frac{1+y}{2} + \frac{t}{2} \mathrm{Re} \alpha_1(\frac{1+y+ix}{2})$ rather than $\frac{1+y}{2} + \frac{t}{2} \log N$. (The imaginary part of $s$ also changes somewhat.)
• The coefficient $a_n$ is now given by
$a_n = |\lambda| n^{y + \frac{t}{2} (\alpha_1(\frac{1+y+ix}{2}) - \alpha_1(\frac{1-y+ix}{2}))} b_n$
rather than $a_n = N^{-y} n^y b_n$, where
$|\lambda| = |\frac{\exp( \frac{t}{4} \alpha_1(\frac{1-y+ix}{2})^2 ) H_{0,1}( \frac{1-y+ix}{2})}{\exp( \frac{t}{4} \alpha_1(\frac{1+y+ix}{2})^2 ) H_{0,1}( \frac{1+y+ix}{2})}|.$
Two complications arise here compared with the toy model: firstly, $\sigma,a_n$ now depend on $x$ and not just on $N$, and secondly the $a_n$ are not quite real-valued making it more difficult to apply Lemma 1.
However we have good estimates for $\sigma,a_n$ that depend only on $N$. Note that
$2\pi N^2 \leq T' \lt 2\pi (N+1)^2$
and hence
$x_N \leq x \lt x_{N+1}$
where
$x_N := 4\pi N^2 - \frac{\pi t}{4}.$
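The bracketing $x_N \leq x \lt x_{N+1}$ can be inverted to recover $N$ from $x$; a small sketch, with $t = 0.4$ an assumed sample value:

```python
import math

# Inverting x_N := 4 pi N^2 - pi t/4: for a given x, the N with
# x_N <= x < x_{N+1} is floor(sqrt((x + pi t/4) / (4 pi))).
# t = 0.4 is an assumed sample value.
t = 0.4

def N_of(x):
    return int(math.sqrt((x + math.pi * t / 4) / (4 * math.pi)))

def x_N(N):
    return 4 * math.pi * N ** 2 - math.pi * t / 4

x = 10 ** 5
N = N_of(x)
print(N, x_N(N) <= x < x_N(N + 1))  # 89 True
```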
To control $\sigma$, it suffices to obtain lower bounds because our criteria (both the triangle inequality and Lemma 1) become harder to satisfy when $\sigma$ decreases. We compute
$\sigma = \frac{1+y}{2} + \frac{t}{2} \mathrm{Re}(\frac{1}{1+y+ix} + \frac{2}{-1+y+ix} + \frac{1}{2} \log \frac{1+y+ix}{4\pi})$
$= \frac{1+y}{2} + \frac{t}{2} (\frac{1+y}{(1+y)^2+x^2} + \frac{-2+2y}{(-1+y)^2+x^2} + \frac{1}{2} \log \frac{|1+y+ix|}{4\pi})$
$\geq \frac{1+y}{2} + \frac{t}{2} (\frac{1+y}{(-1+y)^2+x^2} + \frac{-2+2y}{(-1+y)^2+x^2} + \frac{1}{2} \log \frac{x}{4\pi})$
$\geq \frac{1+y}{2} + \frac{t}{2} (\frac{3y-1}{(-1+y)^2+x^2} + \log N)$
$\geq \frac{1+y}{2} + \frac{t}{2} \log N$
assuming that $y \geq 1/3$. Hence we can actually just use the same value of $\sigma$ as in the toy case.
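As a numerical sanity check (not part of the original argument), one can compare the exact expression for $\sigma$ against the toy value $\frac{1+y}{2} + \frac{t}{2}\log N$ at the left endpoint $x = x_N$; for $t=y=0.4$ the two agree to within $10^{-6}$ already for $N \geq 100$. The function names below are ad hoc:

```python
from math import log, pi, hypot

t, y = 0.4, 0.4  # the parameter values used later in this section

def sigma_exact(x):
    # sigma = (1+y)/2 + (t/2) Re( 1/(1+y+ix) + 2/(-1+y+ix) + (1/2) log((1+y+ix)/(4 pi)) )
    re1 = (1 + y) / ((1 + y) ** 2 + x ** 2)
    re2 = (-2 + 2 * y) / ((-1 + y) ** 2 + x ** 2)
    re3 = 0.5 * log(hypot(1 + y, x) / (4 * pi))
    return (1 + y) / 2 + (t / 2) * (re1 + re2 + re3)

def sigma_toy(N):
    # the toy-model value (1+y)/2 + (t/2) log N
    return (1 + y) / 2 + (t / 2) * log(N)

# compare at x = x_N = 4 pi N^2 - pi t / 4 for several N
max_diff = max(abs(sigma_exact(4 * pi * N ** 2 - pi * t / 4) - sigma_toy(N))
               for N in (100, 1000, 2000, 10000))
```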
Next we control $|\lambda|$. Note that we can increase $|\lambda|$ (thus multiplying $\sum_{n=1}^N \frac{a_n}{n^s}$ by a quantity greater than 1) without affecting (2.1), so we just need upper bounds on $|\lambda|$. We may factor
$|\lambda| = \exp( \frac{t}{4} \mathrm{Re} (\alpha_1(\frac{1-y+ix}{2})^2 - \alpha_1(\frac{1+y+ix}{2})^2) + \mathrm{Re}( f(\frac{1-y+ix}{2}) - f(\frac{1+y+ix}{2}) ) )$
where
$f(s) := -\frac{s}{2} \log \pi + (\frac{s}{2} - \frac{1}{2}) \log \frac{s}{2} - \frac{s}{2}.$
By the mean value theorem, we have
$\mathrm{Re} (\alpha_1(\frac{1-y+ix}{2})^2 - \alpha_1(\frac{1+y+ix}{2})^2) = -2 y \alpha_1(s_1) \alpha'_1(s_1)$
for some $s_1$ between $\frac{1-y+ix}{2}$ and $\frac{1+y+ix}{2}$. We have
$\alpha_1(s_1) = \frac{1}{2s_1} + \frac{1}{s_1-1} + \frac{1}{2} \log \frac{s_1}{2\pi}$
$= O_{\leq}(\frac{1}{x}) + O_{\leq}(\frac{1}{x/2}) + \frac{1}{2} \log \frac{|s_1|}{2\pi} + O_{\leq}(\frac{\pi}{4})$
$= O_{\leq}( \frac{\pi}{4} + \frac{3}{x_N}) + \frac{1}{2} O_{\leq}^{\mathbf{R}}( \log \frac{|1+y+ix_{N+1}|}{4\pi} )$
and
$\alpha'_1(s_1) = -\frac{1}{2s_1^2} - \frac{1}{(s_1-1)^2} + \frac{1}{2s_1}$
$= O_{\leq}(\frac{1}{x^2/2}) + O_{\leq}(\frac{1}{x^2/4}) + \frac{1}{2s_1}$
$= O_{\leq}(\frac{6}{x_N^2}) + \frac{1}{2s_1}$
$= O_{\leq}(\frac{6}{x_N^2}) + O_{\leq}( \frac{1}{x_N} ).$
Thus one has
$\mathrm{Re} (\alpha_1(\frac{1-y+ix}{2})^2 - \alpha_1(\frac{1+y+ix}{2})^2) = 2y O_{\leq}( (\frac{\pi}{4} + \frac{3}{x_N}) (\frac{1}{x_N} + \frac{6}{x_N^2}) )$
$+ 2y O_{\leq}( \log \frac{|1+y+ix_{N+1}|}{4\pi} (\frac{6}{x_N^2} + |\mathrm{Re} \frac{1}{2s_1}|) )$
Now we have
$\mathrm{Re} \frac{1}{2s_1} = \frac{\mathrm{Re}(s_1)}{2|s_1|^2}$
$\leq \frac{1+y}{x^2}$
$\leq \frac{1+y}{x_N^2};$
also
$(\frac{\pi}{4} + \frac{3}{x_N}) (\frac{1}{x_N} + \frac{6}{x_N^2}) \leq \frac{\pi}{4} (1 + \frac{12/\pi}{x_N}) \frac{1}{x_N-6}$
$\leq \frac{\pi}{4} ( \frac{1}{x_N-6} + \frac{12/\pi}{(x_N-6)^2} )$
$\leq \frac{\pi}{4} \frac{1}{x_N - 6 - 12/\pi}.$
We conclude that
$\mathrm{Re} (\alpha_1(\frac{1-y+ix}{2})^2 - \alpha_1(\frac{1+y+ix}{2})^2) = O_{\leq}(\frac{\pi y}{2 (x_N - 6 - 12/\pi)} + \frac{2y(7+y)}{x_N^2} \log \frac{|1+y+ix_{N+1}|}{4\pi}).$
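The elementary inequality chain used above, $(\frac{\pi}{4} + \frac{3}{x})(\frac{1}{x} + \frac{6}{x^2}) \leq \frac{\pi}{4}\frac{1}{x - 6 - 12/\pi}$, is valid once $x > 6 + 12/\pi$ and is easy to spot-check numerically (a quick illustration, not a proof):

```python
from math import pi

def lhs(x):
    # (pi/4 + 3/x) (1/x + 6/x^2)
    return (pi / 4 + 3 / x) * (1 / x + 6 / x ** 2)

def rhs(x):
    # (pi/4) / (x - 6 - 12/pi), the claimed upper bound
    return (pi / 4) / (x - 6 - 12 / pi)

# the bound requires x > 6 + 12/pi ~ 9.82; check a range of magnitudes
ok = all(lhs(x) <= rhs(x) for x in (10.0, 20.0, 100.0, 1e3, 1e6))
```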
In a similar vein, from the mean value theorem we have
$\mathrm{Re}( f(\frac{1-y+ix}{2}) - f(\frac{1+y+ix}{2}) ) = -y \mathrm{Re} f'(s_2)$
for some $s_2$ between $\frac{1-y+ix}{2}$ and $\frac{1+y+ix}{2}$. We have
$\mathrm{Re} f'(s_2) = -\frac{1}{2} \log \pi + \frac{1}{2} \log \frac{|s_2|}{2} - \mathrm{Re} \frac{1}{2s_2}$
$= \frac{1}{2} \log \frac{|s_2|}{2\pi} + O_{\leq}(\frac{\mathrm{Re}(s_2)}{2|s_2|^2})$
$\geq \log N + O_{\leq}(\frac{1+y}{x^2})$
$\geq \log N + O_{\leq}(\frac{1+y}{x_N^2})$
and thus
$|\lambda| \leq N^{-y} \exp( \frac{\pi y}{2 (x_N - 6 - 12/\pi)} + \frac{2y(7+y)}{x_N^2} \log \frac{|1+y+ix_{N+1}|}{4\pi} + \frac{y(1+y)}{x_N^2} )$
$\leq e^\delta N^{-y}$
where
$\delta := \frac{\pi y}{2 (x_N - 6 - \frac{14+2y}{\pi})} + \frac{2y(7+y)}{x_N^2} \log \frac{|1+y+ix_{N+1}|}{4\pi}$
Asymptotically we have
$\delta = \frac{\pi y}{2 x_N} + O( \frac{\log x_N}{x_N^2} ) = O( \frac{1}{x_N} ).$
Now we control $\alpha_1(\frac{1+y+ix}{2}) - \alpha_1(\frac{1-y+ix}{2})$. By the mean-value theorem we have
$\alpha_1(\frac{1+y+ix}{2}) - \alpha_1(\frac{1-y+ix}{2}) = O_{\leq}( y |\alpha'_1(s_3)|)$
for some $s_3$ between $\frac{1+y+ix}{2}$ and $\frac{1-y+ix}{2}$. As before we have
$\alpha'_1(s_3) = -\frac{1}{2s_3^2} - \frac{1}{(s_3-1)^2} + \frac{1}{2s_3}$
$= O_{\leq}( \frac{1}{x^2/2} + \frac{1}{x^2/4} + \frac{1}{x} )$
$= O_{\leq}( \frac{1}{x_N} + \frac{6}{x_N^2} )$
$= O_{\leq}( \frac{1}{x_N-6} ).$
We conclude that (after replacing $|\lambda|$ with $e^\delta N^{-y}$)
$a_n = (n/N)^y \exp( \delta + O_{\leq}( \frac{t y \log n}{2(x_N-6)} ) ) b_n.$
The triangle inequality argument will thus give $A^{eff}+B^{eff}$ non-zero as long as
$\sum_{n=1}^N (1 + (n/N)^y \exp( \delta + \frac{t y \log n}{2(x_N-6)} ) ) \frac{b_n}{n^\sigma} \lt 2.$
The situation with using Lemma 1 is a bit more complicated because $a_n$ is not quite real. We can write $a_n = e^\delta a_n^{toy} + O_{\leq}( e_n )$ where
$a_n^{toy} := (n/N)^y b_n$
and
$e_n := e^\delta (n/N)^y (\exp( \frac{t y \log n}{2(x_N-6)} ) - 1) b_n$
and then by Lemma 1 and the triangle inequality we can make $A^{eff}+B^{eff}$ non-zero as long as
$a_1^{toy} + \sum_{n=2}^N \frac{\max( |b_n-a_n^{toy}|, \frac{1-a_1^{toy}}{1+a_1^{toy}} |b_n + a_n^{toy}| )}{n^\sigma} + \sum_{n=1}^N \frac{e_n}{n^\sigma} \lt 1.$
When $N \geq 2000$ and $t=y=0.4$, we use the cruder lower bound
$|\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \geq 2 - \sum_{n=1}^N \frac{|b_n|}{n^\sigma} - \sum_{n=1}^N \frac{|a_n|}{n^\sigma}$
$\geq 2 - \sum_{n=1}^N \frac{1}{n^{0.7 + 0.1 \log \frac{N^2}{n}}} - e^\delta \exp( \frac{t y \log N}{2(x_N-6)} ) N^{-0.4} \sum_{n=1}^N \frac{1}{n^{0.3 + 0.1 \log \frac{N^2}{n}}}$
$\geq 2 - 1.706 - e^\delta \exp( \frac{0.8 \log N}{4\pi N^2 - 6.315} ) N^{-0.4} \times 3.469$
using the estimates from Estimating a sum. We can bound
$\delta := \frac{0.2 \pi}{4\pi N^2 - 6.315 - \frac{14.8}{\pi}} + \frac{5.6}{(4\pi N^2 - 0.315)^2} \log |\frac{1.4}{4\pi}+i (N+1)^2|$
which for $N \geq 2000$ may be bounded by $1.26 \times 10^{-8}$. One may similarly bound $\frac{0.8 \log N}{4\pi N^2 - 6.315}$ by $1.21 \times 10^{-7}$. Combining these bounds, we obtain the lower bound
$\frac{|A^{eff}+B^{eff}|}{|B^{eff}_0|} \geq |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}| \geq 2 - 1.706 - 0.166 = 0.128.$
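The numerical claims in this paragraph (the bound on $\delta$, the bound $1.21 \times 10^{-7}$, and the final value $0.128$) can be reproduced directly from the displayed formulas; this sketch takes the constants $1.706$ and $3.469$ from Estimating a sum as quoted above:

```python
from math import pi, log, exp, hypot

N = 2000.0

# the displayed bound for delta at t = y = 0.4
delta = (0.2 * pi) / (4 * pi * N ** 2 - 6.315 - 14.8 / pi) \
    + (5.6 / (4 * pi * N ** 2 - 0.315) ** 2) * log(hypot(1.4 / (4 * pi), (N + 1) ** 2))

# the exponent 0.8 log N / (4 pi N^2 - 6.315)
eps = 0.8 * log(N) / (4 * pi * N ** 2 - 6.315)

# third term of the lower bound 2 - 1.706 - ...
third = exp(delta) * exp(eps) * N ** (-0.4) * 3.469
lower_bound = 2 - 1.706 - third
```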
#### danilo_rj
1. Homework Statement
http://i634.photobucket.com/albums/uu67/danilorj/circuito.jpg [Broken]
Above is the picture of the circuit I'm trying to solve. The problem asks me to find the current through the 1 ohm resistor.
2. Homework Equations
3. The Attempt at a Solution
Well, I found the equivalent resistance of the circuit and thus the total current. But to do this, I transformed the T-form of the circuit between the resistors 70, 1 and 20 ohms into a delta form. My doubt is: when I know the current in each branch of the delta form, how am I supposed to find the current in each branch of the T-form? What is the relation between their currents?
#### berkeman
Mentor
Welcome to the PF, danilo. Wouldn't it be easier to just solve the circuit using Kirchhoff's Current Law (KCL) equations? That would be my first approach.
#### danilo_rj
That's a nice idea, I hadn't thought of that. Well, I found the current through the 1 ohm resistor to be I = 0.1904 A. I would appreciate it if you could check this answer for me.
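Since the attached circuit image is no longer available, here is only a generic sketch (not the solution to the original circuit) of the Y(T)-to-delta conversion for the three branch resistances mentioned in the thread, 70, 1 and 20 ohms; the function name is made up:

```python
def wye_to_delta(r1, r2, r3):
    """Equivalent delta for a Y (T) network with leg resistances r1, r2, r3.

    Each delta resistor is (sum of pairwise products of the Y legs)
    divided by the Y leg attached to the opposite terminal.
    """
    p = r1 * r2 + r2 * r3 + r3 * r1
    return p / r1, p / r2, p / r3

# the three T-branch resistances mentioned in the thread
ra, rb, rc = wye_to_delta(70.0, 1.0, 20.0)
```

Note that the two networks are interchangeable only as seen from their three terminals: the terminal currents and node voltages match, but the internal branch currents do not. So rather than mapping delta branch currents back directly, the usual route is to find the node voltages of the transformed circuit and then read the T-branch currents (e.g. through the 1 ohm branch) off the original resistors with Ohm's law.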
Several sets of (x, y) points, with the correlation coefficient of x and y for each set. Note that the correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle), nor many aspects of nonlinear relationships (bottom). N.B.: the figure in the centre has a slope of 0, but in that case the correlation coefficient is undefined because the variance of Y is zero.
In probability theory and statistics, correlation (often measured as a correlation coefficient) indicates the strength and direction of a linear relationship between two random variables. In general statistical usage, correlation or co-relation refers to the departure of two variables from independence. In this broad sense there are several coefficients, measuring the degree of correlation, adapted to the nature of the data.
A number of different coefficients are used for different situations. The best known is the Pearson product-moment correlation coefficient, which is obtained by dividing the covariance of the two variables by the product of their standard deviations. Despite its name, it was first introduced by Francis Galton.
## Pearson's product-moment coefficient
### Mathematical properties
The correlation coefficient $\rho_{X,Y}$ between two random variables $X$ and $Y$ with expected values $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$ is defined as:
$\rho_{X,Y}={\mathrm{cov}(X,Y) \over \sigma_X \sigma_Y} ={E((X-\mu_X)(Y-\mu_Y)) \over \sigma_X\sigma_Y},$
where E is the expected value operator and cov means covariance. Since $\mu_X = E(X)$, $\sigma_X^2 = E(X^2) - E^2(X)$ and likewise for $Y$, we may also write
$\rho_{X,Y}=\frac{E(XY)-E(X)E(Y)}{\sqrt{E(X^2)-E^2(X)}~\sqrt{E(Y^2)-E^2(Y)}}.$
The correlation is defined only if both of the standard deviations are finite and both of them are nonzero. It is a corollary of the Cauchy-Schwarz inequality that the correlation cannot exceed 1 in absolute value.
The correlation is 1 in the case of an increasing linear relationship, −1 in the case of a decreasing linear relationship, and some value in between in all other cases, indicating the degree of linear dependence between the variables. The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables.
If the variables are independent then the correlation is 0, but the converse is not true because the correlation coefficient detects only linear dependencies between two variables. Here is an example: Suppose the random variable X is uniformly distributed on the interval from −1 to 1, and Y = X2. Then Y is completely determined by X, so that X and Y are dependent, but their correlation is zero; they are uncorrelated. However, in the special case when X and Y are jointly normal, uncorrelatedness is equivalent to independence.
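The $X$ uniform on $[-1, 1]$, $Y = X^2$ example can be illustrated numerically, with a symmetric grid standing in for the uniform distribution (a sketch, not a proof — by symmetry $E[X^3] = 0$, so the covariance vanishes even though $Y$ is a function of $X$):

```python
n = 2001
xs = [-1 + 2 * i / (n - 1) for i in range(n)]   # symmetric grid on [-1, 1]
ys = [x * x for x in xs]                        # Y = X^2, fully dependent on X

mean_x = sum(xs) / n
mean_y = sum(ys) / n
# sample covariance: ~0 despite the deterministic relationship
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
```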
A correlation between two variables is diluted in the presence of measurement error around estimates of one or both variables, in which case disattenuation provides a more accurate coefficient.
### Geometric Interpretation of correlation
The correlation coefficient can also be viewed as the cosine of the angle between the two vectors of samples drawn from the two random variables.
Caution: This method only works with centered data, i.e., data which have been shifted by the sample mean so as to have an average of zero. Some practitioners prefer an uncentered (non-Pearson-compliant) correlation coefficient. See the example below for a comparison.
As an example, suppose five countries are found to have gross national products of 1, 2, 3, 5, and 8 billion dollars, respectively. Suppose these same five countries (in the same order) are found to have 11%, 12%, 13%, 15%, and 18% poverty. Then let x and y be ordered 5-element vectors containing the above data: x = (1, 2, 3, 5, 8) and y = (0.11, 0.12, 0.13, 0.15, 0.18).
By the usual procedure for finding the angle between two vectors (see dot product), the uncentered correlation coefficient is:
$\cos \theta = \frac { \bold{x} \cdot \bold{y} } { \left\| \bold{x} \right\| \left\| \bold{y} \right\| } = \frac { 2.93 } { \sqrt { 103 } \sqrt { 0.0983 } } = 0.920814711.$
Note that the above data were deliberately chosen to be perfectly correlated: y = 0.10 + 0.01 x. The Pearson correlation coefficient must therefore be exactly one. Centering the data (shifting x by E(x) = 3.8 and y by E(y) = 0.138) yields x = (−2.8, −1.8, −0.8, 1.2, 4.2) and y = (−0.028, −0.018, −0.008, 0.012, 0.042), from which
$\cos \theta = \frac { \bold{x} \cdot \bold{y} } { \left\| \bold{x} \right\| \left\| \bold{y} \right\| } = \frac { 0.308 } { \sqrt { 30.8 } \sqrt { 0.00308 } } = 1 = \rho_{xy},$
as expected.
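The two dot-product calculations above can be reproduced in a few lines of Python (illustrative only):

```python
from math import sqrt

def cosine(u, v):
    # cos(theta) = <u, v> / (||u|| ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

x = [1, 2, 3, 5, 8]                     # gross national products, billions
y = [0.11, 0.12, 0.13, 0.15, 0.18]      # poverty fractions

uncentered = cosine(x, y)

# centering first gives the Pearson correlation, exactly 1 for this linear data
mx, my = sum(x) / len(x), sum(y) / len(y)
centered = cosine([a - mx for a in x], [b - my for b in y])
```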
### Motivation for the form of the coefficient of correlation
Another motivation for correlation comes from inspecting the method of simple linear regression. As above, X is the vector of independent variables, $x_i$, and Y of the dependent variables, $y_i$, and a simple linear relationship between X and Y is sought, through a least-squares method on the estimate of Y:
$\ Y = X\beta + \varepsilon.\,$
Then, the equation of the least-squares line can be derived to be of the form:
$(Y - \bar{Y}) = \frac{n\sum x_iy_i-\sum x_i\sum y_i} {n\sum x_i^2-(\sum x_i)^2} (X - \bar{X})$
which can be rearranged in the form:
$(Y - \bar{Y})=\frac{r s_y}{s_x} (X-\bar{X})$
where $r$ has the familiar form mentioned above:
$\frac{n\sum x_iy_i-\sum x_i\sum y_i} {\sqrt{n\sum x_i^2-(\sum x_i)^2}~\sqrt{n\sum y_i^2-(\sum y_i)^2}}.$
### Interpretation of the size of a correlation
| Correlation | Negative | Positive |
|---|---|---|
| Small | −0.3 to −0.1 | 0.1 to 0.3 |
| Medium | −0.5 to −0.3 | 0.3 to 0.5 |
| Large | −1.0 to −0.5 | 0.5 to 1.0 |
Several authors have offered guidelines for the interpretation of a correlation coefficient. Cohen (1988), for example, has suggested the following interpretations for correlations in psychological research, in the table on the right.
As Cohen himself has observed, however, all such criteria are in some ways arbitrary and should not be observed too strictly. This is because the interpretation of a correlation coefficient depends on the context and purposes. A correlation of 0.9 may be very low if one is verifying a physical law using high-quality instruments, but may be regarded as very high in the social sciences where there may be a greater contribution from complicating factors.
Along this vein, it is important to remember that "large" and "small" should not be taken as synonyms for "good" and "bad" in terms of determining that a correlation is of a certain size. For example, a correlation of 1.0 or −1.0 indicates that the two variables analyzed are equivalent modulo scaling. Scientifically, this more frequently indicates a trivial result than an earth-shattering one. For example, consider discovering a correlation of 1.0 between how many feet tall a group of people are and the number of inches from the bottom of their feet to the top of their heads.
## Non-parametric correlation coefficients
Pearson's correlation coefficient is a parametric statistic and when distributions are not normal it may be less useful than non-parametric correlation methods, such as Chi-square, Point biserial correlation, Spearman's ρ and Kendall's τ. They are a little less powerful than parametric methods if the assumptions underlying the latter are met, but are less likely to give distorted results when the assumptions fail.
## Other measures of dependence among random variables
To get a measure for more general dependencies in the data (also nonlinear) it is better to use the correlation ratio which is able to detect almost any functional dependency, or the entropy-based mutual information/ total correlation which is capable of detecting even more general dependencies. The latter are sometimes referred to as multi-moment correlation measures, in comparison to those that consider only 2nd moment (pairwise or quadratic) dependence.
The polychoric correlation is another correlation applied to ordinal data that aims to estimate the correlation between theorised latent variables.
## Copulas and correlation
The information given by a correlation coefficient is not enough to define the dependence structure between random variables; to fully capture it we must consider a copula between them. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the cumulative distribution functions are the multivariate normal distributions. In the case of elliptic distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence).
## Correlation matrices
The correlation matrix of $n$ random variables $X_1, \ldots, X_n$ is the $n \times n$ matrix whose $i,j$ entry is $\mathrm{corr}(X_i, X_j)$. If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables $X_i/\mathrm{SD}(X_i)$ for $i = 1, \ldots, n$. Consequently it is necessarily a positive-semidefinite matrix.
The correlation matrix is symmetric because the correlation between $X_i$ and $X_j$ is the same as the correlation between $X_j$ and $X_i$.
## Removing correlation
It is always possible to remove the correlation between zero-mean random variables with a linear transform, even if the relationship between the variables is nonlinear. Suppose a vector of n random variables is sampled m times. Let X be a matrix where $X_{i,j}$ is the jth variable of sample i. Let $Z_{r,c}$ be an r by c matrix with every element 1. Then D is the data transformed so every random variable has zero mean, and T is the data transformed so all variables have zero mean, unit variance, and zero correlation with all other variables. The transformed variables will be uncorrelated, even though they may not be independent.
$D = X -\frac{1}{m} Z_{m,m} X$
$T = D (D^T D)^{-\frac{1}{2}}$
where an exponent of -1/2 represents the matrix square root of the inverse of a matrix. The covariance matrix of T will be the identity matrix. If a new data sample x is a row vector of n elements, then the same transform can be applied to x to get the transformed vectors d and t:
$d = x - \frac{1}{m} Z_{1,m} X$
$t = d (D^T D)^{-\frac{1}{2}}.$
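For two variables, the matrix square root in the transform above has a closed form for 2×2 symmetric positive-definite matrices, which allows a self-contained numerical check that the columns of $T$ come out orthonormal (i.e. $T^\top T = I$, so the transformed variables are uncorrelated with equal variance). This is only a sketch with made-up data; real code would use a linear-algebra library:

```python
from math import sqrt

# five samples of two correlated variables (rows of X)
X = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.1], [5.0, 9.9]]
m = len(X)

# D = X - (1/m) Z X : subtract the column means
means = [sum(row[j] for row in X) / m for j in range(2)]
D = [[row[j] - means[j] for j in range(2)] for row in X]

# M = D^T D, a 2x2 symmetric positive-definite matrix [[a, b], [b, c]]
a = sum(r[0] * r[0] for r in D)
b = sum(r[0] * r[1] for r in D)
c = sum(r[1] * r[1] for r in D)

# closed form for 2x2 SPD matrices: sqrt(M) = (M + s I) / q,
# with s = sqrt(det M) and q = sqrt(trace M + 2 s)
s = sqrt(a * c - b * b)
q = sqrt(a + c + 2 * s)
sa, sb, sc = (a + s) / q, b / q, (c + s) / q

# M^(-1/2) is the 2x2 inverse of sqrt(M)
det = sa * sc - sb * sb
ia, ib, ic = sc / det, -sb / det, sa / det

# T = D M^(-1/2); by construction T^T T = I
T = [[r[0] * ia + r[1] * ib, r[0] * ib + r[1] * ic] for r in D]
cross = sum(r[0] * r[1] for r in T)   # off-diagonal of T^T T: ~0
norm0 = sum(r[0] * r[0] for r in T)   # diagonal entries: ~1
norm1 = sum(r[1] * r[1] for r in T)
```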
### Correlation and causality
The conventional dictum that " correlation does not imply causation" means that correlation cannot be validly used to infer a causal relationship between the variables. This dictum should not be taken to mean that correlations cannot indicate causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown. Consequently, establishing a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).
Here is a simple example: hot weather may cause both crime and ice-cream purchases. Therefore crime is correlated with ice-cream purchases. But crime does not cause ice-cream purchases and ice-cream purchases do not cause crime.
A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health? Or does good health lead to good mood? Or does some other factor underlie both? Or is it pure coincidence? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.
### Correlation and linearity
Four sets of data with the same correlation of 0.81
While Pearson correlation indicates the strength of a linear relationship between two variables, its value alone may not be sufficient to evaluate this relationship, especially in the case where the assumption of normality is incorrect.
The image on the right shows scatterplots of Anscombe's quartet, a set of four different pairs of variables created by Francis Anscombe. The four $y$ variables have the same mean (7.5), standard deviation (4.12), correlation (0.81) and regression line ($y = 3 + 0.5x$). However, as can be seen on the plots, the distribution of the variables is very different. The first one (top left) seems to be distributed normally, and corresponds to what one would expect when considering two variables correlated and following the assumption of normality. The second one (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear, and the Pearson correlation coefficient is not relevant. In the third case (bottom left), the linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation coefficient from 1 to 0.81. Finally, the fourth example (bottom right) shows another example when one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear.
These examples indicate that the correlation coefficient, as a summary statistic, cannot replace the individual examination of the data.
## Computing correlation accurately in a single pass
The following Python function calculates the Pearson correlation in a single pass with good numerical stability (a Welford-style update of the means and co-moments):

from math import sqrt

def pearson(x, y):
    n = len(x)
    sum_sq_x = sum_sq_y = sum_coproduct = 0.0
    mean_x, mean_y = x[0], y[0]
    for i in range(1, n):
        sweep = i / (i + 1.0)
        delta_x = x[i] - mean_x
        delta_y = y[i] - mean_y
        sum_sq_x += delta_x * delta_x * sweep
        sum_sq_y += delta_y * delta_y * sweep
        sum_coproduct += delta_x * delta_y * sweep
        mean_x += delta_x / (i + 1.0)
        mean_y += delta_y / (i + 1.0)
    pop_sd_x = sqrt(sum_sq_x / n)
    pop_sd_y = sqrt(sum_sq_y / n)
    cov_x_y = sum_coproduct / n
    return cov_x_y / (pop_sd_x * pop_sd_y)
## Introduction
The most commonly used, and in many ways the most important, estimation technique in econometrics is least squares. It is useful to distinguish between two varieties of least squares, ordinary least squares, or OLS, and nonlinear least squares, or NLS. In the case of OLS the regression equation that is to be estimated is linear in all of the parameters, while in the case of NLS it is nonlinear in at least one parameter. OLS estimates can be obtained by direct calculation in several different ways (see Section 1.5), while NLS estimates require iterative procedures (see Chapter 6). In this chapter, we will discuss only ordinary least squares, since understanding linear regression is essential to understanding everything else in this book.
There is an important distinction between the numerical and the statistical properties of estimates obtained using OLS. Numerical properties are those that hold as a consequence of the use of ordinary least squares, regardless of how the data were generated. Since these properties are numerical, they can always be verified by direct calculation. An example is the well-known fact that OLS residuals sum to zero when the regressors include a constant term. Statistical properties, on the other hand, are those that hold only under certain assumptions about the way the data were generated. These can never be verified exactly, although in some cases they can be tested. An example is the well-known proposition that OLS estimates are, in certain circumstances, unbiased.
The distinction between numerical properties and statistical properties is obviously fundamental. In order to make this distinction as clearly as possible, we will in this chapter discuss only the former. We will study ordinary least squares purely as a computational device, without formally introducing any sort of statistical model (although we will on occasion discuss quantities that are mainly of interest in the context of linear regression models). No statistical models will be introduced until Chapter 2 , where we will begin discussing nonlinear regression models, of which linear regression models are of course a special case.
By saying that we will study OLS as a computational device, we do not mean that we will discuss computer algorithms for calculating OLS estimates (although we will do that to a limited extent in Section 1.5). Instead, we mean that we will discuss the numerical properties of ordinary least squares and, in particular, the geometrical interpretation of those properties. All of the numerical properties of OLS can be interpreted in terms of Euclidean geometry. This geometrical interpretation often turns out to be remarkably simple, involving little more than Pythagoras’ Theorem and high-school trigonometry, in the context of finite-dimensional vector spaces. Yet the insight gained from this approach is very great. Once one has a thorough grasp of the geometry involved in ordinary least squares, one can often save oneself many tedious lines of algebra by a simple geometrical argument. Moreover, as we hope the remainder of this book will illustrate, understanding the geometrical properties of OLS is just as fundamental to understanding nonlinear models of all types as it is to understanding linear regression models.
## The Geometry of Least Squares
The essential ingredients of a linear regression are a regressand $y$ and a matrix of regressors $\boldsymbol{X} \equiv\left[\boldsymbol{x}_{1} \ldots \boldsymbol{x}_{k}\right]$. The regressand $\boldsymbol{y}$ is an $n$-vector, and the matrix of regressors $\boldsymbol{X}$ is an $n \times k$ matrix, each column $\boldsymbol{x}_{i}$ of which is an $n$-vector. The regressand $\boldsymbol{y}$ and each of the regressors $\boldsymbol{x}_{1}$ through $\boldsymbol{x}_{k}$ can be thought of as points in $n$-dimensional Euclidean space, $E^{n}$. The $k$ regressors, provided they are linearly independent, span a $k$-dimensional subspace of $E^{n}$. We will denote this subspace by $\mathcal{S}(\boldsymbol{X})$.¹
The subspace $\mathcal{S}(\boldsymbol{X})$ consists of all points $\boldsymbol{z}$ in $E^{n}$ such that $\boldsymbol{z}=\boldsymbol{X} \boldsymbol{\gamma}$ for some $\boldsymbol{\gamma}$, where $\boldsymbol{\gamma}$ is a $k$-vector. Strictly speaking, we should refer to $\mathcal{S}(\boldsymbol{X})$ as the subspace spanned by the columns of $\boldsymbol{X}$, but less formally we will often refer to it simply as the span of $\boldsymbol{X}$. The dimension of $\mathcal{S}(\boldsymbol{X})$ is always equal to $\rho(\boldsymbol{X})$, the rank of $\boldsymbol{X}$ (i.e., the number of columns of $\boldsymbol{X}$ that are linearly independent). We will assume that $k$ is strictly less than $n$, something which it is reasonable to do in almost all practical cases. If $n$ were less than $k$, it would be impossible for $\boldsymbol{X}$ to have full column rank $k$.
A Euclidean space is not defined without defining an inner product. In this case, the inner product we are interested in is the so-called natural inner product. The natural inner product of any two points in $E^{n}$, say $\boldsymbol{z}_{i}$ and $\boldsymbol{z}_{j}$, may be denoted $\left\langle \boldsymbol{z}_{i}, \boldsymbol{z}_{j}\right\rangle$ and is defined by
$$\left\langle\boldsymbol{z}_{i}, \boldsymbol{z}_{j}\right\rangle \equiv \sum_{t=1}^{n} z_{i t} z_{j t} \equiv \boldsymbol{z}_{i}^{\top} \boldsymbol{z}_{j} \equiv \boldsymbol{z}_{j}^{\top} \boldsymbol{z}_{i}.$$
1 The notation $S(\boldsymbol{X})$ is not a standard one, there being no standard notation that we are comfortable with. We believe that this notation has much to recommend it and will therefore use it hereafter.
## The spaces $\mathcal{S}(\boldsymbol{X})$ and $\mathcal{S}^{\perp}(\boldsymbol{X})$
This is done by connecting the point $z$ with the origin and putting an arrowhead at $\boldsymbol{z}$. The resulting arrow then shows graphically the two things about a vector that matter, namely, its length and its direction. The Euclidean length of a vector $z$ is
$$\|z\| \equiv\left(\sum_{t=1}^{n} z_{t}^{2}\right)^{1 / 2}=\left(z^{\top} z\right)^{1 / 2},$$
where the notation emphasizes that $\|z\|$ is the positive square root of the sum of the squared elements of $z$. The direction is the vector itself normalized to have length unity, that is, $z /\|z\|$. One advantage of this convention is that if we move one of the arrows, being careful to change neither its length nor its direction, the new arrow represents the same vector, even though the arrowhead is now at a different point. It will often be very convenient to do this, and we therefore adopt this convention in most of our diagrams.
Figure $1.1$ illustrates the concepts discussed above for the case $n=2$ and $k=1$. The matrix of regressors $\boldsymbol{X}$ has only one column in this case, and it is therefore represented by a single vector in the figure. As a consequence, $\mathcal{S}(\boldsymbol{X})$ is one-dimensional, and since $n=2, \mathcal{S}^{\perp}(\boldsymbol{X})$ is also one-dimensional. Notice that $\mathcal{S}(\boldsymbol{X})$ and $\mathcal{S}^{\perp}(\boldsymbol{X})$ would be the same if $\boldsymbol{X}$ were any point on the straight line which is $\mathcal{S}(\boldsymbol{X})$, except for the origin. This illustrates the fact that $\mathcal{S}(\boldsymbol{X})$ is invariant to any nonsingular transformation of $\boldsymbol{X}$.
As we have seen, any point in $\mathcal{S}(\boldsymbol{X})$ can be represented by a vector of the form $\boldsymbol{X} \boldsymbol{\beta}$ for some $k$-vector $\boldsymbol{\beta}$. If one wants to find the point in $\mathcal{S}(\boldsymbol{X})$ that is closest to a given vector $\boldsymbol{y}$, the problem to be solved is that of minimizing, with respect to the choice of $\boldsymbol{\beta}$, the distance between $\boldsymbol{y}$ and $\boldsymbol{X} \boldsymbol{\beta}$. Minimizing this distance is evidently equivalent to minimizing the square of this distance.
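As a concrete sketch of this minimization (illustrative only: the data below are randomly generated, and `numpy.linalg.lstsq` is just one way to solve it), the fitted point $\boldsymbol{X}\hat{\boldsymbol{\beta}}$ is the orthogonal projection of $\boldsymbol{y}$ onto $\mathcal{S}(\boldsymbol{X})$, and the residual lies in $\mathcal{S}^{\perp}(\boldsymbol{X})$:

```python
import numpy as np

# Minimizing ||y - X b||^2 over b: the closest point of S(X) to y is the
# orthogonal projection X @ b_hat, and the residual lies in S_perp(X).
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))   # n = 10 observations, k = 2 regressors
y = rng.normal(size=10)

b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
proj = X @ b_hat               # point of S(X) closest to y
resid = y - proj               # component of y in S_perp(X)

# The residual is orthogonal to every column of X, hence to all of S(X).
print(np.allclose(X.T @ resid, 0.0))   # True
```

The orthogonality of the residual to every column of $\boldsymbol{X}$ is exactly what the geometric picture of Figure 1.1 expresses.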
https://www.halfbakery.com/idea/Orbital_20Gravitational_20Wave_20Observatory | 1,566,710,436,000,000,000 | text/html | crawl-data/CC-MAIN-2019-35/segments/1566027323067.50/warc/CC-MAIN-20190825042326-20190825064326-00332.warc.gz | 823,207,907 | 6,336 | h a l f b a k e r y
Professional croissant on closed course. Do not attempt.
# Orbital Gravitational Wave Observatory
(+4)
Gravitational waves have already been detected. The experimental apparatus involves incredibly long holes with lasers shining about and doing very careful interferometry on the resultant light. Their signal is often a fraction of a proton width and for that reason they must work very hard to stop temperature or leaning undergraduates adding spurious signals.
Nature has provided science with a finely tuned gravitational sensor. The cat. Everyone knows cats land on their feet when they find themselves flung from tall structures. This implies they have full ability to align with the gravitational field in a manner sensitive to polarity.
This behavior can be used to put together a gravitational wave observatory based upon a feline array. To avoid the overwhelming local effects of the Earth, the array must be positioned in space. Fortunately, blasting animals into space is fully baked. Once we have an orbiting capsule full of cats we need to prod them until they're all in the middle. Then we need a readout. I recommend finding a breed with a pleasing dorsal-ventral color separation, or just paint one side with a hard-wearing white gloss. Simply set up cameras along all major axes through the craft. The cameras only need to be black/white so nothing fancy is needed.
Now we wait. At baseline, the cats will assume a random orientation with a fairly even black-white mix along any axial pair of cameras. Now, when a gravity wave comes along, the cats will align, first with one direction and then the opposite node of the wave. The cats will oscillate back and forth along an axis perpendicular to the origin of the gravitational disturbance.
From these oscillations, the direction and frequency can be determined. Amplitude is trickier. Maybe some cats are more/less sensitive? Maybe a separate instrument? Perhaps measure the force generated by several cats attached to a rotating pole?
Anyhow, I think it's clear the direction we need to go.
— bs0u0155, Aug 07 2017
Laser Interferometer Space Antenna https://en.wikipedi...meter_Space_Antenna
[xaviergisz, Aug 07 2017]
We will transport all the cats into orbit, or to one of the LaGrange points, free of charge and at any time.
Or Mercury.
Or Neptune ... Neptune's nice at this season.
What about the Oort cloud ? There's a terrible shortage of felines out there ...
Life support not included.
[+]
— 8th of 7, Aug 07 2017
The first gravitational wave signal spanned a frequency range from 35 to 250 Hz, according to Wikipedia.
I worry that the moment of inertia of a typical cat will act as a low-pass filter that prevents such signals from being detectable.
Hopefully future advances in wide-bandwidth cat manufacturing techniques will allow this deficiency to be addressed.
— Wrongfellow, Aug 12 2017
There is a solution to that problem. The mass of a cat remains approximately constant, but its rotational inertia depends on its shape. By deforming the cat into a long, narrow rod its rotational inertia about the long axis can be made arbitarily small.
— MaxwellBuchanan, Aug 12 2017
Interesting! This suggests the idea of mounting three similarly-elongated cats along mutually perpendicular axes, thus achieving directional sensitivity.
— Wrongfellow, Aug 12 2017
<succumbs to giggling and hiccuping>
— 8th of 7, Aug 12 2017
https://studyres.com/doc/3577880/stat-503-1-and--2 | 1,582,690,249,000,000,000 | text/html | crawl-data/CC-MAIN-2020-10/segments/1581875146186.62/warc/CC-MAIN-20200226023658-20200226053658-00168.warc.gz | 571,059,387 | 17,267 | • Study Resource
• Explore
Survey
* Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project
Document related concepts
Central limit theorem wikipedia, lookup
Transcript
```
Solutions to Homework #6
(27 points)

1. Problem 5.30, page 403 (5 p)
Since the average will be approximately normal, ±2 standard deviations will contain 95% of the
population. For 400 radiators, the standard deviation of the average is 0.4 / √400 = 0.02. (2p)
Therefore the average will lie within 0.15 ± 2(0.02), i.e., 0.11 to 0.19. (3p)
2. Problem 5.38, page 405 (5 p)
(a) The average will have a normal distribution with mean 55,000 miles and standard deviation =
4500 / √8 ≈ 1590.99. (3p)
(b) 51,800 is 2.011 standard deviations below the mean. The probability of being this low or lower is
0.0221. (2p)
3. Problem 5.40, page 405 (7 p)
(a) The distribution is approximately normal with mean 2.2 and standard deviation 0.1941. (3p)
(b) Z=-1.03 and the probability of being less than this is 0.1515. (2p)
(c) Fewer than 100 accidents in a year corresponds to an average of < 100/52 = 1.923. For this, Z = -1.427 and the probability of being less than this is 0.0768. (2p)
4. Problem 5.44, page 406 (5 p)
(a) The mean of the total is the sum of the means, 100+250=350. The standard deviation is
√(2.5² + 2.8²) ≈ 3.7537. (3p)
(b) The two values given are 1.332 standard deviations from the mean. The probability between them
is 0.8171. (2p)
5. Problem 5.48, page 408 (5 p)
(a) D1=2(0.002) = 0.004 and D2=2(0.001) = 0.002. (2p)
(b) Standard deviation is given by √(0.002² + 0.001² + 0.001²) ≈ 0.002449. So D = 2(0.002449) ≈ 0.005, considerably
less than the engineer's guess of 0.008. (3p)
```
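The numerical claims in these solutions can be re-derived in a few lines. The sketch below recomputes the standard errors and normal probabilities used above, with the standard normal CDF built from `math.erf`:

```python
from math import sqrt, erf

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# 5.30: standard error of the mean of 400 radiators
print(round(0.4 / sqrt(400), 3))                 # 0.02

# 5.38: standard error for n = 8, and P(mean <= 51,800)
se = 4500 / sqrt(8)
print(round(se, 2))                              # 1590.99
print(round(phi((51800 - 55000) / se), 4))       # 0.0221

# 5.44: standard deviation of a sum of two independent parts
print(round(sqrt(2.5**2 + 2.8**2), 4))           # 3.7537

# 5.48: standard deviation of a sum of three independent tolerances
print(round(sqrt(0.002**2 + 0.001**2 + 0.001**2), 6))   # 0.002449
```

The rounded values match the figures quoted in the solutions.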
https://permpal.com/perms/basis/01342_01423_01432_10342_10423_10432_13042_13402_13420_14023_14032_14203_14230_14302_14320/ | 1,679,485,449,000,000,000 | text/html | crawl-data/CC-MAIN-2023-14/segments/1679296943809.76/warc/CC-MAIN-20230322114226-20230322144226-00197.warc.gz | 514,097,167 | 14,309 | ###### Av(12453, 12534, 12543, 21453, 21534, 21543, 24153, 24513, 24531, 25134, 25143, 25314, 25341, 25413, 25431)
Counting Sequence
1, 1, 2, 6, 24, 105, 480, 2254, 10776, 52182, 255120, 1256596, 6226176, 30998994, 154959938, ...
Implicit Equation for the Generating Function
$$\displaystyle \left(3 x^{3}+4 x^{2}+20 x -4\right) F \left(x \right)^{3}-\left(x +3\right) \left(3 x^{3}+4 x^{2}+20 x -4\right) F \left(x \right)^{2}+\left(3 x^{5}+10 x^{4}+35 x^{3}+44 x^{2}+53 x -12\right) F \! \left(x \right)-3 x^{5}-7 x^{4}-19 x^{3}-17 x^{2}-17 x +4 = 0$$
Recurrence
$$\displaystyle a \! \left(0\right) = 1$$
$$\displaystyle a \! \left(1\right) = 1$$
$$\displaystyle a \! \left(2\right) = 2$$
$$\displaystyle a \! \left(3\right) = 6$$
$$\displaystyle a \! \left(4\right) = 24$$
$$\displaystyle a \! \left(5\right) = 105$$
$$\displaystyle a \! \left(6\right) = 480$$
$$\displaystyle a \! \left(n +6\right) = -\frac{3 \left(n -1\right) \left(n +1\right) a \! \left(n \right)}{2 \left(2 n +11\right) \left(n +5\right)}-\frac{n \left(67 n +191\right) a \! \left(n +1\right)}{2 \left(2 n +11\right) \left(n +5\right)}-\frac{5 \left(19 n +33\right) \left(n +1\right) a \! \left(n +2\right)}{2 \left(2 n +11\right) \left(n +5\right)}-\frac{\left(401 n^{2}+1569 n +1480\right) a \! \left(n +3\right)}{2 \left(2 n +11\right) \left(n +5\right)}+\frac{2 \left(37 n^{2}+209 n +285\right) a \! \left(n +4\right)}{\left(2 n +11\right) \left(n +5\right)}+\frac{\left(4 n^{2}+55 n +165\right) a \! \left(n +5\right)}{\left(2 n +11\right) \left(n +5\right)}, \quad n \geq 7$$
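The recurrence can be cross-checked against the counting sequence. Reading the side condition as applying to the left-hand index (i.e. $a(n)$ for $n \geq 7$, so $n \geq 1$ in the form displayed above), exact rational arithmetic reproduces the next three terms:

```python
from fractions import Fraction as F

a = [F(x) for x in [1, 1, 2, 6, 24, 105, 480]]   # a(0) .. a(6)

# a(n+6) in terms of a(n), ..., a(n+5), as in the recurrence above.
for n in range(1, 4):
    d2 = F(2 * (2 * n + 11) * (n + 5))
    d1 = F((2 * n + 11) * (n + 5))
    a.append(
        -3 * (n - 1) * (n + 1) * a[n] / d2
        - n * (67 * n + 191) * a[n + 1] / d2
        - 5 * (19 * n + 33) * (n + 1) * a[n + 2] / d2
        - (401 * n * n + 1569 * n + 1480) * a[n + 3] / d2
        + 2 * (37 * n * n + 209 * n + 285) * a[n + 4] / d1
        + (4 * n * n + 55 * n + 165) * a[n + 5] / d1
    )

print([int(x) for x in a])
# [1, 1, 2, 6, 24, 105, 480, 2254, 10776, 52182]
```

Each new term comes out as an exact integer, agreeing with the listed sequence.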
### This specification was found using the strategy pack "Point Placements Tracked Fusion" and has 14 rules.
Found on January 23, 2022.
Finding the specification took 0 seconds.
The 14 equations of the specification:
\begin{align*} F_{0}\! \left(x \right) &= F_{1}\! \left(x \right)+F_{2}\! \left(x \right)\\ F_{1}\! \left(x \right) &= 1\\ F_{2}\! \left(x \right) &= F_{3}\! \left(x \right)\\ F_{3}\! \left(x \right) &= F_{13}\! \left(x \right) F_{4}\! \left(x \right)\\ F_{4}\! \left(x \right) &= F_{5}\! \left(x , 1\right)\\ F_{5}\! \left(x , y\right) &= \frac{y F_{6}\! \left(x , y\right)-F_{6}\! \left(x , 1\right)}{-1+y}\\ F_{6}\! \left(x , y\right) &= F_{1}\! \left(x \right)+F_{7}\! \left(x , y\right)\\ F_{7}\! \left(x , y\right) &= F_{8}\! \left(x , y\right)\\ F_{8}\! \left(x , y\right) &= F_{12}\! \left(x , y\right) F_{9}\! \left(x , y\right)\\ F_{9}\! \left(x , y\right) &= F_{10}\! \left(x , y\right)+F_{6}\! \left(x , y\right)\\ F_{10}\! \left(x , y\right) &= F_{11}\! \left(x , y\right)\\ F_{11}\! \left(x , y\right) &= F_{6}\! \left(x , y\right)^{2} F_{12}\! \left(x , y\right) F_{9}\! \left(x , y\right)\\ F_{12}\! \left(x , y\right) &= y x\\ F_{13}\! \left(x \right) &= x\\ \end{align*} | 1,255 | 2,810 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.75 | 4 | CC-MAIN-2023-14 | longest | en | 0.444401 |
https://www.coursehero.com/file/57756/NATS-101-Homework-Assignment-4/ | 1,516,214,233,000,000,000 | text/html | crawl-data/CC-MAIN-2018-05/segments/1516084886952.14/warc/CC-MAIN-20180117173312-20180117193312-00008.warc.gz | 867,388,501 | 25,404 | NATS 101 - Homework Assignment #4
NATS 101 - Homework Assignment #4 - Antonio Alarcon NATS...
This preview shows pages 1–2. Sign up to view the full content.
2/10/08
NATS 101: The World We Create
Homework Assignment #4

1a) Radioisotope C-14
Carbon-12: 6 protons, 6 neutrons
Carbon-14: 6 protons, 8 neutrons (the radioisotope)
C-14 half-life: 5,730 years

1b) Estimation
10,000 radioactive C-14 atoms (normal) / 2,500 radioactive C-14 atoms (remaining) = 4 = 2², i.e. 2 half-lives
2 half-lives * C-14 half-life (5,730 years) = 11,460 years old
The homicide took place about 11,460 years ago, NOT 10 years ago.

1c) Justification
To solve the proposed problem, the following steps were taken:
1) Estimate the number of half-lives of C-14 decayed
   a. Achieved by comparing the normal amount of C-14 within the human body to the amount of C-14 remaining within the body and clothing
2) Find the half-life of C-14
   a. Achieved by an on-line source [http://education.jlab.org/itselemental/ele006.html]
3) Estimate how many years ago the murder took place
   a. Achieved by multiplying the number of half-lives of C-14 decayed by the number of years within one half-life of C-14
4) Compare the age of the body to the last recorded glacial retreat (ice age)
   a. Comparing the age of the body to the last recorded glacial retreat shows both events occurred within the same time frame. This proved to be highly supporting information concerning the approximate age of the body: 10,000 years ago (last glacial retreat) vs. 11,460 years ago (homicide). [http://www2.nature.nps.gov/geology/parks/romo/index.cfm#geology]
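The part 1b arithmetic follows the decay law N(t) = N0 * 2^(-t / half-life); a quick numerical check (a sketch, not part of the original assignment):

```python
from math import log2

half_life = 5730                 # C-14 half-life, in years
n_normal, n_remaining = 10_000, 2_500

# 10000 / 2500 = 4 = 2**2, so two half-lives have elapsed.
elapsed_half_lives = log2(n_normal / n_remaining)
age_years = elapsed_half_lives * half_life

print(elapsed_half_lives)   # 2.0
print(age_years)            # 11460.0
```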
https://mathoverflow.net/questions/15393/whitehead-products-on-manifolds | 1,675,077,129,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00028.warc.gz | 390,681,335 | 25,872 | • Connect sums of $\mathbb CP^2$'s are a decent example. The complement of a Cantor set in $\mathbb R^3$ would be a similar one. Feb 16, 2010 at 2:27
• The simplest interesting example is the Whitehead product for $S^2$, i.e., $\pi_2(S^2)\times\pi_2(S^2)\to\pi_3(S^2)$ is precisely given by sending $(1,1)$ to $2$. This is equivalent to the Hopf invariant of the Hopf map being 1, which is further equivalent to the linking number of fibres in the Hopf map being $2$. Feb 21, 2010 at 1:14
I guess that by the Whitehead Lie algebra, you mean the homotopy group Lie algebra $\pi_*(\Omega X)\simeq \pi_{*-1}(X)$, maybe tensored by the reals $R$. In that case there is a theorem of Felix-Halperin-Thomas, called the dichotomy theorem, which tells you that either this Lie algebra is finite-dimensional (and the space is said to be "elliptic"), or it is very big in the sense that the ranks of $\pi_k(X)$ grow exponentially with $k$ (and the space is then called "hyperbolic"). If the Euler characteristic of the manifold is negative then the space is always hyperbolic. Moreover, when the space is hyperbolic the Whitehead Lie algebra is very far from being abelian: actually its radical is finite dimensional. Therefore any manifold with negative Euler characteristic has a non-abelian infinite-dimensional homotopy Lie algebra.

To generalize what Ryan says, actually any connected sum of two simply connected manifolds $M$ and $N$ is hyperbolic unless the cohomology of both $M$ and $N$ are truncated polynomial algebras on a single generator (like the sphere or $CP(n)$). In particular the connected sum of 3 or more closed manifolds not having the rational homotopy type of a sphere is hyperbolic.

Another example of a non-abelian but finite-dimensional Whitehead Lie algebra is the one associated to a manifold $M$ obtained as an $S^5$-bundle with base $S^3\times S^3$ and where the Euler class of the bundle is the fundamental class of the base (or any non-zero multiple of it). In that case the rational Whitehead Lie algebra $\pi_*(M)\otimes Q$ is of dimension $3$ with basis $x,y,[x,y]$, where $x$ and $y$ are in degree $3$ and $[x,y]$ is in degree $5$. Thus this manifold $M$ is elliptic. Interestingly enough, the cohomology algebra of $M$ is isomorphic to that of the connected sum $W$ of two copies of $S^3\times S^8$, but $W$ is hyperbolic.
https://www.construct.net/en/forum/construct-2/general-discussion-17/bezier-spline-paths-updated-84961 | 1,716,308,899,000,000,000 | text/html | crawl-data/CC-MAIN-2024-22/segments/1715971058504.35/warc/CC-MAIN-20240521153045-20240521183045-00045.warc.gz | 616,816,045 | 20,789 | # Bezier Spline Paths? UPDATED for ASHLEY's Brain!!!
• 25 posts
• Hey Ashley .....
I was just wandering around today & I was thinking about something that would be 'Amazing' (understatement of the year)
if it is possible to implement..
I would like to see in Construct 2 a new set of Objects
Bezier Path Loop - This would be a closed loop comprised of Bezier Curves & nodes
&
Bezier Path Spline - This would just be a Bezier Spline with two end node points
A graphical example of what I mean...
The idea would be ..You drag a Bezier Object into the Layout & set its number of nodes & then use drag handles to make the shape you need, and then The Bezier Object would function as a Custom movement path or as easy Solid or Physics Platforms... So each path would need to have built in variables like,
-Path speed ( the speed at which objects travel along it including Adjustment integration via Events)
other stuff that you might think of..
& also allow the Beziers to be affected by Various Behaviours..say for example....
-Sine (may cause buggy or interesting movements)
-Rotate
-Timer
-Solid & Jumpthru Could be a nice feature for building Spline based levels like seen here http://www.simbryocorp.com/Ferr2DTerrain/
Obviously it is not reasonable to expect that all Behaviours would work correctly but whatever does work would be cool...etc
The main purpose of the Bezier Path Objects is to allow any other objects to move along the Bezier object path...at various speeds, accelerations & interpolations etc
Event sheet Conditions for the Objects could include
-On path start,
-On path End,
-On loop complete (for each time the Object completes one pass of the Loop, collect $200)
Events could be
-Move Object along Path by Direction...(left or right, clockwise, anti clockwise, whatever)
-Move Object along Path (Fixed Angle)
-Move Object along path (Perpendicular to Spline)
-Reverse Direction of Object movement
-Set Speed of Spline Path
-Set Acceleration of Spline Path
-Stop Object on path (defunct if able to set speed)
-Drop Object off path (or 'Destroy' Bezier)
-Add Object to Path at Node
-etc
Easing Motions along the Beziers Paths would be awesome as well..Maybe added as a Behaviour?....
I know we already have motion tweening and whatnot & that we do have these math functions in C2 ..but this would really add some groovy features if it could be made into a Behaviour ...or set of Behaviours, like
• Linear Motion Ease..
• Etc
Or perhaps It would be better to build these Motion Eases into the Properties of the Path object
So we could with the click of a button have linear, cubic, circular, sine, quad or exponential interpolations along the spline path...
and even better yet....
Different Interpolations "Between 'each' node of the entire spline path"
ie: between node 1 & 2 you have linear motion & then between 2 & 3 you have exponential...etc
O M G ! ! !
Then,
for Animations and motions....etc... We could do stuff like this
http://www.motionscript.com/articles/bounce-and-overshoot.html
with the click of a few buttons..
That would be pure legendary level stuff..
The Interpolations would work out, a little like this, Ashley...if you get what I mean http://gizma.com/easing/
I know its a big Ask...but it would be soooooooo Awesome...
is it doable?
Would love to see this as a permanent feature in Construct 2
What do you think?
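For anyone wanting to prototype the idea before it exists as a behaviour: the core of such an object is just evaluating a cubic Bezier at a parameter t, optionally remapped by an easing curve like the ones on the linked page. A minimal sketch (Python here for brevity; the control points are made up, and Construct itself would do this inside its engine):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier at t in [0, 1] (Bernstein form)."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def ease_in_out_quad(t):
    """Quadratic ease-in-out: slow start, slow stop."""
    return 2 * t * t if t < 0.5 else 1 - 2 * (1 - t) * (1 - t)

# Hypothetical node/handle layout for one segment of a path
p0, p1, p2, p3 = (0, 0), (50, 100), (150, 100), (200, 0)

# An object following the segment with eased motion instead of raw time
for step in range(5):
    t = step / 4
    print(cubic_bezier(p0, p1, p2, p3, ease_in_out_quad(t)))
```

Per-segment easing, as described above, would just mean picking a different remap function between each pair of nodes.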
• This would be amazing. Many times I wished that this was real. It would be amazing for level design or AI behaviour.
• There is a plugin in the plugin section that should do that.
• Oh man! You sir, are expressing fluently all those things that I wish for in my post! And you do that at the same time that I post it. And obviously you haven't read mine just as I read yours now!!! What are the chances?! I think that the stars are telling us that Ashley is actually going to add paths into the beta 181!!!
Well said sir, well said!
• well you know what they say.."great minds thinking" and whatnot...
My bet is that Ashley has already started the process..hehe
Maybe we are all on the same wavelength....Synchronism at work ?
• i totally support this
• megatronx The problem with 3rd party plugins is that you never know when the author is losing his motivation and they get abandoned. I got burned badly by this, so I won't use anything but factory plugins (and my own) for commercial projects.
• I didn't check, if Megatronx is right. But assuming, there is a plugin that does all that is asked for here:
You would already have a solution to work with, but ignore it, just because it wasn't done by Ashley? While that is a great compliment showing the trust in Ashley, it isn't very fair to all those 3rd party plugs, is it? I mean, if the plug works now, is offered for free in the plugs section and you're now needing it, why hesitating? Has anyone hesitated to buy a smartphone, although knowing that it won't work with future standards (just think of LTE)?
• I didn't check, if Megatronx is right. But assuming, there is a plugin that does all that is asked for here:
You would already have a solution to work with, but ignore it, just because it wasn't done by Ashley? While that is a great compliment showing the trust in Ashley, it isn't very fair to all those 3rd party plugs, is it? I mean, if the plug works now, is offered for free in the plugs section and you're now needing it, why hesitating? Has anyone hesitated to buy a smartphone, although knowing that it won't work with future standards (just think of LTE)?
Different situation, he's not an end-user, he's a developer. He might be developing a game he's going to sell, and if you're going to ship a product reliant on third-party components that may give up at any time and be unsupported, therefore, potentially making any game-breaking bugs irreparable... then you see the potential issue. Combine that with the fact that Eisenhans did suffer something similar judging by his wording, his avoidance is completely validated. If they hadn't experienced a set-back, loss or massive stress (or all of the above), avoiding third-party plug-ins because of a 'just-in-case' notion is still not a bad thing. Concern and caution are good measures to take.
• mystazsea well, in this case, I hope that it is quantum entanglement in all it's "spooky action at a distance" glory!
megatronx , If you are referring to the SplinePath, it looks to me that it does just the basic stuff and it's only controllable through events. Mystazsea describes in every detail an implementation of a spline tool that will be very useful and easy to use.
• supporting this request, happening to be in need of that actually!
been looking into the 3rd party solutions but those aren't as flexible as I'd need them to be.
• Different situation, he's not an end-user, he's a developer. He might be developing a game he's going to sell, and if you're going to ship a product reliant on third-party components that may give up at any time and be unsupported, therefore, potentially making any game-breaking bugs irreparable... then you see the potential issue.
Different situation, bezier spline paths are just a matter of simple maths. So simple, that only +,- and * are involved. As a developer you could do this easily on your own, just using events and functions (see another spline example done with events and functions and simple math in the old Construct Classic). So any plugin offering spline paths that work right now, will also work in 10 years (if C2 still exists then).
If they hadn't experienced a set-back, loss or massive stress (or all of the above), avoiding third-party plug-ins because of a 'just-in-case' notion is still not a bad thing. Concern and caution are good measures to take.
I sincerely hope it's the minority that thinks this way, because else it will be the end of a community sharing platform. Why should anyone of those 3rd party developers still invest their time and work so hard to help other developers getting faster to their goals, if nobody wants to use their offers? Or maybe they should just charge for it? Cause often times people think they get better quality when paying for a product.
• I sincerely hope it's the minority that thinks this way, because else it will be the end of a community sharing platform. Why should anyone of those 3rd party developers still invest their time and work so hard to help other developers getting faster to their goals, if nobody wants to use their offers? Or maybe they should just charge for it? Cause often times people think they get better quality when paying for a product.
I agree with that viewpoint as well, but you can't deny there's a place for both risk and caution when it comes to using and/or buying third-party plug-ins. Just as people shouldn't 100% stop using third-party plug-ins, they shouldn't dive 100% into using every one that is developed. It depends on the person, their situation and their personality. Since you put forward the pro statement, I just put down a con; I wasn't saying caution overruled using any third-party plug-ins as a blanket statement.
• megatronx , If you are referring to the SplinePath, it looks to me that it does just the basic stuff and it's only controllable through events. Mystazsea describes in every detail an implementation of a spline tool that will be very useful and easy to use.
I made one myself through events. And yes, it is visual. It's not that difficult to make.
Eisenhans That's fair enough.
• I agree with that viewpoint as well, but you can't deny there's a place for both risk and caution when it comes to using and/or buying third-party plug-ins. Just as people shouldn't 100% stop using third-party plug-ins, they shouldn't dive 100% into using every one that is developed. It depends on the person, their situation and their personality. Since you put forward the pro statement, I just put down a con, I wasn't saying caution overruled using any third-party plug-ins as a blanket statement, .
True, nobody should just take whatever plugin there is, or at least be aware that not all of them will be working forever (especially those, that access browser functions and the like). So I agree to this quote completely.
https://jp.mathworks.com/matlabcentral/answers/512542-issue-about-ismember-wrong-output | 1,716,825,873,000,000,000 | text/html | crawl-data/CC-MAIN-2024-22/segments/1715971059044.17/warc/CC-MAIN-20240527144335-20240527174335-00725.warc.gz | 277,525,573 | 28,129 | 4 ビュー (過去 30 日間)
jason lee on 24 Mar 2020
a=0.02:0.02:2;
b=0:0.01:5;
ismember(a(6),b)
ans =
logical
0
But it is clear that all elements in a belong to b, so where is the problem?
Accepted Answer

Walter Roberson on 24 Mar 2020
You forgot that in binary floating point representation there is no exact equivalent to 0.1 or 0.01 or 0.02, only approximations of those. When you add up those approximations of 0.01 you are not necessarily going to get exactly an approximation of 0.02 especially since you start the 0.02 accumulation at a different start point.
Consider the analogy in decimal of 1/3 to two decimal places. 1/3 decimal is 0.33333333 with the 3 infinitely repeated. To two decimals, 0.33. Now add another of the same to that and you get 0.66. Is that the same as the 2 decimal approximation of 2/3? No, the 2 decimal approximation of 2/3 is 0.67. This illustrates that when you add up truncated approximations that you do not necessarily get the same as the direct value.
The moral of the story is to avoid exact comparisons in floating point. See ismembertol
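The same behaviour is easy to reproduce outside MATLAB, since any IEEE-754 language shows it. A Python analogue, with `math.isclose` playing the role of `ismembertol`:

```python
import math

# Decimal steps have no exact binary representation, so accumulated
# sums drift away from the "same" directly-written value:
print(0.1 + 0.2 == 0.3)           # False
print(sum([0.1] * 10) == 1.0)     # False: the sum is 0.9999999999999999

# Compare with a tolerance instead (the analogue of ismembertol):
print(math.isclose(0.1 + 0.2, 0.3))          # True
print(math.isclose(sum([0.1] * 10), 1.0))    # True
```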
3 comments

Stephen23 on 24 Mar 2020
"but unfortunately, 0.9999999 is a LITTLE smaller than 1."
According to standard mathematics, 0.999... is equal to 1 (different representations of exactly the same number are very common in mathematics).
Walter Roberson on 24 Mar 2020
The .9 repeated == 1 is however only true for infinite precision. If you were to stop at 10^43 decimal places, then you would indeed have a value mathematically less than one.
Physically, if you were to stop at 1e-43 metres, it is unclear what the result would be. There have been serious proposals that space as we know it does not exist below about that small, that everything is a quantum foam.
R2019b
http://www.mathworks.com/help/econ/the-model-selection-process.html?requestedDomain=www.mathworks.com&nocookie=true
# Documentation
## Econometric Modeling
### Model Selection
A probabilistic time series model is necessary for a wide variety of analysis goals, including regression inference, forecasting, and Monte Carlo simulation. When selecting a model, aim to find the most parsimonious model that adequately describes your data. A simple model is easier to estimate, forecast, and interpret.
• Specification tests help you identify one or more model families that could plausibly describe the data generating process.
• Model comparisons help you compare the fit of competing models, with penalties for complexity.
• Goodness-of-fit checks help you assess the in-sample adequacy of your model, verify that all model assumptions hold, and evaluate out-of-sample forecast performance.
Model selection is an iterative process. When goodness-of-fit checks suggest model assumptions are not satisfied—or the predictive performance of the model is not satisfactory—consider making model adjustments. Additional specification tests, model comparisons, and goodness-of-fit checks help guide this process.
### Econometrics Toolbox Features
Modeling Questions | Features | Related Functions
What is the dimension of my response variable?
• The conditional mean and variance models, regression models with ARIMA errors, and Bayesian linear regression models in this toolbox are for modeling univariate, discrete-time data.
• Separate models are available for multivariate, discrete-time data, such as VAR and VEC models.
• State-space models support univariate or multivariate response variables.
Is my time series stationary?
• Stationarity tests are available. If your data is not stationary, consider transforming your data. Stationarity is the foundation of many time series models.
• Or, consider using a nonstationary ARIMA model if there is evidence of a unit root in your data.
Does my time series have a unit root?
• Unit root tests are available. Evidence in favor of a unit root suggests your data is difference stationary.
• You can difference a series with a unit root until it is stationary, or model it using a nonstationary ARIMA model.
How can I handle seasonal effects?
• You can deseasonalize (seasonally adjust) your data. Use seasonal filters or regression models to estimate the seasonal component.
• Seasonal ARIMA models use seasonal differencing to remove seasonal effects. You can also include seasonal lags to model seasonal autocorrelation (both additively and multiplicatively).
Is my data autocorrelated?
• Sample autocorrelation and partial autocorrelation functions help identify autocorrelation.
• Conduct a Ljung-Box Q-test to test autocorrelations at several lags jointly.
• If autocorrelation is present, consider using a conditional mean model.
• For regression models with autocorrelated errors, consider using FGLS or HAC estimators. If the error model structure is an ARIMA model, consider using a regression model with ARIMA errors.
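The autocorrelation checks in this row can be sketched without any toolbox at all; the following Python fragment computes sample autocorrelations and the Ljung-Box Q statistic from their textbook definitions (function names here are my own, not toolbox APIs):

```python
def sample_acf(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    denom = sum((v - mean) ** 2 for v in x)
    num = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return num / denom

def ljung_box_q(x, h):
    """Ljung-Box Q = n(n+2) * sum_{k=1..h} acf(k)^2 / (n-k).

    Under the null of no autocorrelation, Q is approximately
    chi-squared distributed with h degrees of freedom.
    """
    n = len(x)
    return n * (n + 2) * sum(sample_acf(x, k) ** 2 / (n - k)
                             for k in range(1, h + 1))

# A trending series is strongly autocorrelated at lag 1 ...
trend = [float(t) for t in range(1, 21)]
print(sample_acf(trend, 1))   # high positive lag-1 autocorrelation
print(ljung_box_q(trend, 5))  # large Q => reject "no autocorrelation"
```

A large Q relative to the chi-squared critical value for h degrees of freedom is evidence of autocorrelation, which is what would motivate a conditional mean model or FGLS/HAC estimators.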
What if my data is heteroscedastic (exhibits volatility clustering)?
• Looking for autocorrelation in the squared residual series is one way to detect conditional heteroscedasticity.
• Engle’s ARCH test evaluates evidence against the null of independent innovations in favor of an ARCH model alternative.
• To model conditional heteroscedasticity, consider using a conditional variance model.
• For regression models that exhibit heteroscedastic errors, consider using FGLS or HAC estimators.
Is there an alternative to a Gaussian innovation distribution for leptokurtic data?
• You can use a Student’s t distribution to model fatter tails than a Gaussian distribution (excess kurtosis).
• You can specify a t innovation distribution for all conditional mean and variance models, and ARIMA error models in Econometrics Toolbox™.
• You can estimate the degrees of freedom of the t distribution along with other model parameters.
How do I decide between several model fits?
• You can compare nested models using misspecification tests, such as the likelihood ratio test, Wald’s test, or Lagrange multiplier test.
• Information criteria, such as AIC or BIC, compare model fit with a penalty for complexity.
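Both criteria are simple formulas over the fitted log-likelihood; a generic sketch (plain Python with hypothetical fit results, not toolbox code; smaller values indicate the preferred model):

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L)."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

# Hypothetical fits: model B gains little likelihood for two extra parameters.
model_a = {"loglik": -100.0, "k": 3}
model_b = {"loglik": -99.5, "k": 5}
n = 50
for name, m in [("A", model_a), ("B", model_b)]:
    print(name, aic(m["loglik"], m["k"]), bic(m["loglik"], m["k"], n))
# Both criteria penalize model B's extra parameters here, so A is preferred.
```

Note that BIC's `ln(n)` penalty grows with the sample size, so it tends to prefer smaller models than AIC on large samples.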
Do I have two or more time series that are cointegrated?
• The Johansen and Engle-Granger cointegration tests assess evidence of cointegration.
• Consider using the VEC model for modeling multivariate, cointegrated series.
• Also consider cointegration when regressing time series. If present, it can introduce spurious regression effects.
What if I want to include predictor variables?
• ARIMAX, VARX, regression models with ARIMA errors, and Bayesian linear regression models are available in this toolbox.
• State-space models support predictor data.
What if I want to implement regression, but the classical linear model assumptions might not apply?
• Regression models with ARIMA errors are available in this toolbox.
• Regress robustly using FGLS or HAC estimators.
• Use Bayesian linear regression.
• For a series of examples on time series regression techniques that illustrate common principles and tasks in time series regression modeling, see Econometrics Toolbox Examples.
• For more regression options, see Statistics and Machine Learning Toolbox™ documentation.
How do I use the Kalman filter to analyze several unobservable, linear, stochastic time series and several observable, linear, stochastic functions of them?
Standard, linear state-space modeling is available in this toolbox.
https://brilliant.org/problems/limited-trigonometry/
# Limited Trigonometry- I
Calculus Level 4
$f(x)=\tan\left( \frac{x}{2} \right) \cdot \sec\left( x \right)+\tan\left( \frac{x}{2^2} \right) \cdot \sec\left( \frac{x}{2} \right)+\cdots+\tan\left( \frac{x}{2^n} \right) \cdot \sec\left( \frac{x}{2^{n-1}} \right)$ $g(x)=f(x)+\tan\left( \frac{x}{2^n} \right)$
where $$x \in \left( -\frac{\pi}{2} , \frac{\pi}{2} \right)$$ and $$n \in \mathbb{N}$$
Evaluate the limit : $$\displaystyle \lim_{x \to 0} \left( \dfrac{g(x)}{x} \right)^{\frac{1}{x}}$$
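A possible line of attack (my sketch, not part of the posted problem): each summand telescopes via the identity $\tan\theta-\tan\frac{\theta}{2}=\tan\frac{\theta}{2}\sec\theta$, which collapses the sum:

```latex
% Telescoping each summand (k = 1, ..., n):
\tan\left(\frac{x}{2^k}\right)\sec\left(\frac{x}{2^{k-1}}\right)
   = \tan\left(\frac{x}{2^{k-1}}\right) - \tan\left(\frac{x}{2^k}\right)
\implies f(x) = \tan x - \tan\left(\frac{x}{2^n}\right),
\qquad g(x) = f(x) + \tan\left(\frac{x}{2^n}\right) = \tan x .
% The limit therefore reduces to
\lim_{x\to 0}\left(\frac{\tan x}{x}\right)^{1/x}
   = \lim_{x\to 0}\exp\left(\frac{1}{x}\ln\left(1+\frac{x^2}{3}+O(x^4)\right)\right)
   = e^{0} = 1 .
```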
http://forums.xkcd.com/viewtopic.php?f=17&t=82597&p=2966877
## chi² and wolfram alpha
lorb
### chi² and wolfram alpha
I want to know the p-value for a given chi² value (209) with given degrees of freedom (9). How do I get Wolfram Alpha to tell me that? (Or any other free tool.)
And yes, this is homework, but my background is social sciences and my profs will be happy enough if I tell them I can reject the null hypothesis with a probability to err of less than 0.1% (which I know is true because 209 > 27.88). I am just curious how small my p-value is exactly.
Please be gracious in judging my english. (I am not a native speaker/writer.)
http://decodedarfur.org/
NathanielJ
### Re: chi² and wolfram alpha
For most problems like this, you could use this tool. However, the p-value is so small it just spits out 0.
Heck, even MATLAB spits out 0 in this case, meaning that the probability is smaller than 10^-16 (and I would guess that it's actually in the ballpark of 10^-35).
Homepage: http://www.njohnston.ca
Conway's Game of Life: http://www.conwaylife.com
Slpee
### Re: chi² and wolfram alpha
Actually, if you've ever worked with the TI-83/84 or a similar TI device, you can usually type the same functions and arguments into wolfram alpha as you would on those devices. In your case that would be chi^2cdf(209,1e99,9): the 209 is the lower bound, 1e99 is an arbitrarily large number for the upper bound, and 9 is your degrees of freedom (in case you aren't familiar with the TI arguments). Wolfram seems to understand all that when I enter it (http://www.wolframalpha.com/input/?i=chi%5E2cdf%28209%2C1e99%2C9%29) but it won't give me a p-value. I suspect this means it's too small even for wolfram to bother finding it, so sorry mate, I don't know what to tell you.
"Are you insinuating that a bunch of googly eyes hot-glued to a Cheeto constitutes a sapient being?"
Can't let you brew that Starbucks!
gmalivuk
GNU Terry Pratchett
### Re: chi² and wolfram alpha
Mathematica says it's 4.3e-40
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome
(he/him/his)
gorcee
### Re: chi² and wolfram alpha
gmalivuk wrote:Mathematica says it's 4.3e-40
Just to add on this, I would be careful to cite such a number (obviously it's unnecessary to do so) because unless you have detailed knowledge of the specific algorithm used to compute the value, you cannot say for certain that the algorithm properly handles underflow.
For example, in many software routines, if you generate an nxn rank-1 matrix where the columns are all multiples of some nominal column, permutation of the order of those columns can lead to different results in, say, an SVD algorithm.
In general, be cautious with any result that gives you something under machine precision w/r.t. the input arguments. This doesn't imply that the answer is wrong, but before you rely on that number, you should be certain to ensure that underflow isn't corrupting those results.
eta oin shrdlu
### Re: chi² and wolfram alpha
gorcee wrote:
gmalivuk wrote:Mathematica says it's 4.3e-40
Just to add on this, I would be careful to cite such a number (obviously it's unnecessary to do so) because unless you have detailed knowledge of the specific algorithm used to compute the value, you cannot say for certain that the algorithm properly handles underflow.
Mathematica is usually pretty good about these things. At any rate, it's right in this case. One way to check this is to get bounds for the chi-squared PDF integral, which in this case is $\frac{1}{2^{9/2}\Gamma(4.5)}\int_X^\infty x^{7/2} e^{-x/2} \, dx$ with X=209. Four applications of integration by parts reduce the exponent of x in the integrand below 0:$=\frac{1}{2^{9/2}\Gamma(4.5)}\left[(2X^{7/2}+14X^{5/2}+70X^{3/2}+210X^{1/2})e^{-X/2}+105\int_X^\infty x^{-1/2}e^{-x/2}\,dx\right]\,.$The boundary terms evaluate to 4.29E-40 as gmalivuk says; the remaining integral gives the error term, which can be bounded as$0<105\int_X^\infty x^{-1/2}e^{-x/2}\,dx<105X^{-1/2}\int_X^\infty e^{-x/2}\,dx=210X^{-1/2}e^{-X/2}\approx6.0\cdot10^{-45}\,.$
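That boundary-term expression needs nothing beyond a standard library to evaluate; a quick Python check (variable names mine) reproduces the value quoted in the thread:

```python
import math

X = 209.0  # chi-squared statistic, with 9 degrees of freedom

# Upper-tail probability from the integration-by-parts expansion above:
# p ≈ (2X^{7/2} + 14X^{5/2} + 70X^{3/2} + 210X^{1/2}) e^{-X/2} / (2^{9/2} Γ(4.5))
boundary = (2 * X**3.5 + 14 * X**2.5 + 70 * X**1.5 + 210 * X**0.5) * math.exp(-X / 2)
p = boundary / (2**4.5 * math.gamma(4.5))
print(p)  # ~4.29e-40, matching the values quoted in the thread
```

The neglected error term is around 40 orders of magnitude smaller than `p` itself, and all intermediate values stay comfortably above the double-precision underflow threshold, so plain floats suffice here.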
gmalivuk
### Re: chi² and wolfram alpha
Yeah, the inputs were infinite-precision integers, in which case Mathematica usually gives a machine precision number at whatever accuracy is required. (Note that 10^-44 is a couple hundred orders of magnitude greater than the minimum machine number, which is where underflow happens.)
clickclack
### Re: chi² and wolfram alpha
Entering "chi^2 distribution 9 degrees of freedom critical value 209" into wolfram alpha a closed form expression for the right tail-probability, as 1 - Q(9/2, 0, 209/2). Clicking that expression, it makes another query, and returns a numerical answer, 4.2868...e-40.
gmalivuk
### Re: chi² and wolfram alpha
Well yeah, presumably Mathematica and Alpha use the same implementation of the same algorithm for most things, as they're both Wolfram products.
lorb
### Re: chi² and wolfram alpha
Thank you all for your input.
You satisfied my curiosity quite well.
http://www.eeweb.com/blog/extreme_circuits/12-volt-car-battery-charger-circuit-schematic
# Extreme Circuits
Circuit Design Blogger
# 12 Volt Car Battery Charger Circuit Schematic
Unlike many units, this battery charger continuously charges at maximum current, tapering off only near full battery voltage. In this unit, the full load current of the supply transformer/rectifier section was 4.4A. It tapers off to 4A at 13.5V, 3A at 14.0V, 2A at 14.5V and 0A at 15.0V.
Circuit operation:
Transistor Q1, diodes D1-D3 and resistor R1 form a simple constant current source. R1 effectively sets the current through Q1 – the voltage across this resistor plus Q1's emitter-base voltage is equal to the voltage across D1-D3. Assuming 0.7V across each diode and across Q1's base-emitter junction, the current through R1 is approximately 1.4/0.34 = 4.1A. IC ensures that Q1 (and thus the constant current source) is turned on.
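The bias arithmetic in that paragraph can be checked in a couple of lines (a sketch; component values are from the article, the helper names are mine):

```python
V_DIODE = 0.7      # assumed forward drop per diode, volts
V_BE = 0.7         # assumed Q1 base-emitter drop, volts
N_DIODES = 3       # D1-D3
R1 = 0.34          # ohms

# Voltage across R1 = (voltage across D1-D3) - Vbe
v_r1 = N_DIODES * V_DIODE - V_BE       # 1.4 V
i_charge = v_r1 / R1                   # constant charge current
print(round(i_charge, 1))              # → 4.1 A, as stated in the article
```

Because the diode string pins the base voltage, the emitter resistor alone sets this current, which is why the charger holds roughly 4 A until the battery voltage rises.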
When the battery has fully charged, the current through IC drops to a very low value and so Q1 turns off (since there is no longer any base-emitter current). R2 limits the current through IC. It allows enough current to flow through the regulator so that Q1 is fully on for battery voltages up to about 13.5V. Decreasing the value of R2 effectively increases the final battery voltage by raising the current cutoff point. Conversely, a diode in series with one of the battery leads will reduce the fully-charged voltage by about 0.7V.
Circuit diagram:
#### 12 Volt Car Battery Charger Circuit Diagram
Parts:
##### Miscellaneous
* B1 = 12 Volt Battery
Notes:
• The charger's input voltage is 20 volts AC.
• R1 and R2 are high-wattage resistors (2 W, 3 W, 5 W or above); select the wattage to suit.
• Q1 and the IC require a good heatsink. If they are mounted on the same heatsink, the circuit will throttle back if Q1 gets too hot.
3 years ago: I am having problems finding a data sheet for the MJ1504 transistor. If someone could give me a link; I'm also looking for an equivalent for it.
2 years ago: Does it work after construction?
2 years ago: It is one of the very useful circuits in real life. Well explained. I have seen another advanced level circuit for this project in the post: http://www.electronicshub.org/car-battery-charger-circuit/. This circuit seems very easy to understand and use it. Thanks for sharing.
1 year ago: What is the Amp value for input current?
8 months ago: this is great! thanks. i do have a few questions.
(1)Does this mean i can leave a lead acid battery permanently connected?
(2)Can i use a series of relays to modify this circuit into a back up supply for when ac power is off?
i am planning to make an autogate control board so i plan to incorporate this circuit. i basically need to charge a 12v battery and supply power to an arduino (12v to 5v step down obviously). the battery will only kick in when there is power loss. otherwise the battery will always be connected. I very much appreciate any advice.
https://www.mathworks.com/matlabcentral/cody/problems/233-reverse-the-vector/solutions/2012038
Cody
# Problem 233. Reverse the vector
Solution 2012038
Submitted on 10 Nov 2019 by saeedeh ostovari
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Pass
x = 1; y_correct = 1; assert(isequal(reverseVector(x),y_correct))
2 Pass
x = -10:1; y_correct = 1:-1:-10; assert(isequal(reverseVector(x),y_correct))
3 Pass
x = 'able was i ere i saw elba'; y_correct = 'able was i ere i saw elba'; assert(isequal(reverseVector(x),y_correct))
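The solution itself is hidden, but the task is simple to sketch in another language; here is a hypothetical Python `reverse_vector` passing analogues of the three tests above (slicing with a step of -1 reverses any sequence, including strings):

```python
def reverse_vector(x):
    """Return the sequence in reverse order (works for lists and strings)."""
    return x[::-1]

# Analogues of the Cody test suite:
assert reverse_vector([1]) == [1]
assert reverse_vector(list(range(-10, 2))) == list(range(1, -11, -1))
assert reverse_vector('able was i ere i saw elba') == 'able was i ere i saw elba'  # palindrome
print("all tests pass")
```

The third test passes for the same reason as in MATLAB: the string is a palindrome, so reversing it returns the original.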
http://oeis.org/A193475/internal
A193475 a(n) = 4*16^n - 2*4^n. 4
%I
%S 2,56,992,16256,261632,4192256,67100672,1073709056,17179738112,
%T 274877382656,4398044413952,70368735789056,1125899873288192,
%U 18014398375264256,288230375614840832,4611686016279904256,73786976286248271872,1180591620683051565056
%N a(n) = 4*16^n - 2*4^n.
%H Peter Luschny, <a href="http://oeis.org/wiki/User:Peter_Luschny/TheLostBernoulliNumbers">The lost Bernoulli numbers.</a>
%F Recurrence: a(0) = 2, a(1) = 56, a(n) = 20*a(n-1) - 64*a(n-2).
%F G.f.: (16*x+2)/(64*x^2-20*x+1).
%F E.g.f.: 4*exp(16*x) - 2*exp(4*x).
%p A193475 := proc(n) 2^(2*n+1); %^2; % - %% end: seq (A193475(n), n=0..20);
%K nonn
%O 0,1
%A Peter Luschny, Aug 07 2011
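The closed form, the recurrence, and the listed data are easy to cross-check in a few lines of Python (a quick verification sketch, not part of the OEIS entry):

```python
def a(n):
    """A193475: a(n) = 4*16^n - 2*4^n."""
    return 4 * 16**n - 2 * 4**n

# The first terms match the %S line of the entry.
terms = [a(n) for n in range(6)]
print(terms)  # [2, 56, 992, 16256, 261632, 4192256]

# The stated recurrence a(n) = 20*a(n-1) - 64*a(n-2) also holds.
assert all(a(n) == 20 * a(n - 1) - 64 * a(n - 2) for n in range(2, 20))
```

Python's arbitrary-precision integers make this exact even for the large later terms in the %T and %U lines.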
Last modified February 17 12:32 EST 2020. Contains 331996 sequences. (Running on oeis4.)
https://developer.arm.com/docs/ddi0602/c/a64-sve-instructions-alphabetic-order/fcmla-vectors-floating-point-complex-multiply-add-with-rotate-predicated
## FCMLA (vectors)
Floating-point complex multiply-add with rotate (predicated).
Multiply the duplicated real components for rotations 0 and 180, or imaginary components for rotations 90 and 270, of the floating-point complex numbers in the first source vector by the corresponding complex number in the second source vector rotated by 0, 90, 180 or 270 degrees in the direction from the positive real axis towards the positive imaginary axis, when considered in polar representation.
Then destructively add the products to the corresponding components of the complex numbers in the addend and destination vector, without intermediate rounding.
These transformations permit the creation of a variety of multiply-add and multiply-subtract operations on complex numbers by combining two of these instructions with the same vector operands but with rotations that are 90 degrees apart.
Each complex number is represented in a vector register as an even/odd pair of elements with the real part in the even-numbered element and the imaginary part in the odd-numbered element. Inactive elements in the destination vector register remain unmodified.
| 31-24 | 23-22 | 21 | 20-16 | 15 | 14-13 | 12-10 | 9-5 | 4-0 |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 0 1 1 0 0 1 0 0 | size | 0 | Zm | 0 | rot | Pg | Zn | Zda |
#### SVE
FCMLA <Zda>.<T>, <Pg>/M, <Zn>.<T>, <Zm>.<T>, <const>
```
if !HaveSVE() then UNDEFINED;
if size == '00' then UNDEFINED;
integer esize = 8 << UInt(size);
integer g = UInt(Pg);
integer n = UInt(Zn);
integer m = UInt(Zm);
integer da = UInt(Zda);
integer sel_a = UInt(rot<0>);
integer sel_b = UInt(NOT(rot<0>));
boolean neg_i = (rot<1> == '1');
boolean neg_r = (rot<0> != rot<1>);
```
### Assembler Symbols
<Zda> Is the name of the third source and destination scalable vector register, encoded in the "Zda" field.
<T> Is the size specifier, encoded in size:
size <T>
00 RESERVED
01 H
10 S
11 D
<Pg> Is the name of the governing scalable predicate register P0-P7, encoded in the "Pg" field.
<Zn> Is the name of the first source scalable vector register, encoded in the "Zn" field.
<Zm> Is the name of the second source scalable vector register, encoded in the "Zm" field.
<const> Is the const specifier, encoded in rot:
rot <const>
00 #0
01 #90
10 #180
11 #270
### Operation
```
CheckSVEEnabled();
integer pairs = VL DIV (2 * esize);
bits(PL) mask = P[g];
bits(VL) operand1 = Z[n];
bits(VL) operand2 = Z[m];
bits(VL) operand3 = Z[da];
bits(VL) result;
for p = 0 to pairs-1
    addend_r = Elem[operand3, 2 * p + 0, esize];
    addend_i = Elem[operand3, 2 * p + 1, esize];
    elt1_a = Elem[operand1, 2 * p + sel_a, esize];
    elt2_a = Elem[operand2, 2 * p + sel_a, esize];
    elt2_b = Elem[operand2, 2 * p + sel_b, esize];
    if ElemP[mask, 2 * p + 0, esize] == '1' then
        if neg_r then elt2_a = FPNeg(elt2_a);
        addend_r = FPMulAdd(addend_r, elt1_a, elt2_a, FPCR);
    if ElemP[mask, 2 * p + 1, esize] == '1' then
        if neg_i then elt2_b = FPNeg(elt2_b);
        addend_i = FPMulAdd(addend_i, elt1_a, elt2_b, FPCR);
    Elem[result, 2 * p + 0, esize] = addend_r;
    Elem[result, 2 * p + 1, esize] = addend_i;
Z[da] = result;
```
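As a behavioural model (my sketch, not ARM's code), the per-pair arithmetic above can be written in Python with complex numbers. Chaining the rotate-0 and rotate-90 forms yields a full complex multiply-accumulate, and rotate-180 plus rotate-270 a multiply-subtract, as described in the introduction:

```python
def fcmla_pair(acc, a, b, rot):
    """Model FCMLA on one complex element pair (all predicate bits active).

    rot is 0, 90, 180 or 270, matching the <const> operand.
    """
    sel_imag = rot in (90, 270)         # sel_a: use imaginary part of 'a'
    neg_i = rot in (180, 270)           # neg_i: rot<1>
    neg_r = rot in (90, 180)            # neg_r: rot<0> != rot<1>
    elt1_a = a.imag if sel_imag else a.real
    elt2_a = b.imag if sel_imag else b.real   # pairs with the real accumulator
    elt2_b = b.real if sel_imag else b.imag   # pairs with the imaginary accumulator
    if neg_r:
        elt2_a = -elt2_a
    if neg_i:
        elt2_b = -elt2_b
    return complex(acc.real + elt1_a * elt2_a, acc.imag + elt1_a * elt2_b)

a, b = 2 + 3j, 5 + 7j
# rot 0 followed by rot 90 accumulates the full complex product a*b ...
assert fcmla_pair(fcmla_pair(0j, a, b, 0), a, b, 90) == a * b
# ... and rot 180 followed by rot 270 accumulates -a*b.
assert fcmla_pair(fcmla_pair(0j, a, b, 180), a, b, 270) == -(a * b)
```

The model ignores predication and the fused (no intermediate rounding) aspect of FPMulAdd; it only illustrates the element selection and sign handling.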
### Operational information
This instruction might be immediately preceded in program order by a MOVPRFX instruction that conforms to all of the following requirements, otherwise the behavior of either or both instructions is unpredictable:
• The MOVPRFX instruction must specify the same destination register as this instruction.
• The destination register must not refer to architectural register state referenced by any other source operand register of this instruction.
The MOVPRFX instructions that can be used with this instruction are as follows:
• An unpredicated MOVPRFX instruction.
• A predicated MOVPRFX instruction using the same governing predicate register and source element size as this instruction.
https://www.shaalaa.com/question-paper-solution/cbse-cbse-12-mathematics-class-12-2014-2015-outside-delhi-set-1-comtt_10290
# Mathematics 2014-2015 CBSE (Commerce) Class 12 Outside Delhi Set 1 Comtt question paper with PDF download
#### Topics
0.01 Relations and Functions
0.02 Inverse Trigonometric Functions
0.03 Matrices
0.04 Determinants
0.05 Continuity and Differentiability
0.06 Applications of Derivatives
0.07 Integrals
0.08 Applications of the Integrals
0.09 Differential Equations
0.10 Vectors
0.11 Three - Dimensional Geometry
0.12 Linear Programming
0.13 Probability
## Previous Year Question Paper for CBSE Class 12 Mathematics - free PDF Download
Find CBSE Class 12 Mathematics previous year question papers in PDF. CBSE Class 12 Maths question papers are provided here in PDF format, which students may download to boost their preparation for the board exam. Previous year question papers are designed by experts based on the latest revised CBSE Class 12 syllabus. Practicing question papers gives you the confidence to face the board exam with minimum fear and stress, since you get a proper idea of the question paper pattern and marks weightage. CBSE previous year question paper Class 12 PDFs can be downloaded, but Shaalaa also lets you view them online, so downloading is not necessary. As exam dates come closer, anxiety increases and students are confused about where to start. Practicing previous year question papers is the best way to prepare for your board exams and achieve a good score.
Question papers for CBSE Class 12 Mathematics are very useful for students, who can better understand concepts by practicing them regularly. Students should practice the questions provided in the previous year question papers to get better marks in the examination. CBSE Class 12 Maths question papers give an idea of the questions coming in the board exams, and previous years' papers show the sample questions asked by CBSE. By solving the question papers, you can gauge your preparation level and work on your weak areas. It will also help candidates develop time-management skills.
## How CBSE Class 12 Previous Year Question Papers Help Students?
• Students get an idea of the question paper pattern and marking scheme of the exam, so they can manage their exam time and attempt all questions to score more.
• CBSE Class 12 previous year papers show students how one question can be asked in different ways, and which questions are important based on how many times they have appeared in previous exams.
• Question papers help students prepare for the exam.
• Solving previous year question papers will boost students' confidence at exam time and also give an idea of the important questions and topics to prepare for the board exam.
• We also provide important questions from previous year papers, arranged by chapter along with how many times each question has appeared in past exams.
https://yourquickinformation.com/how-hot-does-a-300-watt-bulb-get/
## How hot does a 300 watt bulb get?
The bulbs can reach temperatures ranging from about 970 degrees Fahrenheit for a 300-watt tubular halogen bulb to 1,200 degrees Fahrenheit for a 500-watt tubular halogen bulb.
How much heat does a 250 watt heat lamp put off?
Therefore, if the heat lamp you purchase is rated 250 watts with 10% lighting efficiency, the amount of heat it produces is about 225 watts. Heat lamps are widely used in raising chicks due to the ample amount of heat they provide.
How much heat does a light bulb add to a room?
For example, a 420 sq. ft. room with a 10 ft ceiling, containing 145 kg of air, will need a 40-watt light bulb to burn for one hour to increase the room temperature by just one degree. Think about how many light bulbs you would need to increase the temperature by ten degrees!
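That figure is consistent with a back-of-envelope Q = m·c·ΔT estimate (the room numbers are from the article; the specific heat of air is my assumed value, and the calculation ignores walls, furniture and air exchange):

```python
mass_air_kg = 145          # from the article's example room
c_air = 1005               # specific heat of air, J/(kg*K), approximate
power_w = 40               # the bulb's full electrical power, treated as heat

energy_per_degree = mass_air_kg * c_air          # joules to warm the air 1 K
hours = energy_per_degree / power_w / 3600       # time at 40 W
print(round(hours, 2))     # → 1.01, i.e. roughly one hour per degree
```

So a 40 W bulb really does deliver about the energy needed to warm that much air by one degree in an hour.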
### How much heat does a 100 watt heat bulb produce?
According to the Wikipedia online encyclopaedia, a 100 watt bulb is 2.1% efficient. In other words, it produces about 2 watts of light and 98 watts of heat.
How bright is a 300 watt bulb?
Spec Sheet

| Attribute | Value |
| --- | --- |
| No. | 0397400 |
| Volts | 120 |
| Color Temperature (Kelvin) | 2850 |
| Life (Hours) | 750 |
| Brightness (Lumens) | 5870 |
How much heat do lights give off?
The energy consumed by a 100-watt GLS incandescent bulb produces around 12% heat, 83% IR and only 5% visible light. In contrast, a typical LED might produce 15% visible light and 85% heat. Especially with high-power LEDs, it is essential to remove this heat through efficient thermal management.
## Will a heat lamp keep a dog warm?
A standard 250-watt heat lamp can emit 95°F of heat, which is enough to keep your furry ones warm and protected even in the coldest weather conditions. However, the temperature needs to be regulated constantly and checked on so that it doesn’t get too hot, which can be uncomfortable for your pooch.
How hot is a 150 watt heat bulb?
A 150-watt bulb can produce up to 250°F of radiating surface heat.
Can a light bulb make a room hot?
Tip #3: Change to Cooler CFL and LED Light Bulbs

Sitting under a lamp to read or just to eat dinner can make you feel warm; it can actually increase the temperature of the room. If you're using a regular incandescent bulb, it can get as hot as 500 degrees Fahrenheit, depending on the wattage.
### How many lumens does a 300 watt light bulb put out?
Product Attributes

| Attribute | Value |
| --- | --- |
| Lumens | 5,950 |
| Base Type | R7s |
| Lighting Technology | Halogen |
| Length | 4.71 in. |
| Diameter | 0.31 in. |
How much heat does a light bulb put out?
100 watts of electricity will be converted to at most 8 watts of light (including UV light), and 92% will still come out as heat. So the main factors which determine how much heat a lamp puts out are what type of lamp it is and its wattage.
What happens when a 100 watt light bulb is turned on?
When a 100-watt lamp is turned on, 100 watts of electricity is transformed into 100 watts of light and heat. The same is true for a 50-watt light bulb; 50 watts of electrical energy becomes 50 watts of light and heat. (A watt is a unit of power.) Some lamps are more efficient at producing light than others.
## How can I reduce the heat emitted by my light bulbs?
To reduce the heat emitted by regular incandescent and halogen bulbs, use a lower watt bulb (like 60 watts instead of 100). Fluorescent light bulbs use an entirely different method to create light.
How efficient is a 100 watt light bulb?
According to the Wikipedia online encyclopaedia, a 100 watt bulb is 2.1% efficient. In other words, it produces about 2 watts of light and 98 watts of heat. A halogen lamp is a bit better. For every 100 watts you put in, you get about 3.5 watts of light and 96.5 watts of heat.
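The light/heat split quoted for incandescent and halogen bulbs is easy to tabulate for any wattage. A small sketch using the efficiency figures quoted above (the percentages come from the answers on this page, not from measurement):

```python
def light_heat_split(watts, efficiency):
    """Split electrical input into radiated light vs. waste heat."""
    light = watts * efficiency
    return light, watts - light

# Efficiencies quoted above: incandescent ~2.1%, halogen ~3.5%.
for name, eff in [("incandescent", 0.021), ("halogen", 0.035)]:
    light, heat = light_heat_split(100.0, eff)
    print(f"{name}: {light:.1f} W light, {heat:.1f} W heat")
```

For a 100 W input this reproduces the article's numbers: roughly 2 W of light and 98 W of heat for incandescent, 3.5 W and 96.5 W for halogen.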
# Chapter 2 - Set Theory - 2.5 Survey Problems - Exercise Set 2.5: 19
The number of elements in set B and set C but not set A = 3.
#### Work Step by Step
From the given Venn diagram, region VI represents set B and set C but not set A. The Venn diagram shows that the number of elements in region VI is 3. Therefore the number of elements in set B and set C but not set A is 3.
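The same count can be reproduced with Python's set operations. The element values below are hypothetical, chosen only so that the region in question contains three members (the exercise's actual Venn diagram is not shown here):

```python
# Hypothetical sets in which (B and C but not A) has exactly 3 elements.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6, 7, 8}
C = {4, 6, 7, 8, 9}

region_vi = (B & C) - A            # in both B and C, but not in A
print(region_vi, len(region_vi))   # {8, 6, 7} 3
```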
# Approximation methods for a mixed integer convex optimization problem
I have a convex objective function, e.g., minimizing the negative entropy function. My constraints are also linear. The only issue is that I also have binary variables.
I am currently aware of AIMMS's outer approximation (AOA), which is said to be a good option. My questions are:
1. Does such an outer approximation method use another solver to solve each relaxed problem? For example, if the variables were continuous, then minimizing the negative entropy function would be solved with MOSEK. Do you think solvers like AOA will still use MOSEK but apply some sort of branch-and-bound?
2. What other options than AOA do I have? I prefer using MATLAB and YALMIP.
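For intuition, the outer-approximation idea asked about here can be sketched on a toy one-dimensional problem: evaluate the true convex objective to maintain an incumbent (upper bound), add a gradient cut at each trial point, and minimize the piecewise-linear under-estimator over the integer candidates until the bounds meet. This is an illustration only, not AOA's actual algorithm; a real implementation hands the master problem to a MILP solver instead of enumerating:

```python
import math

def f(x):          # a smooth convex objective (entropy-like term plus a quadratic)
    return x * math.log(x) + (x - 3.0) ** 2

def fprime(x):
    return math.log(x) + 1.0 + 2.0 * (x - 3.0)

def outer_approximation(candidates, tol=1e-9, max_iter=50):
    cuts = []                                  # gradient cuts (xk, f(xk), f'(xk))
    best_x, best_val = None, float("inf")
    x = candidates[0]
    for _ in range(max_iter):
        fx = f(x)
        if fx < best_val:                      # incumbent = upper bound
            best_x, best_val = x, fx
        cuts.append((x, fx, fprime(x)))

        def lower_model(z):                    # piecewise-linear under-estimator
            return max(fk + gk * (z - xk) for xk, fk, gk in cuts)

        # "master problem": here solved by enumeration over the integer grid
        x, bound = min(((z, lower_model(z)) for z in candidates),
                       key=lambda pair: pair[1])
        if best_val - bound <= tol:            # lower bound meets incumbent
            break
    return best_x, best_val

x_star, v_star = outer_approximation(list(range(1, 11)))
print(x_star)                                  # 2, the integer minimizer
```

Each relaxed master problem is linear, which is exactly why schemes like AOA can delegate it to a MILP solver while a convex solver (such as MOSEK) is only needed to evaluate or solve the nonlinear part.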
Mosek 9.x can natively solve mixed-integer exponential cone problems.
Formulate the problem in YALMIP, specifying the binary variables as binvar, and Mosek as the solver. YALMIP will call Mosek to exploit its native mixed-integer exponential cone capability.
Here is a mixed-integer example (mixture of binary and continuous):
A = randn(4,5); b = ones(4,1); % example data so the snippet runs (any A, b defining A*[x;y] <= b)
x = binvar(3,1); % binary
y = sdpvar(2,1); % continuous
optimize([A*[x;y] <= b, 0 <= y <= 1], -entropy([x;y]), sdpsettings('solver','mosek'))
• Wow! That MOSEK... – independentvariable Apr 4 '20 at 16:28
• You can also throw in some Second Order Cone constraints, if you're feeling chipper. BTW, this can also be solved in CVX 2.2, with Mosek 9.x as solver, and will utilize Mosek's native mixed-integer exponential cone capability. CVXPY as well. – Mark L. Stone Apr 4 '20 at 16:32
• Actually I don't have binary variables but a negative one-norm in the objective function. I know that YALMIP logical modelling will hopefully first reformulate this with binary variables and then call MOSEK. – independentvariable Apr 4 '20 at 17:21
• Yes, that works in YALMIP. CVX would require you to manually do the logic (Big M) modeling. – Mark L. Stone Apr 4 '20 at 17:29
https://search.r-project.org/CRAN/refmans/dataquieR/html/acc_univariate_outlier.html | 1,627,259,570,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046151972.40/warc/CC-MAIN-20210726000859-20210726030859-00154.warc.gz | 516,984,526 | 3,093 | acc_univariate_outlier {dataquieR} R Documentation
## Function to identify univariate outliers by four different approaches
### Description
A classical but still popular approach to detect univariate outliers is the boxplot method introduced by Tukey 1977. The boxplot is a simple graphical tool to display information about continuous univariate data (e.g., median, lower and upper quartile). Outliers are defined as values deviating more than 1.5 × IQR from the 1st (Q25) or 3rd (Q75) quartile. The strength of Tukey's method is that it makes no distributional assumptions and thus is also applicable to skewed or non mound-shaped data Marsh and Seo, 2006. Nevertheless, this method tends to identify frequent measurements which are falsely interpreted as true outliers.
A somewhat more conservative approach in terms of symmetric and/or normal distributions is the 6σ approach, i.e. any measurement not in the interval mean(x) ± 3σ is considered an outlier.
Both methods mentioned above are not ideally suited to skewed distributions. As many biomarkers, such as laboratory measurements, present skewed distributions, the methods above may be insufficient. The approach of Hubert and Vandervieren 2008 adjusts the boxplot for the skewness of the distribution. This approach is implemented in several R packages, such as robustbase::mc, which is used in this implementation of dataquieR.
Another completely heuristic approach is also included to identify outliers. The approach is based on the assumption that the distances between measurements of the same underlying distribution should be homogeneous. For comprehension of this approach:
• consider an ordered sequence of all measurements.
• between these measurements all distances are calculated.
• the occurrence of larger distances between two neighboring measurements may then indicate a distortion of the data. For the heuristic definition of a large distance, 1 × σ has been chosen.
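Two of the four approaches are simple enough to sketch outside of R. A minimal pure-Python illustration of the Tukey fences and of the distance ("sigma-gap") heuristic (this is the idea only, not the dataquieR implementation):

```python
import statistics as st

def tukey_outliers(xs, k=1.5):
    """Values beyond Q25 - k*IQR or Q75 + k*IQR (Tukey 1977)."""
    q1, _, q3 = st.quantiles(xs, n=4)
    iqr = q3 - q1
    return [x for x in xs if x < q1 - k * iqr or x > q3 + k * iqr]

def sigma_gap_flags(xs, factor=1.0):
    """Positions (in sorted order) where neighbours are more than factor*sigma apart."""
    s = st.stdev(xs)
    ordered = sorted(xs)
    return [i for i, (a, b) in enumerate(zip(ordered, ordered[1:])) if b - a > factor * s]

data = [10, 11, 12, 11, 10, 12, 11, 10, 11, 40]
print(tukey_outliers(data))    # [40]
print(sigma_gap_flags(data))   # [8]: a large jump just before the largest value
```

Both rules flag the same suspicious value here; the package combines four such rules and only grades a variable once `n_rules` of them agree.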
### Usage
acc_univariate_outlier(
  resp_vars = NULL,
  label_col,
  study_data,
  meta_data,
  exclude_roles,
  n_rules = 4
)
### Arguments
resp_vars (variable list): the name of the continuous measurement variable
label_col (variable attribute): the name of the column in the metadata with labels of variables
study_data (data.frame): the data frame that contains the measurements
meta_data (data.frame): the data frame that contains metadata attributes of study data
exclude_roles (variable roles): a character (vector) of variable roles not included
n_rules (integer, from 1 to 4): the no. of rules that must be violated to flag a variable as containing outliers. The default is 4, i.e. all.
### Value
a list with:
• SummaryTable: data.frame with the columns Variables, Mean, SD, Median, Skewness, Tukey (N), 6-Sigma (N), Hubert (N), Sigma-gap (N), Most likely (N), To low (N), To high (N), Grading
• SummaryPlotList: ggplot2 univariate outlier plots
### ALGORITHM OF THIS IMPLEMENTATION:
• Select all variables of type float in the study data
• Remove missing codes from the study data (if defined in the metadata)
• Remove measurements deviating from limits defined in the metadata
• Identify outliers according to the approaches of Tukey (Tukey 1977), SixSigma (-Bakar et al. 2006), Hubert (Hubert and Vandervieren 2008), and SigmaGap (heuristic)
• An output data frame is generated which indicates the no. of possible outliers, the direction of deviations (to low, to high) for all methods, and a summary score which sums up the deviations over the different rules
• A scatter plot is generated for all examined variables, flagging observations according to the no. of violated rules (step 5).
# basically domestic electrical wiring is a parallel connection
Domestic electrical wiring is basically a: (a) series connection (b) parallel connection (c) combination of series and parallel connections (d) series connection within each room and parallel connection elsewhere. The answer: a parallel connection.

How To Wire Lights in Parallel? The common household circuits used in an electrical wiring installation are (and should be) in parallel. Mostly, switches, outlet receptacles, light points, etc. are connected in parallel through the hot and neutral wires, so that the power supply to the other electrical devices and appliances is maintained in case one of them fails.

Why are domestic appliances connected in parallel? In any domestic home which has appliances connected to a mains electrical supply, each appliance is connected to one "leg" of the supply in parallel with the other appliances.

Electrical Circuit Basics: Series vs. Parallel Circuits. As electrons flow through the wire loop from the source (hot wires) and back to the source (neutral wires), the current can power lights or other devices installed within the loop. Any interruption in the pathway (such as a switch being opened) stops the flow of electrical current. Why is a parallel arrangement used in domestic wiring?
In parallel circuits, each electrical appliance gets the same voltage (220 V) as that of the power supply line. In the parallel connection of electrical appliances, the overall resistance of the household circuit is also reduced, due to which the current from the power supply is high.

House wiring or home wiring connection diagram: this video shows and explains, with diagrams, how the connections for house wiring or room wiring are made...

Series and parallel circuits: a circuit composed solely of components connected in series is known as a series circuit; likewise, one connected completely in parallel is known as a parallel circuit. In a series circuit, the current that flows through each of the components is the same, and the voltage across the circuit is the sum of the individual voltage drops across each component. [1]

Why is a parallel arrangement used for domestic circuits? All the domestic appliances work at the supply voltage, hence it is obvious to use a parallel connection; otherwise the voltage requirement wouldn't be met. You also don't want the whole house to go down because of a faulty appliance, which is what happens when you use a series circuit. Wiring in parallel is easier, less complicated and cost effective too.

What is a pilot wire in electrical engineering? A pilot wire is a communication cable between two relays located at different ends. Whenever a transmission line or a piece of equipment is to be protected by using a differential relay, a wire is connected between the CTs (current transformers), which are... Why are parallel connections used in household wiring?
Parallel wiring in homes is normally wired as a ring circuit, so as to give a variable current rating with minimal voltage drop: you can have one or multiple appliances connected with minimum voltage drop as the current is increased, and it also means you can have more outlet points per circuit than a single line of points.

Conducting Electrical House Wiring: Easy Tips & Layouts. Want some easy clues for doing electrical house wiring quickly? The article explains, through simple line diagrams, how to wire up flawlessly the different electrical appliances and gadgets commonly used in houses on mains power. The quick grasping tips provided here can certainly be very useful for newbies in the field.

Series and Parallel Connection of Two Bulbs | Which will glow Brighter | Basic electrical connection: in this video I explain the series and parallel connection of two bulbs, with a practical demonstration as well as a circuit diagram, and also how to connect two bulbs in series and parallel.

Help for Understanding Simple Home Electrical Wiring Diagrams: the idea sounds great, as it gives you the freedom to customize the design of your home wiring layout, and also helps in saving quite a lot of money. But this is not possible before you are well versed with the basics of electrical wiring and know exactly how to chalk out correct home electrical wiring diagrams.
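The voltage argument made in these snippets can be checked with a little Ohm's-law arithmetic. The numbers below are illustrative only: a 220 V supply and two appliances modeled as plain resistors:

```python
SUPPLY_V = 220.0
r1, r2 = 110.0, 55.0                 # two appliances modeled as resistors (ohms)

# Parallel: both appliances see the full supply voltage; branch currents add,
# so the equivalent resistance is lower and the total current is higher.
i1, i2 = SUPPLY_V / r1, SUPPLY_V / r2
parallel_total_i = i1 + i2                   # 2 A + 4 A = 6 A
r_parallel = SUPPLY_V / parallel_total_i     # ~36.7 ohms, below either resistor

# Series: one current everywhere, and the 220 V is split between the two,
# so neither appliance receives its rated voltage.
series_i = SUPPLY_V / (r1 + r2)              # ~1.33 A
v1, v2 = series_i * r1, series_i * r2        # ~146.7 V and ~73.3 V

print(parallel_total_i, round(v1, 1), round(v2, 1))
```

This reproduces the two claims above: in parallel every appliance gets the full 220 V while the overall resistance drops (total current rises); in series the voltages split and neither device is properly powered.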
## Doctoral Theses
### Some Problems in Differential and Subdifferential Calculus of Matrices.
5-28-2014
5-28-2015
#### Institute Name (Publisher)
Indian Statistical Institute
Doctoral Thesis
#### Degree Name
Doctor of Philosophy
Mathematics
#### Department
Theoretical Statistics and Mathematics Unit (TSMU-Delhi)
#### Supervisor
Bhatia, Rajendra (TSMU-Delhi; ISI)
#### Abstract (Summary of the Work)
A central problem in many subjects like matrix analysis, perturbation theory, numerical analysis and physics is to study the effect of small changes in a matrix A on a function f(A). Among much studied functions on the space of matrices are trace, determinant, permanent, eigenvalues, norms. These are real or complex valued functions. In addition, there are some interesting functions that are matrix valued. For example, the (matrix) absolute value, tensor power, antisymmetric tensor power, symmetric tensor power. When a function is differentiable, one of the ways to study the above problem is by using the derivative of f at A, denoted by Df(A). In order to obtain first order perturbation bounds, it is helpful to have information about ‖Df(A)‖. In general, finding the exact value of the norm of any operator is not an easy task. It might be easier and adequate to find good estimates on ‖Df(A)‖. Higher order perturbation bounds can be obtained using the norms of the higher order derivatives. Some interesting functions like norms are not differentiable at some points. But they possess the useful property of being convex. In such a case, the notion of subderivative is used in place of the derivative. This thesis consists of two parts. In one of them, we study (higher order) derivatives of the maps that take a matrix to its kth tensor power, kth antisymmetric tensor power and kth symmetric tensor power. We obtain explicit formulas for these derivatives and compute their norms. We also obtain expressions for the map that takes a matrix to its permanent. In the other part, we study the subdifferentials of norm functions and use them to investigate Birkhoff-James orthogonality in the space of matrices. These results are then applied to obtain some distance formulas. Such formulas have been of interest to many mathematicians. Let M(n) denote the space of n×n complex matrices. Let A(i|j) denote the (n−1)×(n−1) submatrix obtained from A by deleting its ith row and jth column.
Let det : M(n) → C be the map that takes a matrix A to its determinant. This map is differentiable and the famous Jacobi formula gives its derivative as

D det(A)(X) = tr(adj(A) X),    (0.1)
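Formula (0.1) is easy to sanity-check numerically in the 2×2 case, where the adjugate has a closed form. A quick illustration (not part of the thesis; the matrices are arbitrary test values):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def adj2(m):
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

def tr_prod(p, q):
    """tr(P Q) for 2x2 matrices, without forming the full product."""
    return sum(p[i][k] * q[k][i] for i in range(2) for k in range(2))

A = [[2.0, 1.0], [0.5, 3.0]]
X = [[0.3, -1.0], [2.0, 0.7]]
h = 1e-6

# directional derivative of det at A in direction X, by finite differences
A_h = [[A[i][j] + h * X[i][j] for j in range(2)] for i in range(2)]
numeric = (det2(A_h) - det2(A)) / h
exact = tr_prod(adj2(A), X)        # Jacobi: D det(A)(X) = tr(adj(A) X)
print(round(exact, 6))             # 0.8, and the finite difference agrees
```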
ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843053
ISILib-TH413
# MOOREHW9 - Chapter 13 Chemical Kinetics Rates of Reactions...
Chapter 13: Chemical Kinetics: Rates of Reactions

Three factors that can affect the rate of a chemical reaction are described at the beginning of Section 13.3. They are the concentrations of reactants, the temperature the reaction is run at, and the presence or absence of a catalyst. Other factors may also be involved. Some of these factors may not affect the rate of specific reactions.

Answer: (a) 3 × 10⁻²⁰ (b) 4 × 10⁻¹⁶ (c) 4 × 10⁻¹⁰ (d) 1.9 × 10⁻⁶

Strategy and Explanation: Given the activation energy and several temperatures, determine the fraction of molecules whose energies would be energetic enough to react. In Equation 13.3 of Section 13.5, the exponential term, e^(−Ea/RT), is described as the fraction of sufficiently energetic molecules. Convert temperatures to Kelvin: °C + 273.15 = Kelvin. An example of the calculation is shown here for answer (a): 100. °C + 273.15 = 373 K

Fraction = e^(−Ea/RT) = e^(−(139 kJ/mol)/[(0.008314 kJ/(mol·K))(373 K)]) = e^(−44.8) ≈ 3 × 10⁻²⁰

| T (K) | Fraction of sufficiently energetic molecules |
| --- | --- |
| (a) 373 | e^(−44.8) = 3 × 10⁻²⁰ |
| (b) 473 | e^(−35.3) = 4 × 10⁻¹⁶ |
| (c) 773 | e^(−21.6) = 4 × 10⁻¹⁰ |
| (d) 1273 | e^(−13.2) = 1.9 × 10⁻⁶ |

Reasonable Answer Check: The fraction of molecules with enough energy to react increases with increasing temperature.

65. Answer: See graphs below.

Strategy and Explanation: Draw these diagrams using the information described in the solution to the preceding question and in Section 13.4.

(a) Ea(reverse) = Ea(forward) − ΔH = (75 kJ/mol) − (−145 kJ/mol) = 220. kJ/mol

[Energy diagram (a): barrier of 75 kJ/mol above the reactants, products 145 kJ/mol below the reactants; energy vs. reaction progress.]

(b) Ea(reverse) = Ea(forward) − ΔH = (65 kJ/mol) − (−70. kJ/mol) = 135 kJ/mol

[Energy diagram (b): barrier of 65 kJ/mol above the reactants, products 70. kJ/mol below the reactants.]

(c) Ea(reverse) = Ea(forward) − ΔH = (85 kJ/mol) − (+10. kJ/mol) = 75 kJ/mol

[Energy diagram (c): barrier of 85 kJ/mol above the reactants, products 10. kJ/mol above the reactants.]

Chapter 18: Thermodynamics: Directionality of Chemical Reactions

25. Answer: (a) Item 2 (b) Item 2 (c) Item 2

Strategy and Explanation: Use the qualitative guidelines for entropy changes described in Section 18.3.

(a) Item 2 has higher entropy since it is identical to item 1 except that its temperature is higher. Molecules at higher temperature have higher entropy. (b) Item 2, dissolved sugar, has higher entropy than item 1, solid sugar, because solute molecules are more random than those in a solid crystal. (c) Item 2, the mixture of water and alcohol together, has higher entropy than item 1, water and alcohol separate. Mixing makes the molecules more random.

27. Answer: (a) NaCl (b) P4 (c) NH4NO3(aq)

Strategy and Explanation: Use the methods described in Problem-Solving Example 18.2 and the qualitative guidelines for entropy changes described in Section 18.3.

(a) Comparing NaCl and CaO, we find that the biggest difference between these two ionic solids is the attractions due to the charges on the ions. According to Coulomb's law, the Ca2+ and O2− ions have greater interaction than the Na+ and Cl− ions, so NaCl has a larger entropy per mol than CaO. (b) P4 molecules have more atoms than Cl2 molecules, so P4 has a larger entropy per mol than Cl2. (c) The solid NH4NO3 crystal is more ordered than the aqueous NH4+ and NO3− ions, so the aqueous NH4NO3 has a larger entropy per mol than the solid NH4NO3.

31. Answer: (a) negative (b) positive (c) negative (d) positive

Strategy and Explanation: Use the qualitative guidelines for entropy changes described in Section 18.3.

(a) The reaction has more gas-phase reactants (2 mol) than gas-phase products (1 mol), so the entropy change will be negative. (b) The reaction has fewer gas-phase reactants than gas-phase products (3 mol), so the entropy change will be positive. (c) The reaction has more gas-phase reactants (4 mol) than gas-phase products (2 mol), so the entropy change will be negative. (d) The reaction has fewer gas-phase reactants (0 mol) than gas-phase products (1 mol), so the entropy change will be positive. ...
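The kinetics fractions above can be reproduced in a few lines, using Ea = 139 kJ/mol as recovered from the worked example (a sketch; the OCR of the original is partly damaged, so treat the activation energy as an assumption):

```python
import math

R = 0.008314  # gas constant, kJ/(mol*K)

def energetic_fraction(ea_kj_per_mol, t_kelvin):
    """Exponential factor e^(-Ea/RT): fraction of molecules able to react."""
    return math.exp(-ea_kj_per_mol / (R * t_kelvin))

for celsius in (100.0, 200.0, 500.0, 1000.0):
    kelvin = celsius + 273.15
    print(f"{kelvin:7.2f} K -> {energetic_fraction(139, kelvin):.1e}")
```

The fractions climb by many orders of magnitude over this temperature range, which is the "reasonable answer check" stated in the solution.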
https://discuss.d2l.ai/t/topic/1751

Linear Algebra (线性代数)
axis = 0 works along the rows: think of it as flattening the rows away until only one row is left, i.e., squashing the array vertically.
axis = 1 works along the columns: think of it as flattening the columns away until only one column is left, i.e., squashing the array horizontally.
https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/geometry-linear-algebraic-ops.html#geometry-of-vectors
Thanks for your suggestions @Jiaomubaobao233 ! Would you like to be a contributor and post a PR for your proposition?
What exactly does `A / A.sum(axis=1)` mean?
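A small NumPy check of the two pictures above, and of why `A / A.sum(axis=1)` usually fails (my sketch; the matrix values are arbitrary):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]

col_sums = A.sum(axis=0)         # axis=0 squashes the rows away -> [3, 5, 7]
row_sums = A.sum(axis=1)         # axis=1 squashes the columns away -> [3, 12]

# Dividing a (2, 3) array by a (2,) array cannot broadcast, so this raises
# unless A happens to be square.
try:
    A / row_sums
except ValueError as err:
    print("broadcast error:", err)

# keepdims=True keeps the summed axis as length 1 -> shape (2, 1), which
# broadcasts row by row: each row is divided by its own sum.
row_normalized = A / A.sum(axis=1, keepdims=True)
print(row_normalized.sum(axis=1))  # each row now sums to 1
```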
https://www.projectpro.io/recipes/determine-if-time-series-is-stationery

# How to determine if a time series is stationary?
This recipe helps you determine if a time series is stationary.
## Recipe Objective
Time series are stationary if they do not have trend or seasonal effects. Summary statistics calculated on the time series are consistent over time, like the mean or the variance of the observations. It can be observed easily through plots or summary statistics.
So this recipe is a short example on how to determine if a time series is stationary. Let's get started.
## Step 1 - Import the library
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
Let's pause and look at these imports. NumPy and pandas are general-purpose ones; matplotlib.pyplot will help us with plotting.
## Step 2 - Setup the Data
```
df = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/a10.csv', parse_dates=['date']).set_index('date')
```
Here, we have used a time series dataset from GitHub and set its date column as the index.
## Step 3 - Visualizing
```
df.plot()
plt.show()
```
We have simply plotted the dataset, with time on the x axis and values on the y axis.
## Step 4 - Calculating Summary
```
X = df.value
split = round(len(X) / 2)
X1, X2 = X[0:split], X[split:]
mean1, mean2 = X1.mean(), X2.mean()
var1, var2 = X1.var(), X2.var()
```
We have split our dataset into two halves. Next, we calculate the mean and variance of each half.
## Step 5 - Printing results
```
print('mean1=%f, mean2=%f' % (mean1, mean2))
print('variance1=%f, variance2=%f' % (var1, var2))
```
Simply print the mean and variance.
## Step 6 - Let's look at our dataset now
Once we run the above code snippet, we will see:
Scroll down the ipython file to visualize the output.
A disparity between the two halves' values can be seen, indicating that the series is non-stationary.
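The same split-and-compare idea can be reproduced on synthetic data, without downloading the CSV (my sketch; the two series below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# White noise is stationary; adding a linear trend makes it non-stationary.
stationary = rng.normal(0.0, 1.0, size=1000)
trending = stationary + np.linspace(0.0, 10.0, 1000)

def halves_disagree(x, tol=1.0):
    """Crude check: do the two halves of the series have very different means?"""
    mid = len(x) // 2
    return abs(x[:mid].mean() - x[mid:].mean()) > tol

print(halves_disagree(stationary))  # False: both halves hover around 0
print(halves_disagree(trending))    # True: the trend lifts the second half's mean
```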
https://www.math.colostate.edu/~mueller/research.html
--- an Updated Version will be added shortly ---
ELECTRICAL IMPEDANCE TOMOGRAPHY (EIT)

What is EIT?
EIT is an imaging technique in which low levels of current are applied through electrodes on the surface of the body, the resulting voltage is measured on the electrodes, and an inverse problem is solved computationally to determine the conductivity distribution in the interior. The results are then used to form an image of the interior of the body. Researchers at RPI have developed the ACT3 system to image ventilation and perfusion in human subjects.
The ACT3 system images subjects in real time. A small demonstration is illustrated below. In the photo, Prof. Jon Newell and I each have a finger in a tank of saline connected to ACT3. On the monitor are reconstructed images. Red depicts high conductivity and blue depicts low conductivity. The left-hand image on the monitor is a static image, which unfortunately didn't photograph well. The right-hand image is a difference image taken from a reference frame before we placed our fingers in the tank. Since our fingers are more resistive than the saline in the tank, they show up as blue regions on the monitor.
When we shake hands, the current being applied through the electrodes now flows between us. Since the image is two dimensional, the system assumes that there must be a conductor in the tank between our fingers. This shows up as a red region between the images of our fingers on the monitor.
Below is a typical EIT image of a cross-section of a patient's chest during systole. A reference image was taken during diastole, when the heart is filled with blood. When the heart contracts, blood empties from the heart and is pumped into to the lungs and out to the rest of the body. Since blood is very conductive, the lungs now appear as red regions, and the heart appears as a blue region, since it is now less conductive than during diastole. This process can be observed in real time.
To learn more about EIT, please visit the EIT website at RPI, read the recent survey article in SIAM Review (March 1999) by M. Cheney, D. Isaacson, and J. Newell, or follow the links to another EIT research group.
CURRENT RESEARCH
A Direct Reconstruction Algorithm for a 2-D Geometry
This project involves the numerical realization of the reconstruction algorithm outlined in A. Nachman's 1996 proof of uniqueness for the inverse conductivity problem in 2-D. This algorithm is a direct method for solving the full nonlinear inverse problem, and uses the D-bar method of inverse scattering. The first results of this work are found in [5] and [6]. More recently, we have further developed the algorithm for use with experimental data, [2]. Other work on this algorithm can be found in [1], [3], and [4]. Collaborators on this project include Samuli Siltanen (GE Healthcare, Finland), David Isaacson (RPI), Jonathan Newell (RPI), Kim Knudsen (Aalborg University, Denmark), Matti Lassas (Helsinki University of Technology, Finland), and Jutta Bikowski (CSU).
Below is an image of a phantom chest containing agar heart and lungs and a reconstruction using the D-bar method.
Planar Arrays
I am also interested in algorithms for planar arrays of electrodes. Planar electrode arrays are useful in detecting breast tumors as well as monitoring ventilation and perfusion. To form real-time images, fast algorithms are needed to solve the inverse conductivity problem. A 3-D linearization-based algorithm has been developed and tested on experimental data. Agar targets designed to simulate breast tumors were suspended in a saline-filled tank with conductivity simulating that of a human breast. A picture of the tank is found below. The algorithm was also used to detect conductivity changes due to ventilation and perfusion in a human subject. More recently, Jutta Bikowski (M.S. 2004, CSU) extended this work using the Fast Multipole Method and the shunt model to model the electrodes.
https://minuteshours.com/3687-minutes-in-hours-and-minutes

# 3687 minutes in hours and minutes
## Result
3687 minutes equals 61 hours and 27 minutes
You can also convert 3687 minutes to hours.
## Converter
Three thousand six hundred eighty-seven minutes is equal to sixty-one hours and twenty-seven minutes.
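The conversion is a single division with remainder; a quick check (not from the original page):

```python
# 3687 minutes -> whole hours plus leftover minutes (60 minutes per hour).
hours, minutes = divmod(3687, 60)
print(f"{hours} hours and {minutes} minutes")  # 61 hours and 27 minutes
```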
https://corporatefinanceinstitute.com/resources/knowledge/finance/cash-ratio-formula/

Cash Ratio
A liquidity ratio that measures a company’s ability to pay off short-term liabilities with highly liquid assets
What is Cash Ratio?
The cash ratio, sometimes referred to as the cash asset ratio, is a liquidity metric that indicates a company’s capacity to pay off short-term debt obligations with its cash and cash equivalents. Compared to other liquidity ratios such as the current ratio and quick ratio, the cash ratio is a stricter, more conservative measure because only cash and cash equivalents – a company’s most liquid assets – are used in the calculation.
Formula for Cash Ratio
The formula for calculating the ratio is as follows:

Cash Ratio = (Cash + Cash Equivalents) / Current Liabilities
Where:
• Cash includes legal tender (coins and currency) and demand deposits (checks, checking account, bank drafts, etc.).
• Cash equivalents are assets that can be converted into cash quickly. Cash equivalents are readily convertible and subject to insignificant risk. Examples include savings accounts, T-bills, and money market instruments.
• Current liabilities are obligations due within one year. Examples include short-term debt, accounts payable, and accrued liabilities.
Example of Cash Ratio
Company A’s balance sheet lists the following items:
• Cash: $10,000
• Cash equivalents: $20,000
• Accounts receivable: $5,000
• Inventory: $30,000
• Property & equipment: $50,000
• Accounts payable: $12,000
• Short-term debt: $10,000
• Long-term debt: $20,000
The cash ratio for Company A would be calculated as follows:

Cash Ratio = ($10,000 + $20,000) / ($12,000 + $10,000) = $30,000 / $22,000 = 1.36, or 136%
The figure above indicates that Company A possesses enough cash and cash equivalents to pay off 136% of its current liabilities. Company A is highly liquid and can easily fund its debt.
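Company A's figure can be reproduced in a few lines (a sketch using the article's numbers; the function name is mine):

```python
def cash_ratio(cash, cash_equivalents, current_liabilities):
    """Cash ratio = (cash + cash equivalents) / current liabilities."""
    return (cash + cash_equivalents) / current_liabilities

# Current liabilities here are accounts payable + short-term debt;
# long-term debt, receivables, inventory, and equipment are excluded.
ratio = cash_ratio(10_000, 20_000, 12_000 + 10_000)
print(round(ratio, 2))  # 1.36, i.e. 136% of current liabilities covered
```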
Interpretation of the Cash Ratio
The cash ratio indicates to creditors, analysts, and investors the percentage of a company’s current liabilities that cash and cash equivalents will cover. A ratio above 1 means that the company will be able to pay off its current liabilities with cash and cash equivalents.
Creditors prefer a high cash ratio as it indicates that the company can easily pay off its debt. Although there is no ideal figure, a ratio of not lower than 0.5 to 1 is usually preferred. The cash ratio figure provides the most conservative insight into a company’s liquidity since only cash and cash equivalents are taken into consideration.
It is important to realize that the ratio does not necessarily provide a good financial analysis of a company, because businesses do not ordinarily keep cash and cash equivalents at the same amount as current liabilities. In fact, they are usually making poor use of their assets if they hold large amounts of cash on their balance sheet. When cash sits on the balance sheet, it is not generating a return. Therefore, excess cash is often re-invested for shareholders to realize higher returns.
Key Takeaways
• The cash ratio is a liquidity ratio that measures a company’s ability to pay off short-term liabilities with highly liquid assets.
• Compared to the current ratio and the quick ratio, this is the most conservative measure of a company’s liquidity position.
• There is no ideal figure, but a ratio of at least 0.5 to 1 is usually preferred.
• The cash ratio may not provide a good analysis of a company as it is unrealistic for companies to hold large amounts of cash.
http://planetetutors.com/cbse-detail.asp?id=26&ch_id=14&icat=184

## Class XI Physics Important Questions
12/18/2011 CBSE
14. Derive a relation between the rotational kinetic energy of a rigid body and its moment of inertia.

15. Calculate the work done in splitting a drop of water into droplets. Given surface tension of water = 0.072 N/m.

17. State Carnot's theorem. Calculate the efficiency of a Carnot engine working between temperatures 0°C and 100°C.

18. Show that the vertical oscillations of a loaded spring are simple harmonic.

19. Derive the equations of motion
1. v = u + at
2. s = ut + ½at²
by graphical method.

20. Derive an expression for the horizontal range of a projectile and calculate the angle of projection for obtaining maximum horizontal range.

21. Calculate the acceleration of the body and the tension in the string in the following diagram. Given g = 10 m/s². Assume that there is no friction.

22. Derive an expression for F in the following diagram to give an acceleration a to the body of mass m, if the coefficient of friction between the body and the inclined plane is µ and the angle made by the inclined plane with the horizontal is θ.

23. Derive an expression for the velocities after a perfectly elastic collision between two bodies of masses m₁ and m₂ moving with velocities u₁ and u₂ in the same direction.

24. A dumb-bell formed by two masses of 2 kg each, separated by a massless rod 20 cm long, is rotating with angular velocity 10 rad/s about an axis passing through the midpoint of the rod and perpendicular to its length. What will be the new angular velocity if each mass is suddenly displaced by 5 cm towards the centre?

26. The moment of inertia of a ring of mass 1 kg is 10⁻⁴ kg m² about an axis passing through its centre and perpendicular to its plane. Calculate its moment of inertia
a. along its diameter.
b. along its tangent.

25. Describe the variation of acceleration due to gravity (g) with depth and draw a graph showing the variation of g as one moves outwards from the centre of the earth (g vs r graph).

Assuming that the viscous force F on a spherical body moving through a liquid depends on
1. the coefficient of viscosity (η) of the liquid,
2. the radius (r) of the spherical body,
3. the speed (v) of the body,
derive an expression for F (Stokes' law) by the method of dimensions.

27. Describe the principle of Carnot's refrigerator with a diagram and derive an expression for its coefficient of performance.

28. What is the Doppler effect? Derive expressions for the apparent frequency sensed by a stationary listener when the source is (a) approaching and (b) receding.

OR

a. What are damped oscillations? Draw a graph showing the variation of displacement with time for damped harmonic oscillations.
b. Derive expressions for displacement, velocity and acceleration in SHM using the circle of reference concept.

29. a. State any four postulates of the kinetic theory of gases.
b. Derive an expression for the pressure exerted by an ideal gas in a container using the kinetic theory of gases.

OR

a. Derive Boyle's law and Charles' law using the kinetic theory of gases.
b. Explain the kinetic interpretation of temperature and hence define absolute zero.
c. Calculate the average kinetic energy of a molecule of an ideal gas at 400 K.
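For the two numeric parts above (the Carnot efficiency in question 17 and the average kinetic energy in 29c), a quick check (my sketch; temperatures converted to kelvin, k_B the Boltzmann constant):

```python
# Question 17: Carnot efficiency between 0 °C (273 K) and 100 °C (373 K).
t_cold, t_hot = 273.0, 373.0
efficiency = 1 - t_cold / t_hot
print(f"{efficiency:.1%}")        # roughly 26.8%

# Question 29c: average translational KE per molecule is (3/2) * k_B * T.
k_B = 1.380649e-23                # J/K
T = 400.0                         # K
avg_ke = 1.5 * k_B * T
print(f"{avg_ke:.2e} J")          # roughly 8.28e-21 J
```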
http://stackoverflow.com/questions/4953258/how-to-calculates-the-average-of-elements-in-an-unsigned-char-array/4953324

# How to calculate the average of elements in an unsigned char array?
I've got a quick and, I am assuming, easy question, but I have not been able to find anything online.
How do I calculate the average of elements in an unsigned char array? Or, more to the point, how do I perform operations on an unsigned char?
Do you know how to take the average of a data set? Do you understand the concept of arrays? Do you want to understand the syntax for working with arrays? Do you want to do it in C or C++? (C and C++ are different languages. You will get different answers for each. The `[c]` and `[c++]` tags are not equivalent!) – In silico Feb 10 '11 at 4:10
Arithmetic operations work just fine on `unsigned char`, although you may occasionally be surprised by the fact that arithmetic in C always promotes to `int`.
In C++'s Standard Template Library,
```
#include <numeric>

template<class InputIterator, class T>
T accumulate(InputIterator first, InputIterator last, T init);
```
To calculate the sum of `unsigned char arr[]`, you may use `accumulate(arr, arr + sizeof(arr) / sizeof(arr[0]), 0)`. (0 is an `int` here. You may find it more appropriate to use a different type.)
Without STL, this is trivially computed with a loop.
The average is the sum divided by the length (`sizeof(arr) / sizeof(arr[0])`).
How can I hold an unsigned char value? Let's say sum += unSignedCharArray[i][j]. what is the type of sum? – Everton Feb 10 '11 at 5:10
@Everton: See my complete solution. the sum is stored in an `int` type variable! – Nawaz Feb 10 '11 at 5:21
Can you show me how to implement without STL? I am a beginner and it would help me a lot! Thank you! – Everton Feb 10 '11 at 7:49
C++03 and C++0x:
```
#include <numeric>

int count = sizeof(arr)/sizeof(arr[0]);
int sum = std::accumulate<unsigned char*, int>(arr, arr + count, 0);
double average = (double)sum/count;
```
Online Demo : http://www.ideone.com/2YXaT
C++0x Only (using lambda)
```
#include <algorithm>

int sum = 0;
std::for_each(arr, arr + count, [&](int n){ sum += n; });
double average = (double)sum/count;
```
Online Demo : http://www.ideone.com/IGfht
No need to do all the math as floating point. You can wait to convert until you finish the sum. If your integers are 32-bit, you'd need a 24MB array to have any chance of overflow, so I think it's perfectly reasonable to keep the sum in an `unsigned int`. – R.. Feb 10 '11 at 4:20
If nothing else, you need to make it clear that you mean `double`, not `float`, since `float` will have worse overflow issues (after a large number of elements, around 32k, are added, adding further elements will not change the sum!) which are harder to detect and harder to work around. IMO, using floating point any time you don't really want its semantics is a huge programming error that's sure to bite you unless you're an expert in numerical analysis. – R.. Feb 10 '11 at 10:36
https://www.jiskha.com/search/index.cgi?query=4%29+a+student+score+is+83+and+91+on+her+first+two+quizzes.+write+and+solve+a+compound+inequality+to+find+possible+values+for+a+thord+quiz+score+that+would+give+anverage+between+85+and+90.+a.+85%E2%89%A483%2B91%2Bn%2F3+%E2%89%A490%3B+81%E2%89%A4n%E2%89%A496

# 4) A student's scores are 83 and 91 on her first two quizzes. Write and solve a compound inequality to find possible values for a third quiz score that would give an average between 85 and 90. a. 85 ≤ (83+91+n)/3 ≤ 90; 81 ≤ n ≤ 96
229,134 results
4) A student's scores are 83 and 91 on her first two quizzes. Write and solve a compound inequality to find possible values for a third quiz score that would give an average between 85 and 90. a. 85 ≤ (83+91+n)/3 ≤ 90; 81 ≤ n ≤ 96 b. 85 ≤ 83 + 91/2 + n ≤ 90; -2 ...
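Choice (a) checks out once the average is read as (83 + 91 + n)/3; a quick verification (mine, not part of the original posts):

```python
# 85 <= (83 + 91 + n)/3 <= 90  ->  multiply through by 3, subtract 174.
lo = 85 * 3 - (83 + 91)
hi = 90 * 3 - (83 + 91)
print(lo, "<= n <=", hi)  # 81 <= n <= 96, matching choice (a)
```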
### Algebra
A student scored 81 and 95 on her first two quizzes. Write and solve a compound inequality to find the possible values for a third quiz score that would give her an average between 85 and 90.
### Math
Jackie scored 82 and 88 on her first two quizzes. Write and solve a compound inequality to find the possible values for a third quiz score that would give her an average between 80 and 85, inclusive. a. 80 ≤ (82+88+n)/3 ≤ 85; 70 ≤ n ≤ 85 b. 70 ≤ (82+88+n)/3 ≤ 85; 80...
### Algebra 1
A student scored 78 and 92 on his first two quizzes. Use a compound inequality to find the possible values for a third quiz score that would give him an average between 80 and 85, inclusive. (1 point) 78<=x<=92 75<=x<=85*** 70<=x<=85 80<=x<=85
### Algebra
A student scores 83 and 91 on her first 2 tests. Write and solve a compound inequality to find the possible values for a 3rd test score that would give her an average between 85 and 90 inclusive. I know she has to score between 81 & 96 but I'm not sure how to show it the ...
### Math
You have a mean score of 86 on 9 quizzes. The teacher decides to drop your lowest quiz score to determine your final quiz mean. After dropping the lowest score, your mean score on the 8 remaining quizzes is 88. What score was dropped?
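Since mean × count gives a total, the dropped score is just the difference of the two totals (my check, not from the original thread):

```python
total_all = 9 * 86      # sum of all nine quiz scores
total_kept = 8 * 88     # sum of the eight remaining scores
dropped = total_all - total_kept
print(dropped)          # 70
```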
### math
7. The tennis team is selling key chains as a fundraiser. If its goal is to raise at least $180, how many key chains must it sell at $2.25 each to meet that goal? Write and solve an inequality. a. 2.25k ≥ 180; k ≥ 100 b. 180k...
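Solving 2.25k ≥ 180 for the smallest whole number of key chains (my check; here 180/2.25 happens to divide evenly):

```python
import math

# k >= 180 / 2.25; round up because only whole key chains can be sold.
k_min = math.ceil(180 / 2.25)
print(k_min)  # 80
```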
### Statistic
There are two independent multiple-choice quizzes where quiz 1 has eight questions and quiz 2 has 15 questions. Each question in the first quiz has four choices and each question in the second quiz has five choices. Suppose a student answers the questions in the quizzes by ...
### math
STATISTICS: The mean score for Samantha's first 6 algebra quizzes was 88. If she scored a 95 on her next quiz, what will her mean score be for all 7 quizzes?
### Algebra
Tom bowled 135 and 145 in his first two games. Write and solve a compound inequality to find the possible values for a third game that would give him an average between 120 and 130, inclusive.
### Math
Suppose there is a quiz in your mathematics class every week. The value of each quiz is 50 points. After the first 6 weeks, your average mark on these quizzes is 36. a) What average mark must you receive on the next 4 quizzes so that your average is 40 on the first 10 quizzes...
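Part (a) is quickest with totals rather than averages (a sketch of mine, not from the original post):

```python
points_so_far = 6 * 36     # total after the first 6 quizzes
points_needed = 10 * 40    # total required for a 40 average over 10 quizzes
avg_next_4 = (points_needed - points_so_far) / 4
print(avg_next_4)          # 46.0
```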
### Math
7. A student has received scores of 88, 82, and 84 on three quizzes. If tests count twice as much as quizzes, what is the lowest score the student can get on the next test to achieve an average score of at least 70? A. 13 B. 48 C. 70 D. 96 The correct answer is B, but I would ...
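With the test counted twice, the weighted average is (88 + 82 + 84 + 2t)/5, and solving for the smallest passing t confirms answer B (my check):

```python
quiz_total = 88 + 82 + 84
# (quiz_total + 2*t) / 5 >= 70  ->  t >= (350 - quiz_total) / 2
t_min = (70 * 5 - quiz_total) / 2
print(t_min)  # 48.0
```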
### Math
An instructor gives regular 20-point quizzes and 100-point exams in a mathematics course. Average scores for six students, given as ordered pairs (x,y) where x is the average quiz score and y is the average test score, are (18, 87), (10, 55), (19, 96), (16, 79), (13, 76), and...
### Statistic
A quiz consists of 10 multiple-choice questions. Each question has 5 possible answers and only one of them is correct. A student has not studied for the quiz and will pick his answers randomly. 1)Define the random variable x that will be used for problem, and give the ...
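Under random guessing the count of correct answers is binomial with n = 10 and p = 1/5; a sketch for part 1 (the rest of the question is cut off):

```python
from math import comb

n, p = 10, 1 / 5                       # 10 questions, 5 choices each
expected_correct = n * p               # E[X] = n * p
print(expected_correct)                # 2.0

# P(X = k) = C(n, k) * p**k * (1 - p)**(n - k); e.g. exactly 2 correct:
p_two = comb(n, 2) * p**2 * (1 - p)**8
print(round(p_two, 4))                 # 0.302
```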
### Joseph Ayo Babalola University
Score 1, Score 2 and Score 3 are the marks scored by a student in three tests. Write a pseudo code, an algorithm and draw a flowchart to find the average of the best two scores.
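The algorithm reduces to one observation: the best two of three scores are the total minus the minimum. A Python sketch of the pseudocode being asked for:

```python
def average_of_best_two(score1, score2, score3):
    scores = [score1, score2, score3]
    # Dropping the single lowest score leaves exactly the best two.
    return (sum(scores) - min(scores)) / 2

print(average_of_best_two(70, 90, 80))  # 85.0
```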
### Math
Karl's scores on the first five science tests are shown in the table. Test 1: 85, Test 2: 84, Test 3: 90, Test 4: 95, Test 5: 88. Part A. Write an inequality that represents how to find the score he must receive on the sixth test to have an average score of more than 88. Part B. Solve ...
### math
Tim wants his mean (average) quiz score in history class to be 90. His first 3 quiz scores were 86, 92, and 94. What score should he make on the 4th quiz in order to have a mean (average) quiz score of exactly 90? I really need the answer ! PLZ PLZ PLZ ! cant figure it outtt...
### Computer science
Score 1, Score 2 and Score 3 are the marks scored by a student in three tests. Write a pseudocode, an algorithm and draw the flowchart to find the average of the best two scores.
### algebra
Use the five steps for problem solving to answer the following question. Please show all of your work. The average of two quiz scores is 81. If one quiz score is six more than the other quiz score, what are the two quiz scores?
### algebra
please translate “A student scored five points more on the second quiz than he did the first. He scored eleven points lower on the third quiz than he did the first. If the average of the three quiz scores is 70, what was his score on the first quiz?”
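Translated with x as the first-quiz score, the sentence becomes x + (x + 5) + (x - 11) = 3 × 70 (my check, not part of the original post):

```python
# 3x - 6 = 210  ->  x = 216 / 3 = 72
x = (3 * 70 + 6) / 3
print(x)                              # 72.0
print((x + (x + 5) + (x - 11)) / 3)   # 70.0, the required average
```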
### i dont know how to solve this
The chart below shows the highest quiz score Cynthia received in four of her classes. English: 13 out of 14 correct; Science: 93%; Math: 25/27; History: 0.93. In which class did she have the highest score?
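Comparing the four classes means converting everything to the same scale (my check; the fractions land just under 93%):

```python
scores = {
    "English": 13 / 14,   # ~0.9286
    "Science": 0.93,
    "Math": 25 / 27,      # ~0.9259
    "History": 0.93,
}
best = max(scores.values())
print({name: round(s, 4) for name, s in scores.items()})
print([name for name, s in scores.items() if s == best])  # Science and History tie at 0.93
```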
### programming
would someone be able to give me an idea of how to start coding this in C? You just finished playing cards with a group of friends and each time your score changed in the game you wrote it down. Given the total number of points it takes to win the game and all of your scores (...
### Math
Vanessa has taken 4 quizzes so far and her average score so far is an 88. If she gets a 100, a perfect score on the remaining two quizzes, what will her new average be?
### math help pls
Jess played two days of golf. On the second day, he got a score of -6. His total score for the two days was 0. Define a variable, then write and solve an equation to find the score Jess got on the first day. Is the answer 6?
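Yes. With d as the first-day score, d + (-6) = 0, so (a one-line check of mine):

```python
d = 0 - (-6)   # the first-day score that makes d + (-6) equal 0
print(d)       # 6
```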
### Math
Jesse played two days of golf. On the second day, he got a score of 6 below par, or -6. His total score for the two days was 0 above par, or 0. Define a variable, then write and solve an equation to find the score Jesse got on the first day. Show your work, thanks.
### stats
Use normal approximation(Z-score for p-hat) to find the probability that student scores 80% or lower on a 100 question quiz. A)If the test contains 250 questions, what is the probability that student will score 80% or lower
### MATH
(4 pts) The score on an exam from a certain MAT 112 class, X, is normally distributed with μ = 77.6 and σ = 10.9. NOTE: Assume for the sake of this problem that the score is a continuous variable. A score can thus take on any value on the continuum. (In real life, ...
### Math
On a test whose distribution is approximately normal with a mean of 50 and a standard deviation of 10, the results for three students were reported as follows: Student Opie has a T-score of 60. Student Paul has a z-score of -1.00. Student Quincy has a z-score of +2.00. Obtain ...
### math
a six-faced die was thrown 28 times. The table shows the number of times each possible score occurred.

| Score | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Frequency | 8 | 6 | 6 | 2 | 4 | 2 |

(a) After the 27th throw the median score was 2. What was the least possible score on the 28th throw? (b) The die was then thrown twice ...
### algebra 2
The two inequalities -26 < 4k - 2 and 2k - 1 < 6. A. Write a compound inequality to combine the inequalities shown previously. B. Solve the compound inequality for values of k. Show your work. Write your final answer as one inequality. Please help
### Math
First period: 85, 83, 74, 70, 88,95,89,72,90,83,77,91,98,89,82,84 Second period: 95,89,82,81,72,69,100,97,75,91,82,79,96,81,80,95,89,97,83,71 Mark is a student in the first period class. His score on the exam was 95%. Zoe is a student in the second period class, she also ...
### Stats
Suppose that your score on an exam has a positive z-score in comparison to your classmates. A. Is it possible that your score is below the mean score?
### Math
Uh...Yea... Sorry about this, but I"m trying to study and I need someone to walk me through, missed a lot of school. fyi, when there's a / its under the entire string before it until you hit the inequality symbol. Tony Bowled 135 and 145 in his first two games. Write and solve...
### Help - Math
A teacher gave his class two quizzes; 80% of the class passed the first quiz, but only 60% of the class passed both quizzes. What percent of those who passed the first quiz also passed the second quiz?
### math
Bruce scored a 12, 18, 16, and 17 on his four quizzes. What must his score on the fifth quiz be if he wants an overall average of 16.
### math
Students in an English class need a mean of at least 90 points on four tests to earn an A. One student has scored 87, 92, 85. Write and solve an inequality to find what score the student needs on the next test to earn an A.
### statistics
SAT scores are normally distributed. The SAT in English has a mean score of 500 and a standard deviation of 100. a. Find the probability that a randomly selected student's score on the English part of the SAT is between 400 and 675. b. What is the minimum SAT score that a ...
### Statistics
In the quiz in Exercises 1 and 2, the grading scheme is as follows: each right answer is awarded 5 points, and 1 point is taken off for each wrong answer. Recall that the quiz consists of 10 questions, and that in the class, the average number of right answers is 6.2 and the ...
### statistics
Calculate the z-score for each set of data. Determine who did better on her respective test, Tonya or Lisa.

| English student | Test grade | Math student | Test grade |
|---|---|---|---|
| John | 82 | Jim | 81 |
| Julie | 88 | Jordan | 85 |
| Samuel | 90 | Saye | 79 |
| Tonya | 86 | Lisa | 82 |
| Mean | 86.50 | Mean | 81.75 |
| St. Dev. | 3.42 | St. Dev. | 2.... |
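Recomputing from the listed grades (a sketch; the sample standard deviation reproduces the 3.42 shown for the English column):

```python
# z = (score - mean) / standard deviation, computed per class.
from statistics import mean, stdev

english = [82, 88, 90, 86]   # John, Julie, Samuel, Tonya
maths   = [81, 85, 79, 82]   # Jim, Jordan, Saye, Lisa
z_tonya = (86 - mean(english)) / stdev(english)
z_lisa  = (82 - mean(maths)) / stdev(maths)
print(round(z_tonya, 2), round(z_lisa, 2))  # -> -0.15 0.1
```

So Lisa's score sits slightly above her class mean while Tonya's sits slightly below hers.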
### social science statistics
a professor was curious as to whether the students in a very large class she was teaching who turned in their test first scored differently from the overall mean on the test. The overall mean score on the test was 75 with a standard deviation of 10; the scores were approximately normally...
### Math
In your class, you have scores of 73, 85, 76, and 92 on the first four of five tests. To get a grade of C, the average of the first five test scores must be greater than or equal to 70 and less than 80. a) Solve an inequality to find the least score you can get on ...
### Java programming
One-Dimensional Array. Consider this example problem: You are given 7 quiz scores of a student. You are to compute the final quiz score, which is the sum of all scores after subtracting the lowest one. For example, if the scores are 8, 7, 8.5, 9.5, 7, 5, 10 then the final ...
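The rule in question (total minus the lowest score) in a Python sketch; the exercise itself asks for Java, so treat this only as the logic:

```python
# Final quiz score: sum of all scores after subtracting the lowest one.
scores = [8, 7, 8.5, 9.5, 7, 5, 10]
final_score = sum(scores) - min(scores)
print(final_score)  # -> 50.0
```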
### Math
A student scored 65, 80, 74 on the first three tests during the term. What does the student need to score on the fourth test to ensure and an average score that is above 75?
### Math
To get a B in physics, a student must score an average of 80 on four tests. First three scores were 68, 75 & 84. What is the lowest score the student can make on the last test and still get a B?
### math
A professor grades students on three tests, four quizzes, and a final examination. Each test counts as two quizzes and the final examination counts as two tests. Sara has test scores of 60, 80, and 89. Sara's quiz scores are 85, 94, 83, and 84. Her final examination score is ...
### Algebra
The total of Dharma's, Eugene's, and Fern's final exam scores is 272. If Dharma's score is added to Eugene's score, the sum is 28 points less than twice Fern's score. If Eugene's score is eight points higher than Dharma's score, what is each student's final exam score?
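Written as equations the puzzle is linear, and back-substitution gives whole-number scores (a sketch of the solution path, not from the original post):

```python
# D + E + F = 272;  D + E = 2F - 28;  E = D + 8.
F = (272 + 28) / 3        # (2F - 28) + F = 272  ->  3F = 300
D = (272 - F - 8) / 2     # D + (D + 8) = 272 - F
E = D + 8
print(D, E, F)  # -> 82.0 90.0 100.0
```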
### algebra
Briana's second test score was 8 points higher than her first score. Her third score was 88. She had a B average (between 80 and 89 inclusive) for three tests. What can you conclude about her first test score?
### Statistics
Suppose that two different tests A and B are to be given to a student chosen at random from a certain population. Suppose also that the mean score on test A is 85, and the standard deviation is 10; the mean score on test B is 90, and the standard deviation is 16; the scores on...
### Algebra 1
What is the equation for "The sum of two numbers is 75. The first is 9 more than 5 times the second. Find the first number?" What is the equation for "Jack's bowling score is 20 less than 3 times Jill's score. The sum of their scores is 220. Find the score of each?"
### algebra
Each week, Mandy's algebra teacher gives a 10-point math quiz. After 5 weeks, Mandy has earned a total of 36 points, for an average of 7.2 points per quiz. She would like to raise her average to 9 points. On how many quizzes must she score 10 points in order to reach her goal?
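The goal reduces to the inequality (36 + 10n) / (5 + n) >= 9; a quick brute-force check (sketch):

```python
# Count how many straight 10-point quizzes lift the average to 9.
n = 0
while (36 + 10 * n) / (5 + n) < 9:
    n += 1
print(n)  # -> 9
```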
### Csb
A student obtained the following quiz scores: 90, 85, 93, and 78. What should be the minimum score he has to obtain in the fifth quiz to have an average of at least 80?
### Math
9 students in a group found that their mean score was 86 for the first math test. On the next test, each student in the group scored 7 points higher than on the first test. What was their mean score for the two tests? Any help would be appreciated.
### Statistic in Social Science
The verbal part of the Graduate Record Exam (GRE) has a mean of 500 and a standard deviation of 100. Use the normal distribution to answer the following questions: a. If you wanted to select only student at or above the 90th percentile, what verbal GRE score would you use as a...
### Research and Statistics
I want to ensure I've done this problem correctly. Thanks! 3. In one elementary school, 200 students are tested on the subjects of Math and English. The table below shows the mean and standard deviation for each subject.

| Subject | Mean | SD |
|---|---|---|
| Math | 67 | 9.58 |
| English | 78 | 12.45 |

One student’s ...
### math
On the last math quiz, there were four A's, two B's, five C's, and one D. If the math quizzes were returned to the students in random order, what is the probability that the first quiz returned to a student was an A or a B?
### CIS 115
Design a solution that requests and receives student names and an exam score for each. The program should continue to accept names and scores until the user inputs a student whose name is “alldone”. After the inputs are complete, determine which student has the highest ...
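A sketch of the sentinel-controlled design being asked for (Python stand-in; the names and sample data are mine):

```python
# Accept (name, score) pairs until the sentinel name "alldone" appears,
# then report the highest-scoring student.
def highest_student(entries):
    best = None
    for name, score in entries:
        if name == "alldone":
            break
        if best is None or score > best[1]:
            best = (name, score)
    return best

print(highest_student([("Ann", 88), ("Bob", 94), ("Cy", 90), ("alldone", 0)]))
# -> ('Bob', 94)
```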
### Information Technology
Develop an algorithm, flow chart and pseudocode that accept as input three unit test scores and a project score for seven students. The algorithm, flow chart and pseudocode should accept seven examination scores. The students' overall score is comprised of their weighted mid-...
### Math
Jesse played two days of golf. On the second day, he got a score of 6 below par, or -6. His total for the two days was 0 above par, or 0. Define a variable. Then write and solve an equation to find the score Jesse got on the first day. Show your work.
### statistics
A student scored 84 points on a test where the mean score was 79 and the standard deviation was 4. Find the student's z score, rounded to 2 decimal places
### Measures of Central Tendency
The table below shows the scores of a group of students on a 10-point quiz.

| Test score | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|
| Frequency | 0 | 2 | 2 | 1 | 2 | 3 | 0 | 4 |

The mean score on this test is: The median score on this test is
### Measures of Central Tendency
The table below shows the scores of a group of students on a 10-point quiz.

| Test score | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|
| Frequency | 1 | 3 | 1 | 1 | 0 | 3 | 4 | 3 |

The mean score on this test is: The median score on this test is:
### I need help, I'm stuck... what is the correct answer
The table below shows the scores of a group of students on a 10-point quiz.

| Test score | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|
| Frequency | 0 | 2 | 2 | 1 | 2 | 3 | 0 | 4 |

The mean score on this test is: 6.5 The median score on this test is: 7.5
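Recomputing straight from that frequency table (a sketch): the 7.5 median checks out, but the mean comes to roughly 7.29 rather than 6.5.

```python
# Expand the frequency table into a score list, then take mean and median.
table = {3: 0, 4: 2, 5: 2, 6: 1, 7: 2, 8: 3, 9: 0, 10: 4}
scores = sorted(s for s, f in table.items() for _ in range(f))
mean = sum(scores) / len(scores)
mid = len(scores) // 2
median = (scores[mid - 1] + scores[mid]) / 2   # even count of students
print(round(mean, 2), median)  # -> 7.29 7.5
```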
### educational statistics
the average SAT score at a local university is μ = 60, and the standard deviation is σx = 50. Assume that the university SAT score has a normal distribution. Convert a student score x = 700 into a z score
### math
a final examination score in General maths is weighted four times as much as each test score. If the student has a final exam score of 87% and test grades of 70% and 92%, then the mean score is
### Maths
Write a Fortran 77 program that can display the score and the average score of 50 students
### statistics- normal standard distribution
A set of data is normally distributed with a mean of 500 and standard deviation of 100: what would the standard score for a score of 700 be? According to my calculations (700-500)/100 = 200/100 = 2. How would I interpret that? Please show work. I know that I went wrong somewhere, ...
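For what it's worth, the arithmetic shown is already right: z = (700 - 500)/100 = 2 simply says the score sits two standard deviations above the mean, i.e. near the 97.7th percentile of a normal distribution (quick check, not from the original post):

```python
from statistics import NormalDist

z = (700 - 500) / 100
percentile = NormalDist().cdf(z)        # share of scores at or below z
print(z, round(percentile, 4))  # -> 2.0 0.9772
```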
### Math
To get an A in history, a student must score an average of 90 on four papers. Scores on the first three papers were 92, 83, and 88. What is the lowest score that a student can make on the last paper and still get an A?
### Math
To get an A in psychology, a student must score an average of 90 on four tests. Scores on the first three tests were 96,92,88. What is the lowest score the student can make on the last test and still get an A?
### math
To get a C in physics, a student must score an average of 70 on four tests. Scores on the first three tests were 63, 85, and 82. What is the lowest score the student can make on the last test and still get a C?
### math10
A class of 30 students took two quizzes. Sixteen passed the first quiz and 20 passed the second quiz. If four students failed both quizzes, how many passed both?
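Inclusion-exclusion settles this one (sketch): 4 of the 30 failed both, so 26 passed at least one quiz.

```python
passed_at_least_one = 30 - 4
passed_both = 16 + 20 - passed_at_least_one   # |A| + |B| - |A or B|
print(passed_both)  # -> 10
```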
### Computer science
write a program that accepts the name of a student and the scores obtained in 5 courses, and computes the average score of the student. Your program must display the name of that student on tab 5
### algebra
erika has received scores of 82, 87, 93, 95, and 90 on math quizzes. What score must Erika get on her next quiz to have an average of 90?
### Statistic
A distribution has a standard deviation of σ = 9.5. Find the z-score for a score above the mean by 4 points. Note: round to 2 decimal places, and enter the value of the z-score but do not write "z = " in your answer
### Math
A group of 100 students took a quiz. Their average score was 76 points. If the average score for boys was 80 points and the average score for girls was 70 points, how many girls participated in the quiz?
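With g girls and 100 - g boys, the totals give 80(100 - g) + 70g = 76 * 100, so (a sketch):

```python
g = (80 * 100 - 76 * 100) // 10   # 8000 - 10g = 7600
print(g)  # -> 40
```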
### MATH
To get an A in history, a student must score an average of 90 on four papers. Scores on the first three papers were 92, 83, and 88. What is the lowest score that a student can make on the last paper and still get an A? CAN YOU PLEASE SHOW IN DETAIL THANKS
### Math
John has scores of 88, 73, 90, 85, 93 on five math quizzes. What score must John earn on the next math quiz to have a mean quiz score of exactly 88?
### stats
2. A standardized exam was provided to all 3rd graders in Arizona schools. The average score was 75 with a standard deviation of 10. Assuming that the scores were normally distributed, answer the following questions. (a) What z-score corresponds with a score of 75? (b) What z-...
### math
Mary's quiz scores were 92, 85, 78, 92, 71, 77 and 80. She told her mom she had an average score of 92 for her quiz scores. Which term best describes her average score? Mean, median, mode or range? Please answer 😊
### Math
Write a mathematical expression to represent this situation. Then find the value of the expression to solve the problem. You play a game where you score -6 points on the first turn and 1 point on each of the next 3 turns. What is your score after those 4 turns?
### algebra
Jane is taking a class and asks you to help her figure out what her last grade must be in the class to get an 85 in the class. She tells you the following calculation is used to find your final grade: (75 + 85 + 90 + X) / 4, where X is her last score, which has not been taken. Explain this equation and ...
### math
Jane is taking a class and asks you to help her figure out what her last grade must be in the class to get an 85 in the class. She tells you the following calculation is used to find your final grade: (75 + 85 + 90 + X) / 4, where X is her last score, which has not been taken. ...
### writing inequalities
You must have an average score of at least 80 to get a B on your report card. You have scores of 61, 70, 99, and 70. What is the minimum score you must get on the last test to get a B on your report card? The minimum score you can make and receive an 80 is 100 The minimum score...
### English
Let's play the word game with slides. This is Team 1 and this is Team two. You should pick an item and I will click on the picture. Then you have to solve the problem. According to the score with the problem, you will get the point. I will give you a small sheet of score paper...
### Math
The class average on a math quiz was 74 and the standard deviation was 6.8. Find the z-score for a test score of 82.
### Statistics
I am having trouble with this problem. A participant in a cognitive psychology study is given 50 words to remember and later asked to recall as many as he can of them. This participant recalls 17. What is the (a) variable, (b) possible values, and (c) score? Is this correct? ...
### Statistics
A sample of n=20 has a mean of M = 40. If the standard deviation is s=5, would a score of X= 55 be considered an extreme value? Why or why not? I understand that the score in question is 3 standard deviations above the mean. But I thought that I would need to convert this to a...
### Statistics
A linear regression line y = 0.8x-10 is computed to predict the final exam score y on the basis of the first score x on the first test. Maria scores 78 on her first test. What would be the predicted value of her score on the final exam?
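Plugging Maria's first-test score into the given line (a quick check, not from the original question):

```python
x = 78
y = 0.8 * x - 10
print(round(y, 1))  # -> 52.4
```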
### algebra
You earned the following scores on five science tests: 75, 82, 90, 84, and 71. You want to have an average score of at least 80 after you take the sixth test. a. Write and solve an inequality to find the possible scores that you can earn on your sixth test in order to meet ...
### Algebra
Ragini's score in mathematics is 23 more than two-thirds of her score in English. If she scores X marks in English, what is her score in mathematics?
### Statistics: IQ Score (X) and Exam Score (Y)
Assuming that the regression equation for the relationship between IQ score and psychology exam score is Y' = 9 + 0.274X, what would you expect the psychology exam scores to be for the following individuals given their IQ exam scores? Individual Tim Tom Tina Tory IQ Score (X) ...
### Discrete Math
A quiz consists of 3 multiple choice questions, each of which have 10 answer choices. You are allowed three attempts at the quiz. What is the probability that someone would end up with a perfect score for the quiz simply by guessing the answers?
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=77&t=41316
Josephine Lu 4L
Posts: 62
Joined: Fri Sep 28, 2018 12:18 am
### Hess's Law depends on enthalpy as a state function
Why does Hess's law depend on the fact that enthalpy is a state property?
Samantha Kwock 1D
Posts: 61
Joined: Fri Sep 28, 2018 12:24 am
### Re: Hess's Law depends on enthalpy as a state function
Hess's law allows you to add two reaction enthalpies together to determine the reaction enthalpy of a third reaction. If enthalpy were not a state function, its value would be dependent on the pathway it took to form the products. This would then invalidate the method of adding two other reaction enthalpies together, as this would be a different pathway than the third composite reaction and would give a different enthalpy value.
Nahelly Alfaro-2C
Posts: 59
Joined: Wed Nov 15, 2017 3:04 am
### Re: Hess's Law depends on enthalpy as a state function
Remember that state properties do not depend on the path taken to obtain that state, and they can also be added or subtracted. Therefore changes in enthalpy (Hess's Law) are additive, like a state function. In addition, the enthalpy changes at each step of a multi-step reaction can be added to give the total enthalpy change.
Catly Do 2E
Posts: 64
Joined: Fri Sep 28, 2018 12:24 am
### Re: Hess's Law depends on enthalpy as a state function
Hess's Law states that the heat of a specific reaction is equal to the sum of the heats of its component reactions. This depends on the fact that enthalpy is a state function because state function values do not depend on the path that is taken to reach that specific value. Therefore, it is okay to add/subtract these values, as stated in Hess's Law. If enthalpy were not a state function, Hess's Law would be false.
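To make the replies concrete, a standard numeric illustration (the enthalpy values are textbook standards in kJ/mol, not from this thread): because H is a state function, the two-step path C → CO → CO2 must total the same as the direct reaction C + O2 → CO2.

```python
dH_C_to_CO = -110.5     # C + 1/2 O2 -> CO
dH_CO_to_CO2 = -283.0   # CO + 1/2 O2 -> CO2
dH_direct = -393.5      # C + O2 -> CO2
print(dH_C_to_CO + dH_CO_to_CO2 == dH_direct)  # -> True
```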
https://www.my.freelancer.com/projects/data-processing-excel/rosendaal-inr/
I need help to create a so-called Rosendaal calculator of INR (a measurement in blood) for our research project. Our data has 3,000,000 rows and here I attached a sample of it in MS Access. PT_ID is a patient identifier. INR is the test value. Date – date of the test. Each patient had multiple measurements – and is supposed to have the test value in the range from 2-3 (inclusive). Our goal is to calculate the percentage of time each patient was within the specified range (2-3).
Example: Patient has an INR reading of 2.4 on October 1st, then a reading of 3.2 on October 17th. Assuming the patient gradually moves toward a reading of 3.2 throughout the 16 day period between Oct. 1st and the 17th, we can estimate that the patient was within their therapeutic range (2-3) for a majority of that time period. The spreadsheet would need to: 1. Calculate the total shift (example 2.4-3.2 = 0.8 increase) and the part of that shift that is within therapeutic range (0.6 of shift within range, 2.4-3.0 = 0.6). 2. Calculate the percent of total shift within therapeutic range (0.6/0.8 = 75%). 3. Estimate the number of days since the last visit that were within range (75% x 16 days since last visit = 0.75 x 16 = 12 days within range, 4 days out of range). Percentage for that time period is 75% in range, and 12 total days in range.
0) We need to exclude patients with fewer than 5 measurements (PT_ID repeated <5 times)
1) We need to sort data ascending for both PT_ID and Date.
2) Calculate the time difference (days) between every test for each Patient
3) Calculate the difference between each test value
4) Calculate the fraction of time each patient was within the specified range (2-3)
I can see the end result as a report where rows are represented by non-duplicate PT_ID values, each with the percentage of time that patient was within the INR range (2-3).
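The interpolation described above is the standard Rosendaal method, and it is easy to prototype outside Access; a Python sketch (function names and the day-numbered input format are my own, not part of the posted database):

```python
def days_in_range(inr1, inr2, days, lo=2.0, hi=3.0):
    """Days (out of `days`) spent inside [lo, hi], assuming the INR
    moved linearly from inr1 to inr2 between the two tests."""
    a, b = sorted((inr1, inr2))
    if a == b:                                  # flat segment
        return days if lo <= a <= hi else 0.0
    overlap = max(0.0, min(b, hi) - max(a, lo))
    return days * overlap / (b - a)

def time_in_range(visits, lo=2.0, hi=3.0):
    """Percent of time in range for one patient, given a date-sorted
    list of (day_number, inr) pairs with at least two entries."""
    total = in_range = 0.0
    for (d1, v1), (d2, v2) in zip(visits, visits[1:]):
        total += d2 - d1
        in_range += days_in_range(v1, v2, d2 - d1, lo, hi)
    return 100.0 * in_range / total if total else None

# The posting's example: 2.4 on Oct 1st, 3.2 on Oct 17th (16 days apart)
print(round(time_in_range([(0, 2.4), (16, 3.2)]), 1))  # -> 75.0
```

Steps 0-1 from the list (sorting by PT_ID and Date, dropping patients with fewer than 5 tests) would be done in a query before feeding each patient's visits to a function like this.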
Skills: Data Processing, Excel, Microsoft Access
About the Employer:
( 2 reviews ) Vancouver, Canada
Project ID: #1543990
torajiv
I am a Lab Technologist by profession and an Access programmer by hobby. I know coding and I know what is INR! More details in PMB Thanks
(71 Reviews)
5.6
## 14 freelancers are bidding on average \$116 for this job
pishty
I am interested in your project.
(30 Reviews)
5.6
imudita
I have done it. Please see PM.
(27 Reviews)
4.6
selopezr
Hello there! It'll be a pleasure to serve you! I'm attaching a sample of my work.
(15 Reviews)
4.2
hfbaker
I will calculate the percentage of time that each patient has an INR reading outside the range 2 to 3 based on the assumption that INR varies linearly between the beginning and end of each interval. I will send a priva… More
(4 Reviews)
3.7
TarekYagh
Hi. I have the required skills and knowledge to satisfy your needs. I have made several Access databases so far. Please check your messages inbox for more details.
(6 Reviews)
3.1
AccuPro
Can do this.
(4 Reviews)
3.0
gerrymwalsh
Hi, I have over 20 years access database development experience and can produce this excel spreadsheet for you.
(1 Review)
2.4
doaabayoumy
Hi, I have experience creating complex queries for accounting purposes; I think I can do this for you easily. Please read your private message
(2 Reviews)
1.6
iosifrn
Hello, I have over 16 years of programming experience in MS Access, VBA, Excel, Web Development, ASP.NET, SQL Server, ORACLE, MySQL, C#, Visual Basic and others. I worked for companies like SIKORSKY, GE Aircraft Engin… More
(1 Review)
1.0
daexpert
Hi, I can help you for getting the final report. I had good skills in MS access. Please refer to PM for more details. Regards,
(0 Reviews)
0.0
samuelfranca
I can help because I have 10 years of experience handling databases in MS Access. Please, if you need more details, you can email me. Regards,
(0 Reviews)
0.0
EdDrakes
Please see the private message and attachment.
http://mathhelpforum.com/differential-geometry/94814-ellipse-geometry-print.html
• Jul 10th 2009, 08:51 AM
Matty B
Ellipse Geometry
Hi All
Can anyone help me with regards to ellipse geometry? (Headbang)
I have an issue where I have to be able to calculate the major and minor axes of an ellipse when I have the circumference.
I know Ellipse Circumference = pi*sqrt((major^2+minor^2)/2)
My relation from major to minor is: major = minor + constant (say 2.5 to start)
This now leaves me with:
Ellipse Circumference = pi*sqrt(((minor + 2.5)^2+minor^2)/2)
I have re-arranged the equation to give me:
2*((ellipse circumference/pi)^2) = (minor + 2.5)^2 + minor^2
Continuing to manipulate I get:
2*((ellipse circumference/pi)^2) = 2*minor^2 + 5*minor + 6.25
I am now stuck on where to go to calculate minor if I know the circumference, as using the quadratic formula (-b ± sqrt(b^2 - 4ac)) / 2a I am left with a negative square root which I can not solve.
Can anyone HELP, where do i go from here?
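Picking up where the post leaves off (my own continuation, still using the thread's approximation formula): rearranged to 2m^2 + 5m + (6.25 - 2(C/pi)^2) = 0, the discriminant is 16(C/pi)^2 - 25, which is positive for any circumference above 5*pi/4 (about 3.93), so the square root should not come out negative for a real-sized ellipse. A sketch:

```python
import math

def minor_axis(circumference, const=2.5):
    """Positive root of 2m^2 + 2c*m + (c^2 - 2(C/pi)^2) = 0, c = const."""
    disc = 16 * (circumference / math.pi) ** 2 - 4 * const ** 2
    return (-2 * const + math.sqrt(disc)) / 4

# round trip: minor 10, major 10 + 2.5 -> circumference -> minor again
c = math.pi * math.sqrt((12.5 ** 2 + 10 ** 2) / 2)
print(round(minor_axis(c), 6))  # -> 10.0
```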
• Jul 10th 2009, 09:29 AM
galactus
I may be misunderstanding, but the length of an ellipse circumference is not
gotten by that formula. As a matter of fact, the circumference of an ellipse is
rather difficult to calculate. That is where Elliptic integrals come in.
• Jul 10th 2009, 11:11 AM
Matty B
Ellipse Geometry
Hi Galactus
I got the approximation formula from the following link
www.csgnetwork.com/circumellipse.html
I've done an initial bit of re-arranging, which is why my formula stated looks different to that in the link.
• Jul 11th 2009, 02:46 AM
simplependulum
For a closed ellipse, the length of the ellipse circumference is
$2\pi a \sum_{n=0}^{\infty} \left(\frac{k}{16}\right)^{n} \frac{\binom{2n}{n}^2}{1-2n}$
with semi-major axis a and semi-minor axis b (a > b), $k = \frac{a^2 - b^2}{a^2}$
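For reference, n here is just the summation index, and the series is easy to evaluate numerically (a sketch, not from the thread): for a circle (a = b, so k = 0) it returns 2*pi*a, as it should.

```python
import math

def perimeter(a, b, terms=200):
    k = (a * a - b * b) / (a * a)
    s = sum((k / 16) ** n * math.comb(2 * n, n) ** 2 / (1 - 2 * n)
            for n in range(terms))
    return 2 * math.pi * a * s

print(round(perimeter(1, 1), 6))   # -> 6.283185 (2*pi: a circle, k = 0)
print(perimeter(2, 1))             # about 9.69 for a 2:1 ellipse
```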
• Jul 12th 2009, 01:21 AM
Matty B
simplependulum
Given that formula you have, can you tell where it came from?
Given that formula you have stated, what is "n", and how would I solve to get a or b? I will still be left with having to solve 2*minor^2 + 5*minor + 6.25 where I replace a with b + constant (2.5)
Cheers
https://www.lawnsite.com/threads/how-much.53085/
Discussion in 'Lawn Mowing' started by bucmaster, Sep 17, 2003.
1. ### bucmaster (LawnSite Member, from Roanoke, VA; Messages: 37)
I have a customer that has beds that he wants brick chips put in.
The total footage is 1236 sq ft or 138 sq yards.
I know there is some type of formula to use but it has been a long time and I am a bit rusty. Can someone get me in the ballpark here?
I come up with about 50 cubic yards. That's got a 10% fudge factor in there.
Am I close? Is this an amount that I should have brought in on a dump truck, or can I haul this much economically on an F250?
Thanks
2. ### Mikes Lawn Landscape (LawnSite Senior Member, from Texas; Messages: 458)
1" Deep = 4 yards
2" Deep = 8 Yards
3" Deep = 12 yards
4" Deep = 16 yards
50 yards will do the area about 12" deep
I think you overestimated just a bit
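Mike's table is just volume arithmetic: cubic yards = area (sq ft) × depth (ft) ÷ 27, rounded up. A quick check (sketch, not from the thread):

```python
import math

# cubic yards = square feet * (depth in feet) / 27
def cubic_yards(sq_ft, depth_inches):
    return sq_ft * (depth_inches / 12) / 27

for d in (1, 2, 3, 4):
    print(d, math.ceil(cubic_yards(1236, d)))  # -> 4, 8, 12, 16 cu yd
print(round(cubic_yards(1236, 12)))            # -> 46 cu yd at a full foot
```

So the original 50-yard estimate corresponds to roughly a foot of depth, far more than a typical 2-4 inch bed.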
3. ### bucmaster (LawnSite Member, from Roanoke, VA; Messages: 37)
How are you figuring this, Mike? Not doubting you, but what method of calculation are you using?
4. ### GLAN (Banned, from Long Island, New York; Messages: 1,647)
http://www.gardenplace.com/content/calculator/mulch_calc.html#
For any product. You can charge 2 - 3X your cost including the labor. Can charge a bit more if the job has obstacles and distance to move the product. Price the job so that it is worth it for you.
And yes, quantity discounts do apply.
Wood Mulch costing you \$20 a yard. Charge 3x, large quantity can go as low as 2x
Gravel costing you \$80 a yard. Charge 2x
Above is just an example
5. ### Mikes Lawn Landscape (LawnSite Senior Member, from Texas; Messages: 458)
Bucmaster,
I use the 80 sq ft per yard at 4" deep rule then from there I can calculate in my head.
I look real funny talking to myself going "lets see 80 at 4 160 at 2 1200 divided by 160 no 160 x 10 = 1600 minus - 320 = 1280 thats two yards - 10 = 8 yards ah hell sell um 10 I'll make more money"
You get the drift Just remember 80 sq ft at 4" and you can calculate any amount from there.
Thanks Mike
https://www.physicsforums.com/threads/point-charges.110013/
1. Feb 9, 2006
### zekester
There are two point charges: q1 located at (4,0,-3) and q2 at (2,0,1). If q2 = 4 nC, find q1 such that the force on a test charge at (5,0,6) has no x-component. I think I would know how to do this problem if I knew what a test charge was. Is it just one electron?
2. Feb 9, 2006
### Tide
The point is that it doesn't matter what the charge is (as long as it's not zero). If you were to place a charge $q_3$ at (5, 0, 6), what would $q_1$ have to be in order that the force acting on $q_3$ due to the electric field produced by $q_1$ and $q_2$ has zero x-component?
3. Feb 9, 2006
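Tide's hint can be checked numerically. A sketch under the obvious assumptions (point charges in vacuum, Coulomb's law); Coulomb's constant and the test charge cancel out of the zero-x-component condition, so neither appears:

```python
def fx_component(src, p):
    # x-component of (p - src) / |p - src|**3, the geometric factor in
    # Coulomb's law for a source charge at src acting on a charge at p
    dx, dy, dz = (p[i] - src[i] for i in range(3))
    return dx / (dx*dx + dy*dy + dz*dz) ** 1.5

p = (5, 0, 6)                      # test-charge location
q2 = 4e-9                          # 4 nC at (2, 0, 1)
f1x = fx_component((4, 0, -3), p)  # q1's geometric factor
f2x = fx_component((2, 0, 1), p)   # q2's geometric factor
# zero x-force: q1*f1x + q2*f2x = 0  =>  q1 = -q2 * f2x / f1x
q1 = -q2 * f2x / f1x               # about -44.95e-9 C, i.e. -44.95 nC
```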
http://mathhelpforum.com/discrete-math/143719-binomial-coeffients.html

1. ## Binomial Coefficients
I have the following in my book:
" $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$
This formula is symmetric in k and n-k:
$\binom{n}{k} = \binom{n}{n-k}$"
I'm trying to understand what it means with 'this formula is symmetric in k and n-k'. I have tried to look for an answer, but haven't found anything. Does anyone know what that is supposed to mean?
2. $\binom{N}{K}=\frac{N!}{K!(N-K)!}$
$\binom{N}{N-K}=\frac{N!}{(N-K)![N-(N-K)]!}=\frac{N!}{(N-K)!(K)!}$
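The algebra in the reply can also be confirmed numerically; a quick check with Python's `math.comb`:

```python
import math

# binomial symmetry: C(n, k) == C(n, n - k) for every k
n = 10
assert all(math.comb(n, k) == math.comb(n, n - k) for k in range(n + 1))
assert math.comb(10, 3) == math.comb(10, 7) == 120
```

Choosing which k elements to include is the same as choosing which n - k elements to leave out, which is all the symmetry statement says.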
http://mytutorialworld.com/home/subjects/mechanical/mechanical-engineering-science/friction-and-types-of-friction/

# What is Friction?
A force acting in the opposite direction to the motion of the body is called force of friction or simply friction.
## Different types of Friction
1. Static Friction : The friction experienced by a body, when it is at rest is known as static friction.
2. Dynamic Friction : The friction experienced by a body, when it is in motion is called dynamic friction. It is also called as kinetic friction. Dynamic friction is further classified into two types.
1. Sliding Friction : The friction experienced by a body, when it slides over another body, is known as sliding friction.
2. Rolling friction : The friction experienced by a body, when balls or rollers are interposed between the two surfaces, is known as rolling friction.
## Advantages of Friction

1. It enables us to walk without slipping.
2. The brakes and tires of our cars and bicycles depend on friction to function properly.
3. The ridges in the skin of our fingers and palms enable us to grasp and hold objects due to friction.
4. Nails and screws are held in wood by friction.
## Disadvantages of Friction

1. The main disadvantage of friction is that it produces heat in various parts of machines. In this way some useful energy is wasted as heat energy.
2. Due to friction we have to exert more power in machines.
3. It opposes the motion.
4. Due to friction, engines of automobiles consume more fuel.
5. Friction reduces the efficiency of engine and other machines.
## Limiting Friction
The maximum value of frictional force, which comes into play, when a body just begins to slide over the surface of the other body, is known as limiting friction.
## Coefficient of Friction
It is defined as the ratio of limiting friction to the normal reaction between the two bodies.
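That ratio definition translates directly into a one-line calculation; the numbers below are made up for illustration:

```python
def coefficient_of_friction(limiting_friction, normal_reaction):
    """mu = limiting friction / normal reaction (both forces in the same units)."""
    return limiting_friction / normal_reaction

# e.g. a block pressed onto a table with a 50 N normal reaction that just
# begins to slide under a 20 N push has mu = 20 / 50 = 0.4
mu = coefficient_of_friction(20.0, 50.0)
```

Because it is a ratio of two forces, the coefficient of friction is dimensionless.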
## Laws of Friction

1. Laws of Static Friction
1. The force of friction always acts in a direction opposite to that in which the body tends to move.
2. The magnitude of the force of friction is exactly equal to the force which tends to move the body.
3. The magnitude of the limiting friction bears a constant ratio to the normal reaction between the two surfaces.
4. The force of friction is independent of the area of contact between the two surfaces.
5. The force of friction depends upon the roughness of the surfaces.
2. Laws of Dynamic Friction
1. The force of friction always acts in a direction opposite to that in which the body tends to move.
2. The magnitude of the kinetic friction bears a constant ratio to the normal reaction between the two surfaces.
3. For moderate speeds, the force of friction remains constant. But it decreases slightly with the increase in speed.
https://us.metamath.org/mpeuni/onnev.html

Metamath Proof Explorer
Theorem onnev 6283
Description: The class of ordinal numbers is not equal to the universe. (Contributed by NM, 16-Jun-2007.) (Proof shortened by Mario Carneiro, 10-Jan-2013.) (Proof shortened by Wolf Lammen, 27-May-2024.)
Assertion

| Ref | Expression |
| --- | --- |
| onnev | On ≠ V |
Proof of Theorem onnev

| Step | Hyp | Ref | Expression |
| --- | --- | --- | --- |
| 1 | | snsn0non 6281 | ¬ {{∅}} ∈ On |
| 2 | | snex 5300 | {{∅}} ∈ V |
| 3 | | id 22 | (On = V → On = V) |
| 4 | 2, 3 | eleqtrrid 2900 | (On = V → {{∅}} ∈ On) |
| 5 | 1, 4 | mto 200 | ¬ On = V |
| 6 | 5 | neir 2993 | On ≠ V |
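In words, the six steps argue by contradiction: {{∅}} exists as a set but is not an ordinal (it is not transitive), so the ordinals cannot exhaust the universe. A sketch of the idea in ordinary notation:

```latex
% \{\{\emptyset\}\} is a set (snex), yet it is not transitive:
% \{\emptyset\}\in\{\{\emptyset\}\} \text{ but } \{\emptyset\}\not\subseteq\{\{\emptyset\}\},
% so \{\{\emptyset\}\}\notin\mathrm{On} (snsn0non).
\mathrm{On}=V \;\Longrightarrow\; \{\{\emptyset\}\}\in\mathrm{On},
\qquad\text{a contradiction, hence}\qquad \mathrm{On}\neq V.
```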
Colors of variables: wff setvar class

Syntax hints: = wceq 1538, ∈ wcel 2112, ≠ wne 2990, V cvv 3444, ∅ c0 4246, { } csn 4528, On con0 6163

This theorem was proved from axioms: ax-mp 5, ax-1 6, ax-2 7, ax-3 8, ax-gen 1797, ax-4 1811, ax-5 1911, ax-6 1970, ax-7 2015, ax-8 2114, ax-9 2122, ax-10 2143, ax-11 2159, ax-12 2176, ax-ext 2773, ax-sep 5170, ax-nul 5177, ax-pr 5298

This theorem depends on definitions: df-bi 210, df-an 400, df-or 845, df-3or 1085, df-3an 1086, df-tru 1541, df-ex 1782, df-nf 1786, df-sb 2070, df-mo 2601, df-eu 2632, df-clab 2780, df-cleq 2794, df-clel 2873, df-nfc 2941, df-ne 2991, df-ral 3114, df-rex 3115, df-rab 3118, df-v 3446, df-sbc 3724, df-dif 3887, df-un 3889, df-in 3891, df-ss 3901, df-pss 3903, df-nul 4247, df-if 4429, df-pw 4502, df-sn 4529, df-pr 4531, df-op 4535, df-uni 4804, df-br 5034, df-opab 5096, df-tr 5140, df-eprel 5433, df-po 5442, df-so 5443, df-fr 5482, df-we 5484, df-ord 6166, df-on 6167

This theorem is referenced by: (None)
Copyright terms: Public domain
https://functions.wolfram.com/ElementaryFunctions/ArcSech/27/02/12/01/01/0003/
ArcSech
http://functions.wolfram.com/01.30.27.0042.01
Input Form
ArcSech[z] == (Sqrt[1/z - 1]/Sqrt[1 - 1/z]) (Pi/2 - I ArcCsch[I z])
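The Input Form identity can be spot-checked numerically. Here `arcsech` and `arccsch` are built from `cmath`'s `acosh`/`asinh`, an assumption that matches the principal-branch definitions at the sample points below (branch cuts are avoided):

```python
import cmath

def arcsech(z):
    return cmath.acosh(1 / z)   # principal branch: arcsech(z) = arccosh(1/z)

def arccsch(z):
    return cmath.asinh(1 / z)   # principal branch: arccsch(z) = arcsinh(1/z)

for z in (1 + 1j, 1j):
    lhs = arcsech(z)
    rhs = (cmath.sqrt(1/z - 1) / cmath.sqrt(1 - 1/z)) * \
          (cmath.pi/2 - 1j * arccsch(1j * z))
    assert abs(lhs - rhs) < 1e-9
```

The prefactor Sqrt[1/z - 1]/Sqrt[1 - 1/z] evaluates to ±i depending on which side of the branch cut 1/z - 1 falls, which is what lets the single formula hold across the plane.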
Standard Form

ArcSech[z] == (Sqrt[1/z - 1]/Sqrt[1 - 1/z]) (Pi/2 - I ArcCsch[I z])
MathML Form

$$\operatorname{sech}^{-1}(z)=\frac{\sqrt{\frac{1}{z}-1}}{\sqrt{1-\frac{1}{z}}}\left(\frac{\pi}{2}-i\,\operatorname{csch}^{-1}(iz)\right)$$
Rule Form

HoldPattern[ArcSech[z_]] :> (Sqrt[1/z - 1]/Sqrt[1 - 1/z]) (Pi/2 - I ArcCsch[I z])
Date Added to functions.wolfram.com (modification date)
2001-10-29
http://www.uff.br/trianglecenters/X0292.html

## X(292) (X(1)-HIRST INVERSE OF X(291))
Interactive Applet
You can move the points A, B and C (click on the point and drag it).
Press the keys “+” and “−” to zoom in or zoom out the visualization window and use the arrow keys to translate it.
You can also construct all centers related with this one (as described in ETC) using the "Run Macro Tool". To do this, click on the Run Macro Tool icon, select the center name from the list and, then, click on the vertices A, B and C successively.
Download all construction files and macros: tc.zip (10.1 Mb).
This applet was built with the free and multiplatform dynamic geometry software C.a.R..
Information from Kimberling's Encyclopedia of Triangle Centers
Trilinears: a/(a² − bc) : b/(b² − ca) : c/(c² − ab)
Barycentrics: a²/(a² − bc) : b²/(b² − ca) : c²/(c² − ab)
X(292) lies on these lines: 1,39 2,334 6,869 9,87 37,86 44,660 58,101 106,813 171,893 269,1020 659,665
X(292) = isogonal conjugate of X(239)
X(292) = isotomic conjugate of X(1921)
X(292) = X(335)-Ceva conjugate of X(295)
X(292) = cevapoint of X(171) and X(238)
X(292) = crossdifference of any two points on line X(659)X(812)
X(292) = X(1)-Hirst inverse of X(291)
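The first entry in the line list above ("1,39") says X(292) is collinear with the incenter X(1) and the Brocard midpoint X(39). A rough numeric sanity check, assuming the standard trilinear-to-Cartesian conversion (the helper names are made up for this sketch):

```python
import math

def trilinears_to_cartesian(A, B, C, t):
    # actual trilinears (x : y : z) -> barycentric weights (a*x, b*y, c*z) -> point
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    w = (a * t[0], b * t[1], c * t[2])
    s = sum(w)
    return (sum(wi * P[0] for wi, P in zip(w, (A, B, C))) / s,
            sum(wi * P[1] for wi, P in zip(w, (A, B, C))) / s)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)   # an arbitrary scalene triangle
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)

x1   = trilinears_to_cartesian(A, B, C, (1, 1, 1))  # X(1), incenter
x39  = trilinears_to_cartesian(A, B, C,
         (a*(b*b + c*c), b*(c*c + a*a), c*(a*a + b*b)))
x292 = trilinears_to_cartesian(A, B, C,
         (a/(a*a - b*c), b/(b*b - c*a), c/(c*c - a*b)))

# collinearity: cross product of (X39 - X1) and (X292 - X1) should vanish
u = (x39[0] - x1[0], x39[1] - x1[1])
v = (x292[0] - x1[0], x292[1] - x1[1])
assert abs(u[0] * v[1] - u[1] * v[0]) < 1e-9
```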
This is a joint work of
Humberto José Bortolossi, Lis Ingrid Roque Lopes Custódio and Suely Machado Meireles Dias.
If you have questions or suggestions, please, contact us using the e-mail presented here.
Departamento de Matemática Aplicada -- Instituto de Matemática -- Universidade Federal Fluminense
https://www.coursehero.com/file/p726gl/Although-not-required-the-following-overview-diagram-is-helpful-to-understand/

ACC 434 Chapter 4
This preview shows page 28 - 32 out of 41 pages.
Although not required, the following overview diagram is helpful to understand Keating's job-costing system. [Overview diagram: indirect cost pools (General Support, allocated by professional labor-hours; Secretarial Support, allocated by partner labor-hours) and direct costs (professional partner labor and professional associate labor) flow to the cost object: job for client.]

1.

| | Professional Partner Labor | Professional Associate Labor |
| --- | --- | --- |
| Budgeted compensation per professional | \$250,000 | \$130,000 |
| Divided by budgeted hours of billable time per professional | ÷ 2,000 | ÷ 2,000 |
| Budgeted direct-cost rate | \$125 per hour* | \$65 per hour* |

*Can also be calculated as total budgeted partner-labor costs ÷ total budgeted partner-labor hours = (\$250,000 × 10) ÷ (2,000 × 10) = \$2,500,000 ÷ 20,000 = \$125, and total budgeted associate-labor costs ÷ total budgeted associate-labor hours = (\$130,000 × 25) ÷ (2,000 × 25) = \$3,250,000 ÷ 50,000 = \$65.

2.

| | General Support | Secretarial Support |
| --- | --- | --- |
| Budgeted total costs | \$4,200,000 | \$1,300,000 |
| Divided by budgeted quantity of allocation base | ÷ 70,000 hours | ÷ 20,000 hours |
| Budgeted indirect-cost rate | \$60 per hour | \$65 per hour |
3.

| | Richardson | Punch |
| --- | --- | --- |
| Professional partners (\$125 × 70; \$125 × 40) | \$8,750 | \$5,000 |
| Professional associates (\$65 × 50; \$65 × 130) | 3,250 | 8,450 |
| Direct costs | \$12,000 | \$13,450 |
| General support (\$60 × 120; \$60 × 170) | 7,200 | 10,200 |
| Secretarial support (\$65 × 70; \$65 × 40) | 4,550 | 2,600 |
| Indirect costs | 11,750 | 12,800 |
| Total costs | \$23,750 | \$26,250 |

4.

| | Richardson | Punch |
| --- | --- | --- |
| Single direct - Single indirect (from Problem 4-32) | \$12,000 | \$18,000 |
| Multiple direct - Multiple indirect (from requirement 3 of Problem 4-33) | 23,750 | 26,250 |
| Difference | \$11,750 undercosted | \$8,250 undercosted |

The Richardson and Punch jobs differ in their use of resources. The Richardson job has a mix of 58% partners and 42% associates, while Punch has a mix of 24% partners and 76% associates. Thus, the Richardson job is a relatively high user of the more costly partner-related resources (both direct partner costs and indirect partner secretarial support). The refined costing system in requirement 3 raises the reported cost of the Richardson job by 97.9% (from \$12,000 to \$23,750) and of the Punch job by 45.8% (from \$18,000 to \$26,250).
4-34 (20-25 min.) Proration of overhead.

1. Budgeted manufacturing overhead rate is \$4,800,000 ÷ 80,000 hours = \$60 per machine-hour.

2. Manufacturing overhead underallocated = manufacturing overhead incurred − manufacturing overhead allocated = \$4,900,000 − \$4,500,000* = \$400,000

*\$60 × 75,000 actual machine-hours = \$4,500,000

a. Write-off to Cost of Goods Sold:

| Account | Balance (Before Proration) | Write-off of \$400,000 Underallocated Overhead | Balance (After Proration) |
| --- | --- | --- | --- |
| Work in Process | \$750,000 | \$0 | \$750,000 |
| Finished Goods | 1,250,000 | 0 | 1,250,000 |
| Cost of Goods Sold | 8,000,000 | 400,000 | 8,400,000 |
| Total | \$10,000,000 | \$400,000 | \$10,400,000 |

b. Proration based on ending balances (before proration) in Work in Process, Finished Goods, and Cost of Goods Sold:

| Account | Balance (Before Proration) | Proration of \$400,000 Underallocated Overhead | Balance (After Proration) |
| --- | --- | --- | --- |
| Work in Process | \$750,000 (7.5%) | 0.075 × \$400,000 = \$30,000 | \$780,000 |
| Finished Goods | 1,250,000 (12.5%) | 0.125 × \$400,000 = 50,000 | 1,300,000 |
| Cost of Goods Sold | 8,000,000 (80.0%) | 0.800 × \$400,000 = 320,000 | 8,320,000 |
| Total | \$10,000,000 (100.0%) | \$400,000 | \$10,400,000 |
c. Proration based on the allocated overhead amount (before proration) in the ending balances of Work in Process, Finished Goods, and Cost of Goods Sold.
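The mechanics of requirement 2b (spreading the \$400,000 in proportion to ending balances) can be sketched generically; this is an illustration, not part of the original solution:

```python
def prorate(underallocated, balances):
    """Spread under-/overallocated overhead pro rata over account balances."""
    total = sum(balances.values())
    return {acct: bal + underallocated * bal / total
            for acct, bal in balances.items()}

after = prorate(400_000, {"Work in Process": 750_000,
                          "Finished Goods": 1_250_000,
                          "Cost of Goods Sold": 8_000_000})
# Work in Process -> 780,000; Finished Goods -> 1,300,000;
# Cost of Goods Sold -> 8,320,000 (total still 10,400,000)
```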
https://answerclassic.com/how-many-loads-of-laundry-can-you-wash-if-you-work-3-shifts/

# how many loads of laundry can you wash if you work 3 shifts?
## How many loads of laundry will each shift pay for if the cost per load rises to 16 quarters? Express your answer numerically.
How many loads of laundry will each shift pay for if the cost per load rises to 16 quarters? Each shift will pay for 3.75 loads of laundry.
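Working backwards from the quoted answer: 3.75 loads at 16 quarters per load implies 60 quarters ($15) of laundry money per shift. That per-shift figure is inferred from the answer, not stated above:

```python
QUARTERS_PER_SHIFT = 60  # inferred: 3.75 loads x 16 quarters/load

def loads_per_shift(cost_per_load_in_quarters):
    return QUARTERS_PER_SHIFT / cost_per_load_in_quarters

loads_per_shift(16)  # 3.75
```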
## How many loads of laundry should you do?
2 to 5 Loads of Laundry Per Week for 1 Person. If one person washes only their own clothes, then 2 to 5 loads of laundry per week is typical; 5 loads per week would be the maximum for one person.
## Which of the following will you use in this lab to generate carbon dioxide gas?
Carbon dioxide is produced whenever an acid reacts with a carbonate. This makes carbon dioxide easy to make in the laboratory. Calcium carbonate and hydrochloric acid are usually used because they are cheap and easy to obtain.
Running several loads of laundry in a row can result in solids going into your drain field. Try to only do full loads of laundry, not partial loads. If you run partial loads, remember to set the washer to the smallest option.
By placing your hand into your machine’s drum, you can see how much space is left. Perfect is if you can fit nothing else in the drum, just your hand and your wash. If you can’t get your hand into the drum, then it’s overloaded.
large loads of laundry, a full load is the more energy-efficient option. If you need to do a smaller load, be sure to choose the appropriate size setting on your washing machine. Too often, consumers select “large” and never change it. Even for energy-efficient front-loaders, that can be a waste.
## How much liquid does this graduated cylinder contain?
The smaller graduated cylinder can measure up to 5.0 mL of liquid. The liquid is found between 3.0 mL and 4.0 mL. More specifically, it is found between 3.4 mL and 3.6 mL.
## Which of the following is are the lowest temperature?
By international agreement, absolute zero is defined as precisely; 0 K on the Kelvin scale, which is a thermodynamic (absolute) temperature scale; and –273.15 degrees Celsius on the Celsius scale.
## What of the following can be classified as matter?
Matter can be classified according to physical and chemical properties. Matter is anything that occupies space and has mass. The three states of matter are solid, liquid, and gas. … Pure substances can be either chemical compounds or elements.
## Does baking soda and vinegar create CO2?
When you combine the solid (baking soda) and the liquid (vinegar), the chemical reaction creates a gas called carbon dioxide. Carbon dioxide is invisible, except as the bubbles of gas you may have noticed when the vinegar and baking soda mixture began to fizz. This gas is what made the balloon inflate.
## Can you remove CO2 from the atmosphere?
Carbon dioxide can be removed from the atmosphere as air passes through a big air filter and then stored deep underground. This technology already exists and is being used on a small scale.
## What happens if you put too much laundry in the washer?
If you overload the washing machine, clothes won’t be free to move around and get clean, because they can’t be properly soaked in water and laundry detergent. In addition, when you overload your washing machine, the washer adds extra stress to the motor and the tub bearings and you could damage your appliance.
## How many loads of laundry is too much for septic?
Septic tanks need to have their water usage minimized to help them function. This means that most people should avoid doing more than one to two loads of laundry using a traditional washing machine per day.
## Can washing machine leak if overloaded?
The washer can leak if it is overloaded or out of balance. Check to be sure the washer is level, reduce load sizes, and keep an eye out for the leak. … If you have a washer that features a spray rinse function, interfering with the cycle by manually advancing the timer can cause the washer to leak.
## What is the average washing machine load size?
Washer Capacity by Type of Washer
Both standard and high-efficiency top load washers range between 3.1 and 4.0 cubic feet. Front load high-efficiency washers can range from 4.0 cubic feet to an extra large capacity of 5.0 cubic feet. Most front loaders are between 4.2 and 4.5 cubic feet.
Front-loading high-efficiency (HE) washing machines can handle more laundry at once while also using less water. The general rule of thumb with them is to fill the drum about 80% of the way. It’s ok to pack the clothes in there within that 80%, but just don’t go overboard. The clothes still need room to tumble around.
## How can I ruin my washing machine?
1. You don’t empty your pockets. …
2. You put lingerie in the washer & dryer. …
3. You use too much detergent. …
4. You cram the washing machine too full. …
5. You leave wet clothes in the washing machine. …
7. You’re overusing dryer sheets. …
8. You’re mixing items.
## How many towels can I wash in one load?
A front-load washer can handle a 12-pound load on average, or about seven bath towels; a top-load washer can usually handle a 15- to 18-pound load, or nine to 11 bath towels.
## Why is it called graduated cylinder?
Why is it called a graduated cylinder? As its name indicates, it is a glass cylinder with marks along the side similar to those on a measuring cup. The volume is read by looking at the top of the fluid from the side and reading the mark on the glass from the lowest portion of the lens-like meniscus of the liquid.
## Why are graduated cylinders more accurate?
The accuracy of a graduated cylinder is higher because the graduations on the cylinder make it easier to more precisely fill, pour, measure, and read the amount of liquid contained within.
## What is graduated cylinder used for?
Graduated cylinders are long, slender vessels used for measuring the volumes of liquids. They are not intended for mixing, stirring, heating, or weighing. Graduated cylinders commonly range in size from 5 mL to 500 mL. Some can even hold volumes of more than a liter.
## What’s the warmest the Earth has ever been?
134.1 °F
According to the World Meteorological Organization (WMO), the highest registered air temperature on Earth was 56.7 °C (134.1 °F) in Furnace Creek Ranch, California, located in Death Valley in the United States, on 10 July 1913.
## What is the hottest degree ever recorded?
134 degrees Fahrenheit
With the Libya record abandoned, the official world record was given to a 134 degrees Fahrenheit (56.7°C) measurement taken at Death Valley on July 10, 1913.
## How hot can it get on Earth?
The highest temperature ever recorded on Earth was long listed as 136 Fahrenheit (58 Celsius) in the Libyan desert, a reading that has since been decertified in favor of Death Valley's 134 °F (56.7 °C). The coldest temperature ever measured was -126 Fahrenheit (-88 Celsius) at Vostok Station in Antarctica.
## What is the two 2 classes of matter?
Classifying Matter
Matter can be classified into several categories. Two broad categories are mixtures and pure substances. A pure substance has a constant composition. All specimens of a pure substance have exactly the same makeup and properties.
## What elements can exist in all three states of matter?
Answer 1: Mercury and water are not the only substances capable of existing in three distinct states of matter. In fact, all of the elements, of which mercury is one, may exist in solid, liquid, or gas forms. Additionally, many substances exhibit more than one solid form, often with very different properties.
## What are the three general classes of matter?
Three States of Matter. The three states of matter are the distinct physical forms that matter can take: solid, liquid, and gas.
## Can I mix vinegar and baking soda in washing machine?
Vinegar and baking soda are the two best agents you can use to clean your washing machine. … One way is to mix 2 cups of vinegar, and 1/4 cup of baking soda and water each, then pour the mixture into the detergent cache of your washing machine. Simply run a cycle at the highest temperature.
## What is in elephant’s toothpaste?
What is Elephant Toothpaste? This large demonstration uses hydrogen peroxide (H2O2), sodium iodide (NaI) and soap. … That is usually 3% hydrogen peroxide, and your local salon probably uses 6%. The 30% hydrogen peroxide is not something you would put on a cut or scrape, but it works perfectly for this demonstration.
## What happens when you shake the bottle with vinegar and baking soda?
A chemical reaction between vinegar and baking soda creates a gas called carbon dioxide. Carbon dioxide is the same type of gas used to make the carbonation in sodas. … There is not enough room in the bottle for the gas to spread out so it leaves through the opening very quickly, causing an eruption!
## How can you test to see if a gas is carbon dioxide?
Carbon dioxide reacts with calcium hydroxide solution to produce a white precipitate of calcium carbonate. Limewater is a solution of calcium hydroxide. If carbon dioxide is bubbled through limewater, the limewater turns milky or cloudy white.
## How Many Clothes Can Be Washed In a 6 kg Washing Machine
Related Searches
how many loads of laundry will each shift pay for
how many dollars are required to do 10 loads of laundry?
which of the following are equal to 5 1010
compute 2.345 4.48697 round the answer appropriately
4.3 x10 6 is between which two numbers
while in europe, if you drive 111 km per day
compute 4.62 4.48697 round the answer appropriately
how many milliliters of liquid does the larger graduated cylinder contain
https://eliteprofessionalwriters.com/2020/08/31/i-have-finished-most-of-my-study-packet-but-am-stuck-on-this-last-series-of-questions-i-have-bits-of-it-done-but-am-not-sure-if-it-is-correct/

# I have finished most of my study packet, but am stuck on this last series of questions. I have bits of it done, but am not sure if it is correct.
I have finished most of my study packet, but am stuck on this last series of questions. I have bits of it done, but am not sure if it is correct.
8. Raj bought a new super bright flashlight for his camping trip. Use your knowledge of circles and parabolas to define equations that model the beam of light and the reflector within the flashlight.
Part I: Raj tested his new flashlight by shining it on his bedroom wall. The beam of light can be described by the equation . How many inches wide is the beam of light on the wall?
Step 1: How many inches wide is the beam of light on the wall? Explain your reasoning.
Step 2: Graph the beam of light on the wall on the coordinate grid below.
Part II: Flashlights have parabolic reflectors that allow the reflected light from the bulb’s filament (focus) to be directed out in parallel beams of light. If the filament of Raj’s flashlight is located at (3, 4) and the directrix is at x= 1, create an equation for the cross section of his flashlight.
Step 1: If the focus is (3, 4) and the directrix is x = 1, identify the location of the vertex. Explain your thinking.
Step 2: The focus, directrix, and vertex can be defined in terms of the variables h, p, and k. Use the information from Step 1 to define h, p, and k for the equation that models the cross section of Raj’s flashlight. Show your work.
Focus (h + p, k)
Directrix x = h – p
Vertex (h, k)
h =
p =
k =
Step 3: Horizontal parabolas can be modeled by the equation . Use the information from Step 2 to write the equation for the cross section of Raj's flashlight. Simplify your answer by writing the equation in standard form, . Show your work.
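Part I's beam equation did not survive in the text above, but Part II can be worked numerically; a sketch with the given focus (3, 4) and directrix x = 1:

```python
import math

# given: focus (3, 4), directrix x = 1 (horizontal parabola, opens right)
fx, fy = 3, 4
directrix_x = 1

h = (fx + directrix_x) / 2   # vertex is halfway between focus and directrix: h = 2
k = fy                       # vertex shares the focus's y-coordinate: k = 4
p = fx - h                   # signed distance from vertex to focus: p = 1

# (y - k)^2 = 4p(x - h)  ->  (y - 4)^2 = 4(x - 2)
def on_parabola(x, y):
    return abs((y - k) ** 2 - 4 * p * (x - h)) < 1e-9

# sanity check: a point on the curve is equidistant from focus and directrix
x, y = 4.25, 7.0             # (7 - 4)^2 = 9 = 4 * (4.25 - 2)
assert on_parabola(x, y)
assert math.isclose(math.dist((x, y), (fx, fy)), abs(x - directrix_x))
```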
https://de.mathworks.com/matlabcentral/cody/problems/645-getting-the-indices-from-a-vector/solutions/3465553

Cody
# Problem 645. Getting the indices from a vector
Solution 3465553
Submitted on 29 Oct 2020
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Fail
out = [3 4]; vec = [11 22 33 44]; thresh = 25; assert(isequal(findIndices(vec, thresh),out))
Unrecognized function or variable 'thres'. Error in findIndices (line 2) a=(vec>thres); Error in Test1 (line 4) assert(isequal(findIndices(vec, thresh),out))
2 Fail
out = [1 2]; vec = [33 44 11 22]; thresh = 25; assert(isequal(findIndices(vec, thresh),out))
Unrecognized function or variable 'thres'. Error in findIndices (line 2) a=(vec>thres); Error in Test2 (line 4) assert(isequal(findIndices(vec, thresh),out))
3 Fail
out = 5:10; vec = 10:10:100; thresh = 45; assert(isequal(findIndices(vec, thresh),out))
Unrecognized function or variable 'thres'. Error in findIndices (line 2) a=(vec>thres); Error in Test3 (line 4) assert(isequal(findIndices(vec, thresh),out))
4 Fail
out = [1 3 4 6 8]; vec = [12 10 13 14 9 17 5 18]; thresh = 11; assert(isequal(findIndices(vec, thresh),out))
Unrecognized function or variable 'thres'. Error in findIndices (line 2) a=(vec>thres); Error in Test4 (line 4) assert(isequal(findIndices(vec, thresh),out))
5 Fail
out = [1:3 7:9]; vec = [50 55 60 15 10 5 43 44 97 41]; thresh = 42; assert(isequal(findIndices(vec, thresh),out))
Unrecognized function or variable 'thres'. Error in findIndices (line 2) a=(vec>thres); Error in Test5 (line 4) assert(isequal(findIndices(vec, thresh),out))
6 Fail
out = 5:8; vec = [10 12 14 16 18 20 22 23 7 8 9]; thresh = 17; assert(isequal(findIndices(vec, thresh),out))
Unrecognized function or variable 'thres'. Error in findIndices (line 2) a=(vec>thres); Error in Test6 (line 4) assert(isequal(findIndices(vec, thresh),out))
7 Fail
out = [2 4:5 8 12:14 16]; vec = [10 81 24 65 97 13 45 68 24 35 16 79 123 76 45 60]; thresh = 51; assert(isequal(findIndices(vec, thresh),out))
Unrecognized function or variable 'thres'. Error in findIndices (line 2) a=(vec>thres); Error in Test7 (line 4) assert(isequal(findIndices(vec, thresh),out))
8 Fail
out = 1:2:9; vec = [11 9 12 8 13 7 14 6 15 5]; thresh = 10; assert(isequal(findIndices(vec, thresh),out))
Unrecognized function or variable 'thres'. Error in findIndices (line 2) a=(vec>thres); Error in Test8 (line 4) assert(isequal(findIndices(vec, thresh),out))
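Every test fails with the same message: the submitted function refers to `thres` while the input argument is named `thresh`. A sketch of the corrected logic — written here in Python for illustration (in MATLAB the equivalent is `find(vec > thresh)`, which also returns 1-based indices):

```python
def find_indices(vec, thresh):
    """Return the 1-based indices of entries strictly greater than thresh."""
    return [i for i, x in enumerate(vec, start=1) if x > thresh]

print(find_indices([11, 22, 33, 44], 25))                        # [3, 4]
print(find_indices([10, 12, 14, 16, 18, 20, 22, 23, 7, 8, 9], 17))  # [5, 6, 7, 8]
```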
https://www.jiskha.com/questions/17274/In-planning-for-a-new-item-a-manufacturer-assumes-that-the-number-of-items-produced
# Phoenix
In planning for a new item, a manufacturer assumes that the number of items produced x and the cost in dollars C of producing these items are related by a linear equation. Projections are that 100 items will cost \$10,000 to produce and that 300 items will cost \$22,000 to produce. Find the equation that relates C and x.
Model it with this equation:
cost = K1*n + K2
where K2, K1 are constants, and n is the number of items.
Now plug the two equations in (for n=100 and n=300) and solve for K1 and K2.
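Carrying out that substitution with the two projections (a quick check of the arithmetic, not part of the original answer):

```python
# Two data points: (number of items n, cost C)
n1, c1 = 100, 10_000
n2, c2 = 300, 22_000

K1 = (c2 - c1) / (n2 - n1)   # slope: cost per item
K2 = c1 - K1 * n1            # fixed cost
print(K1, K2)                # 60.0 4000.0
# So the equation relating C and x is C = 60x + 4000.
```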
https://www.physicsforums.com/threads/acceleration-of-two-masses-on-a-pulley.412699/
# Homework Help: Acceleration of two masses on a pulley
1. Jun 26, 2010
### wwshr87
1. The problem statement, all variables and given/known data
Find the acceleration of the block on the left in terms of M, m and g.
See attachment
2. Relevant equations
F=ma
3. The attempt at a solution
From the free body diagram,
-Mg+Mg+mg=(m+M)*a
a=m*g/(m+M)
I think this is correct, however the solution states the correct answer as
a=m*g/(m+2M), indicating a problem with the signs.
Attached file: pulleys.jpg
2. Jun 26, 2010
### rl.bhat
While finding the acceleration of the masses, you have to take into account the tension in the string.
(M + m)g - T = (M + m)a ......(1)
T - Mg = Ma ..........(2)
Now solve for a.
https://search.r-project.org/CRAN/refmans/CNAIM/html/pof_future_cables_66_33kv.html
pof_future_cables_66_33kv {CNAIM} R Documentation
Future Probability of Failure for 33-66kV cables
Description
This function calculates the future annual probability of failure per kilometer for a 33-66kV cables. The function is a cubic curve that is based on the first three terms of the Taylor series for an exponential function. For more information about the probability of failure function see section 6 on page 30 in CNAIM (2017).
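The "first three terms of the Taylor series for an exponential" means the exponential ageing curve is approximated by a cubic polynomial. A generic illustration of that truncation — my sketch of the mathematical idea only, not the calibrated CNAIM constants, for which see the methodology document:

```python
import math

def cubic_truncation(H):
    """Leading terms of exp(H): 1 + H + H^2/2 + H^3/6."""
    return 1 + H + H**2 / 2 + H**3 / 6

# The cubic tracks exp(H) closely for small H and diverges as H grows.
for H in (0.0, 0.5, 1.0, 2.0):
    print(H, cubic_truncation(H), math.exp(H))
```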
Usage
pof_future_cables_66_33kv(
cable_type = "66kV UG Cable (Gas)",
sub_division = "Aluminium sheath - Aluminium conductor",
utilisation_pct = "Default",
operating_voltage_pct = "Default",
sheath_test = "Default",
partial_discharge = "Default",
fault_hist = "Default",
leakage = "Default",
reliability_factor = "Default",
age,
simulation_end_year = 100
)
Arguments
cable_type	String. A string that refers to the specific asset category. See page 15, table 1 in CNAIM (2017). Options: cable_type = c("33kV UG Cable (Gas)", "66kV UG Cable (Gas)", "33kV UG Cable (Non Pressurised)", "66kV UG Cable (Non Pressurised)", "33kV UG Cable (Oil)", "66kV UG Cable (Oil)"). The default setting is cable_type = "66kV UG Cable (Gas)".
sub_division	String. Refers to the material the sheath and conductor are made of. Options: sub_division = c("Aluminium sheath - Aluminium conductor", "Aluminium sheath - Copper conductor", "Lead sheath - Aluminium conductor", "Lead sheath - Copper conductor")
utilisation_pct	Numeric. The max percentage of utilisation under normal operating conditions.
operating_voltage_pct	Numeric. The ratio in percent of operating/design voltage.
sheath_test	String. Only applied for non pressurised cables. Indicates the state of the sheath. Options: sheath_test = c("Pass", "Failed Minor", "Failed Major", "Default"). See page 141, table 168 in CNAIM (2017).
partial_discharge	String. Only applied for non pressurised cables. Indicates the level of partial discharge. Options: partial_discharge = c("Low", "Medium", "High", "Default"). See page 141, table 169 in CNAIM (2017).
fault_hist	Numeric. Only applied for non pressurised cables. The calculated fault rate for the cable in the period per kilometer. A setting of "No historic faults recorded" indicates no fault. See page 141, table 170 in CNAIM (2017).
leakage	String. Only applied for oil and gas pressurised cables. Options: leakage = c("No (or very low) historic leakage recorded", "Low/ moderate", "High", "Very High", "Default"). See page 142, table 171 (oil) and 172 (gas) in CNAIM (2017).
reliability_factor	Numeric. reliability_factor shall have a value between 0.6 and 1.5. A setting of "Default" sets the reliability_factor to 1. See section 6.14 on page 69 in CNAIM (2017).
age	Numeric. The current age in years of the cable.
simulation_end_year	Numeric. The last year of simulating probability of failure. Default is 100.
Value
Numeric array. Future probability of failure per annum per kilometre for 33-66kV cables.
Source
DNO Common Network Asset Indices Methodology (CNAIM), Health & Criticality - Version 1.1, 2017: https://www.ofgem.gov.uk/system/files/docs/2017/05/dno_common_network_asset_indices_methodology_v1.1.pdf
Examples
# Future probability of failure for 66kV UG Cable (Non Pressurised)
pof_66kV_non_pressurised <-
pof_future_cables_66_33kv(cable_type = "66kV UG Cable (Non Pressurised)",
sub_division = "Aluminium sheath - Aluminium conductor",
utilisation_pct = 75,
operating_voltage_pct = 50,
sheath_test = "Default",
partial_discharge = "Default",
fault_hist = "Default",
leakage = "Default",
reliability_factor = "Default",
age = 1,
simulation_end_year = 100)
# Plot
plot(pof_66kV_non_pressurised$PoF * 100,
type = "l", ylab = "%", xlab = "years",
main = "PoF per kilometre - 66kV UG Cable (Non Pressurised)")
[Package CNAIM version 1.0.1 Index] | 1,036 | 3,804 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.703125 | 3 | CC-MAIN-2021-49 | latest | en | 0.743149 |
https://byjus.com/maths/collinear-points/
# Collinear Points
Collinear points are the points that lie on the same straight line or in a single line. If two or more than two points lie on a line close to or far from each other, then they are said to be collinear, in Euclidean geometry.
## Collinear Points Definition
The term collinear is a combination of two Latin roots: ‘col’, meaning together, and ‘linear’, meaning line. Therefore, collinear points are points that lie together on a single line. You may see many real-life examples of collinearity, such as a group of students standing in a straight line, or a bunch of apples kept in a row, next to each other.
In geometry, two or more points are said to be collinear, if they lie on the same line. Hence the collinear points are the set of points that lie on a single straight line.
### What are Collinear points in Maths?
From the above definition, it is clear that the points which lie on the same line are collinear points. To understand this concept clearly, consider the below figure and try to categorize the collinear and non-collinear points.
In the above figure, the set of collinear points are {A, D}, {A, C, F}, {A, P, R}, {Q, E, R} and {F, B, R}. The remaining points are said to be non-collinear, i.e. {P, B}, {C, E} and so on.
### Collinear Meaning
If we say that three objects are placed in a line, we mean that the objects are collinear.
More generally, the term collinear is used for things arranged in a straight row — that is, “in a row” or “in a line”.
## Non-Collinear Points
The set of points that do not lie on the same line are called non-collinear points. We cannot draw a single straight line through these points. The example of non-collinear points is given below:
## Collinear Points Formula
There are three methods to find the collinear points. They are:
• Distance Formula
• Slope Formula
• Area of triangle
### Using Distance Formula
If P, Q and R are three collinear points, then,
Distance from P to Q + Distance from Q to R = Distance from P to R
PQ + QR = PR
Now, by the distance formula we know, the distance between two points (x1, y1) and (x2, y2) is given by;
$$\begin{array}{l}D=\sqrt{\left(x_{2}-x_{1}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}}\end{array}$$
Hence, we can easily find the distance between the points P, Q and R, with the help of this formula.
### Using Slope Formula
Three or more points are said to be collinear if the slope of any two pairs of points is the same. The slope of the line basically measures the steepness of the line.
Suppose, X, Y and Z are the three points, with which we can form three sets of pairs, such that, XY, YZ and XZ are three pairs of points. Then, as per the slope formula,
If Slope of XY = Slope of YZ = Slope of XZ, then the points X, Y and Z are collinear.
Note: Slope of the line segment joining two points say (x1, y1) and (x2, y2) is given by the formula:
m = (y2 – y1)/ (x2 – x1)
### Using the Area of Triangle Formula
If the area of triangle formed by three points is zero, then they are said to be collinear. It means that if three points are collinear, then they cannot form a triangle.
Suppose, the three points P(x1, y1), Q(x2, y2) and R(x3, y3) are collinear, then by remembering the formula of area of triangle formed by three points we get;
$$\begin{array}{l}\frac{1}{2}\begin{vmatrix} x_{1}-x_{2} & x_{2}-x_{3} \\ y_{1}-y_{2} & y_{2}-y_{3} \end{vmatrix}=0\end{array}$$
Or
(1/2) | [x1(y2 – y3) + x2(y3 – y1) + x3[y1 – y2]| = 0
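The area test above reduces to checking whether a single expression vanishes; a compact version of that check — a sketch added for illustration, not from the article:

```python
def collinear(p, q, r):
    """True if the triangle p-q-r has zero area, i.e. the points lie on one line."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2) == 0

print(collinear((2, 3), (4, 0), (6, -3)))   # True  (Example 3 below)
print(collinear((0, 0), (1, 1), (2, 3)))    # False
```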
## Solved Examples
Example 1: Find if the points P(−3,−1), Q(−1,0), and R(1,1) are collinear.
Solution: The points P, Q and R are collinear, if;
(Distance between P and Q) + (Distance between Q and R) = Distance between P and R
By Distance formula, we can find the distance between two points.
So,
$$\begin{array}{l}\begin{aligned} \text { Distance between } \mathrm{PQ} &=\sqrt{\left(x_{2}-x_{1}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}} \\ &=\sqrt{(-1+3)^{2}+(0+1)^{2}} \\ &=\sqrt{5} \end{aligned}\end{array}$$
$$\begin{array}{l}\begin{aligned} \text { Distance between Q and R } &=\sqrt{\left(x_{3}-x_{2}\right)^{2}+\left(y_{3}-y_{2}\right)^{2}} \\ &=\sqrt{(1+1)^{2}+(1-0)^{2}} \\ &=\sqrt{5} \end{aligned}\end{array}$$
$$\begin{array}{l}\begin{aligned} \text { Distance between P and R } &=\sqrt{\left(x_{3}-x_{1}\right)^{2}+\left(y_{3}-y_{1}\right)^{2}} \\ &=\sqrt{(1+3)^{2}+(1+1)^{2}} \\ &=\sqrt{20}\\ &=2 \sqrt{5} \end{aligned}\end{array}$$
Hence we can conclude that,
√5 + √5 = 2√5
PQ + QR = PR
Therefore, P, Q and R are collinear.
Example 2: Show that the three points P(2, 4), Q(4, 6) and R(6, 8) are collinear.
Solution: If the three points P(2, 4), Q(4, 6) and R(6, 8) are collinear, then slopes of any two pairs of points, PQ, QR & PR will be equal.
Now, using slope formula we can find the slopes of the respective pairs of points, such that;
Slope of PQ = (6 – 4)/ (4 – 2) = 2/2 = 1
Slope of QR = (8 – 6)/ (6 – 4) = 2/2 = 1
Slope of PR = (8 – 4) /(6 – 2) = 4/4 = 1
As we can see, the slopes of all the pairs of points are equal.
Therefore, the three points P, Q and R are collinear.
Example 3: Find if P(2, 3), Q(4, 0) and R(6, -3) are collinear points.
Solution: As per the area of triangle formula for three coordinates in a plane,
$$\begin{array}{l}Area =\frac{1}{2}\begin{vmatrix} 2-4 & 4-6 \\ 3-0 & 0+3 \end{vmatrix}\\ = \frac{1}{2}\begin{vmatrix} -2 & -2\\ 3 & 3 \end{vmatrix}\end{array}$$
Area = ½ (6 – 6)
Area = 0
Hence, the points P, Q and R are collinear.
## Practice Questions
• Check whether the given points P (3,7), Q (6,5) and R (15,-1) are collinear.
• Check if the given points P(0,3), Q (1,5) and C (-1,1) are collinear.
• If A (5,2), B (3,-2) and C (8,8) are three points in a plane. Check whether the points are collinear.
• Show that the points A(1,-1) B(6,4) and C(4,2) are collinear, using the distance formula.
• Prove that the points A(7, -5), B(9, -3) and C(13, 1) are collinear, using the section formula.
For more maths articles, visit byjus.com and download BYJU’S – The Learning App for exciting learning videos.
## Frequently Asked Questions on Collinear Points
Q1
### What are collinear points?
When two or more points lie on the same line, they are called collinear points.
Q2
### Can we draw a straight line through collinear points?
Yes, we can draw a straight line through collinear points.
Q3
### What are non-collinear points?
When the points in a place, does not lie on the same line, then such points are called non-collinear points.
Q4
### How can we determine if the points are collinear?
There are three methods to find if the points are collinear, they are:
Distance formula
Slope formula
Area of triangle
Q5
### If three points are collinear, then the slopes formed by any two pairs of these points are equal. True or False?
True. If three points are collinear, then the slopes formed by any two pairs of these points are equal.
https://origin.geeksforgeeks.org/tag/frequency-counting/page/15/
# Tag Archives: frequency-counting
Given two strings S1 and S2 of lengths M and N respectively, the task is to calculate the sum of the frequencies of the characters… Read More
Given two arrays switch[], consisting of binary integers denoting whether a switch is ON(0) or OFF(1), and query[], where query[i] denotes the switch to be… Read More
Given an array arr[] of size N, and an integer K representing a digit, the task is to print the given array in increasing order… Read More
Given an array arr[] consisting of N positive integers, the task is to find the minimum count of divisions(integer division) of array elements by 2… Read More
Given a string S of length N consisting of 1, 0, and X, the task is to print the character (‘1’ or ‘0’) with the… Read More
Given a string S of length N and two integers M and K, the task is to count the number of substrings of length M… Read More
Given an array arr[] and an array query[] consisting of Q queries, the task for every ith query is to count the number of subarrays… Read More
Given a string S consisting of lowercase alphabets and an integer K, the task is to print all the words that occur K times in… Read More
Given an array arr[] consisting of N positive integers, the task is to modify every array element by reversing them binary representation and count the… Read More
Given an array arr[] consisting of N integers, the task is to count the number of valid pairs (i, j) such that arr[i] + arr[j]… Read More
Given a string S, the task is to check if the string can be split into two substrings such that the number of vowels in… Read More
Given two strings S1 and S2, the task is to check if S2 contains an anagram of S1 as its substring. Examples: Input: S1 =… Read More
Given a binary string, str of length N, the task is to find the maximum sum of the count of 0s on the left substring… Read More
Given two strings S1 and S2 of size N and M respectively, the task is to rearrange characters in string S1 such that S2 is… Read More
Given a binary string S of size N and three positive integers L, R, and K, the task is to find the minimum number of… Read More | 521 | 2,127 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.65625 | 3 | CC-MAIN-2023-23 | latest | en | 0.812006 |
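One of the tasks above — checking whether S2 contains an anagram of S1 as a substring — is a classic frequency-counting problem. A sketch of the standard sliding-window approach (my illustration, not GeeksforGeeks' editorial solution):

```python
from collections import Counter

def contains_anagram(s1, s2):
    """True if some substring of s2 of length len(s1) is an anagram of s1."""
    k = len(s1)
    if k == 0 or k > len(s2):
        return k == 0
    need = Counter(s1)
    window = Counter(s2[:k])         # character counts of the current window
    if window == need:
        return True
    for i in range(k, len(s2)):
        window[s2[i]] += 1           # slide the window one character right
        window[s2[i - k]] -= 1
        if window[s2[i - k]] == 0:
            del window[s2[i - k]]    # drop zero counts so equality works
        if window == need:
            return True
    return False

print(contains_anagram("ab", "eidbaooo"))  # True  ("ba" is an anagram of "ab")
print(contains_anagram("ab", "eidboaoo"))  # False
```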
https://plainmath.net/calculus-2/49223-find-definite-integral-int-0-2-x-2-3x-3-plus-1-1-3-dx
Joan Thompson
2021-12-31
Find definite integral.
${\int }_{0}^{2}{x}^{2}{\left(3{x}^{3}+1\right)}^{\frac{1}{3}}dx$
Jenny Sheppard
Expert
Step 1
The given integral can be solved by the method of substitution. The substitution that will be used is
$3{x}^{3}+1=u$. This gives $9{x}^{2}dx=du$ or ${x}^{2}dx=\frac{du}{9}$.
This substitution absorbs the ${x}^{2}dx$ term into $\frac{du}{9}$. Calculate the corresponding limits of the integration in terms of the new variable u.
Step 2
Limits of integration for x are from 0 to 2. New variable for integration is $u=3{x}^{3}+1$. So the lower limit of integration in terms of new variable will be 1. Calculate the upper limit by substituting x=2.
$u=3\cdot {2}^{3}+1$
=3*8+1
=25
So, the integration with this substitution becomes ${\int }_{1}^{25}{u}^{\frac{1}{3}}\frac{du}{9}$. Calculate this integral using the integral
$\int {x}^{n}dx=\frac{{x}^{n+1}}{n+1}$.
${\int }_{1}^{25}{u}^{\frac{1}{3}}\frac{du}{9}=\frac{1}{9}{\left(\frac{{u}^{\frac{1}{3}+1}}{\frac{1}{3}+1}\right)}_{1}^{25}$
$=\frac{1}{9}\cdot \frac{3}{4}{\left({u}^{\frac{4}{3}}\right)}_{1}^{25}$
$=\frac{1}{12}\left({25}^{\frac{4}{3}}-1\right)$
Hence, the given definite integral is equal to $\frac{1}{12}\left({25}^{\frac{4}{3}}-1\right)$
Shawn Kim
Expert
${\int }_{0}^{2}{x}^{2}\left(3{x}^{3}+1{\right)}^{1/3}dx=\int {x}^{2}\sqrt[3]{3{x}^{3}+1}dx$
$=\frac{1}{9}\int \sqrt[3]{u}du$
$\int \sqrt[3]{u}du$
$=\frac{3{u}^{\frac{4}{3}}}{4}$
$\frac{1}{9}\int \sqrt[3]{u}du$
$=\frac{{u}^{\frac{4}{3}}}{12}$
$=\frac{{\left(3{x}^{3}+1\right)}^{\frac{4}{3}}}{12}$
$\int {x}^{2}\sqrt[3]{3{x}^{3}+1}$
$=\frac{{\left(3{x}^{3}+1\right)}^{\frac{4}{3}}}{12}+C$
Vasquez
Expert
$\begin{array}{}{\int }_{2}^{0}{x}^{2}\left(3{x}^{3}+1{\right)}^{1/3}dx\\ \int {x}^{2}×\left(3{x}^{3}+1{\right)}^{1/3}dx\\ \int \frac{{t}^{\frac{1}{3}}}{9}dt\\ \frac{1}{9}×\int {t}^{\frac{1}{3}}dt\\ \frac{1}{9}×\frac{3t\sqrt[3]{t}}{4}\\ \frac{1}{9}×\frac{3\left(3{x}^{3}+1\right)\sqrt[3]{3{x}^{3}+1}}{4}\\ \frac{\left(3{x}^{3}+1\right)\sqrt[3]{3{x}^{3}+1}}{12}\\ \frac{\left(3{x}^{3}+1\right)\sqrt[3]{3{x}^{3}+1}}{12}{|}_{0}^{2}\\ \frac{\left(3×{2}^{3}+1\right)\sqrt[3]{3×{2}^{3}+1}}{12}-\frac{\left(3×{0}^{3}+1\right)\sqrt[3]{3×{0}^{3}+1}}{12}\\ Answer:\\ \frac{25\sqrt[3]{25}-1}{12}\end{array}$ | 1,067 | 2,273 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 50, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.65625 | 5 | CC-MAIN-2023-06 | latest | en | 0.52584 |
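All three answers agree on (25^{4/3} − 1)/12 ≈ 6.008. A numerical cross-check with composite Simpson's rule — a verification sketch, not part of the thread:

```python
def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: x**2 * (3 * x**3 + 1) ** (1 / 3)
numeric = simpson(f, 0.0, 2.0)
exact = (25 ** (4 / 3) - 1) / 12
print(numeric, exact)   # both ≈ 6.0084
```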
https://discourse.julialang.org/t/numerical-code-warnings-and-errors-due-to-scopes/117946
# Numerical-Code: Warnings and Errors due to scopes
I am a newbie in Julia and I have written the following code:
``````using LinearAlgebra
using Plots
t = 0;
epsilon = 0.1;
########## initial data: Sod problem (rho,u,T) ##########
Prim_l = [1 0 1]; # 0 <= x <= 0.5
Prim_r = [0.125 0 0.1]; # 0.5 < x <= 1
########## domain in space ##########
x_start = 0; # start
x_end = 1; # end
k = 250; # cell discretization, grid-points
dx = (x_end-x_start)/k;
########## velocity space (v = molecular velocities at certain point x at time t) ##########
v_start = -4.5; # start
v_end = 4.5; # end
v_nodes = 100; # velocity discretization, velocity-points
v = collect(range(v_start,v_end,v_nodes)); # collect(range(a,b,n)) <-> LinRange(x): works!
w = 9/v_nodes*ones(size(v)); # width (for numerical integration)
########## calculation initial elements of U ##########
xx = LinRange(x_start,x_end,k+1);
xm = 0.5*(xx[1:end-1] + xx[2:end]);
U = zeros(k,3);
g_val = zeros(k+1,v_nodes);
U[:,1] = (xm.<=0.5)*Prim_l[1]+(xm.>0.5)*Prim_r[1]; # rho
U[:,2] = (xm.<=0.5)*Prim_l[2]+(xm.>0.5)*Prim_r[2]; # u bulk velocity of the gas
U[:,3] = (xm.<=0.5)*Prim_l[3]+(xm.>0.5)*Prim_r[3]; # temperatur
########## initial data: Flux, Source ##########
Flux = zeros(k+1,3);
Source = zeros(k,3);
########## parts of v ##########
vvp = kron(ones(k+1,1),v'.*(v'.>0)); # v^{+}
vvm = kron(ones(k+1,1),v'.*(v'.<0)); # v^{-}
vv = kron(ones(k+1,1),v'); # v
vv_s = kron(ones(k,1),v');
dt = 0.0005; # time-step
t_end = 0.14; # running time
while t < t_end
M = Maxwell(U, v);
vM = M[[1;1:k],:].*vvp + M[[1:k;k],:].*vvm; # term: <m*(v^{+}*M^{n}_{i} + v^{-}*M^{n}_{i+1})>
vg_dx = (g_val[1:k+1,:]-g_val[[1;1:k],:]).*vvp/dx + (g_val[[2:k+1;k+1],:]-g_val[1:k+1,:]).*vvm/dx; # term: [v^{+}*(g^{n}_{i+1/2}-g^{n}_{i-1/2})/deltax + v^{-}*(g^{n}_{i+3/2}-g^{n}_{i+1/2})/deltax]
vM_dx = vv.*(M[[1:k;k],:]-M[[1;1:k],:])/dx; # term: (v*(M^{n}_{i+1}-M^{n}_{i})/deltax)
A = id_min_proj(U, M, vg_dx, vM_dx, v, w); # return value id_min_proj: tensor([251,100],2)
vg_dx = A[1];
vM_dx = A[2];
g_val = (g_val - dt*vg_dx - dt/epsilon*vM_dx)/(1+dt/epsilon); # kinetic equation
Flux[:,1] = vM*w;
Flux[:,2] = (vv.*vM)*w;
Flux[:,3] = 0.5*(vv.^2 .*vM)*w;
g_dx = (g_val[2:k+1,:]-g_val[1:k,:])/dx;
Source[:,1] = g_dx*diagm(w)*v;
Source[:,2] = (vv_s.*g_dx)*diagm(w)*v;
Source[:,3] = 0.5*(vv_s.^2 .*g_dx)*diagm(w)*v;
U = U - dt/dx*(Flux[2:end,:]-Flux[1:end-1,:]) - dt*epsilon*Source; # macroscopic equation
t = t+dt;
end
rho = U[:,1];
u = U[:,2]./rho;
T = 2*U[:,3]./rho-u.^2;
``````
``````using LinearAlgebra
function id_min_proj(U, M, vg, vM, v, w)
rho = [U[1,1];U[:,1]];
u = [U[2,2];U[:,2]]./rho;
T = 2*[U[3,3];U[:,3]]./rho-u.^2;
M = [M[1,:]';M];
vv = kron(ones(size(rho)),v');
uu = kron(u,ones(size(v')));
TT = kron(T,ones(size(v')));
rrho = kron(rho,ones(size(v')));
########## terms of vg ##########
mean_phi = vg*w; # term: <phi>
mean_vphi = ((vv-uu).*vg)*w; # term: <(v-u)*phi>
mean_v2phi = ((0.5*(vv-uu).^2 ./TT .-0.5).*vg)*w; # term: <((v-u)^2/2T - 1/2)*phi>; d=1, because one-dimensional gas
vg_proj = (kron((mean_phi./rho), ones(size(v'))) + kron((mean_vphi./(rho.*T)), ones(size(v'))).*(vv-uu) + kron((mean_v2phi./rho), ones(size(v'))).*((vv-uu).^2 ./TT .-1)).*M;
ret_vg = vg_proj;
########## terms of vM ##########
mean_phi = vM*w; # term: <phi>
mean_vphi = ((vv-uu).*vM)*w; # term: <(v-u)*phi>
mean_v2phi = ((0.5*(vv-uu).^2 ./TT .-0.5).*vM)*w; # term: <((v-u)^2/2T - 1/2)*phi>; d=1, because one-dimensional gas
vM_proj = (kron((mean_phi./rho), ones(size(v'))) + kron((mean_vphi./(rho.*T)), ones(size(v'))).*(vv-uu) + kron((mean_v2phi./rho), ones(size(v'))).*((vv-uu).^2 ./TT .-1)).*M;
ret_vM = vM_proj;
return ret_vg, ret_vM
end
``````
``````using LinearAlgebra
function Maxwell(U, v)
rho = U[:,1]; # rho
u = U[:,2]./rho; # u bulk velocity of the gas
T = 2*U[:,3]./rho-u.^2; # temperatur
vv = kron(ones(size(rho)),v');
uu = kron(u,ones(size(v')));
TT = kron(T,ones(size(v')));
rrho = kron(rho,ones(size(v')));
return rrho./sqrt.(2*pi*TT).*exp.((-0.5*(vv-uu).^2)./TT);
end
``````
I get the following output in REPL:
``````┌ Warning: Assignment to `g_val` in soft scope is ambiguous because a global variable by the same name exists: `g_val` will be treated as a new local. Disambiguate by using `local g_val` to suppress this warning or `global g_val` to assign to the existing global variable.
└ @ ~/Julia.Projekte/Test.jl:71
┌ Warning: Assignment to `U` in soft scope is ambiguous because a global variable by the same name exists: `U` will be treated as a new local. Disambiguate by using `local U` to suppress this warning or `global U` to assign to the existing global variable.
└ @ ~/Julia.Projekte/Test.jl:82
┌ Warning: Assignment to `t` in soft scope is ambiguous because a global variable by the same name exists: `t` will be treated as a new local. Disambiguate by using `local t` to suppress this warning or `global t` to assign to the existing global variable.
└ @ ~/Julia.Projekte/Test.jl:84
ERROR: UndefVarError: `U` not defined
Stacktrace:
[1] top-level scope
@ ~/Julia.Projekte/Test.jl:61
┌ Warning: Assignment to `g_val` in soft scope is ambiguous because a global variable by the same name exists: `g_val` will be treated as a new local. Disambiguate by using `local g_val` to suppress this warning or `global g_val` to assign to the existing global variable.
└ @ ~/Julia.Projekte/Test.jl:71
┌ Warning: Assignment to `U` in soft scope is ambiguous because a global variable by the same name exists: `U` will be treated as a new local. Disambiguate by using `local U` to suppress this warning or `global U` to assign to the existing global variable.
└ @ ~/Julia.Projekte/Test.jl:82
┌ Warning: Assignment to `t` in soft scope is ambiguous because a global variable by the same name exists: `t` will be treated as a new local. Disambiguate by using `local t` to suppress this warning or `global t` to assign to the existing global variable.
└ @ ~/Julia.Projekte/Test.jl:84
ERROR: UndefVarError: `U` not defined
Stacktrace:
[1] top-level scope
@ ~/Julia.Projekte/Test.jl:61
``````
Okay … the matrix M is not calculated, I think, due to the warnings. I took some looks at books about Julia with code examples. I cannot find any hints with this scopes. All in all … this code works in Matlab. I have re-write this code in Julia, with some changes (such as the matrix-notations).
The code seems to be work, when I set’ global’ (but I think, this is the the final BEST solution!):
``````using LinearAlgebra
using Plots
t = 0;
epsilon = 0.1;
########## initial data: Sod problem (rho,u,T) ##########
Prim_l = [1 0 1]; # 0 <= x <= 0.5
Prim_r = [0.125 0 0.1]; # 0.5 < x <= 1
########## domain in space ##########
x_start = 0; # start
x_end = 1; # end
k = 250; # cell discretization, grid-points
dx = (x_end-x_start)/k;
########## velocity space (v = molecular velocities at certain point x at time t) ##########
v_start = -4.5; # start
v_end = 4.5; # end
v_nodes = 100; # velocity discretization, velocity-points
v = collect(range(v_start,v_end,v_nodes)); # collect(range(a,b,n)) <-> LinRange(x): works!
w = 9/v_nodes*ones(size(v)); # width (for numerical integration)
########## calculation initial elements of U ##########
xx = LinRange(x_start,x_end,k+1);
xm = 0.5*(xx[1:end-1] + xx[2:end]);
U = zeros(k,3);
g_val = zeros(k+1,v_nodes);
U[:,1] = (xm.<=0.5)*Prim_l[1]+(xm.>0.5)*Prim_r[1]; # rho
U[:,2] = (xm.<=0.5)*Prim_l[2]+(xm.>0.5)*Prim_r[2]; # u bulk velocity of the gas
U[:,3] = (xm.<=0.5)*Prim_l[3]+(xm.>0.5)*Prim_r[3]; # temperatur
########## initial data: Flux, Source ##########
Flux = zeros(k+1,3);
Source = zeros(k,3);
########## parts of v ##########
vvp = kron(ones(k+1,1),v'.*(v'.>0)); # v^{+}
vvm = kron(ones(k+1,1),v'.*(v'.<0)); # v^{-}
vv = kron(ones(k+1,1),v'); # v
vv_s = kron(ones(k,1),v');
dt = 0.0005; # time-step
t_end = 0.14; # running time
while t < t_end
global M = Maxwell(U, v);
global vM = M[[1;1:k],:].*vvp + M[[1:k;k],:].*vvm; # term: <m*(v^{+}*M^{n}_{i} + v^{-}*M^{n}_{i+1})>
global vg_dx = (g_val[1:k+1,:]-g_val[[1;1:k],:]).*vvp/dx + (g_val[[2:k+1;k+1],:]-g_val[1:k+1,:]).*vvm/dx; # term: [v^{+}*(g^{n}_{i+1/2}-g^{n}_{i-1/2})/deltax + v^{-}*(g^{n}_{i+3/2}-g^{n}_{i+1/2})/deltax]
global vM_dx = vv.*(M[[1:k;k],:]-M[[1;1:k],:])/dx; # term: (v*(M^{n}_{i+1}-M^{n}_{i})/deltax)
global A = id_min_proj(U, M, vg_dx, vM_dx, v, w); # return value id_min_proj: tensor([251,100],2)
vg_dx = A[1];
vM_dx = A[2];
global g_val = (g_val - dt*vg_dx - dt/epsilon*vM_dx)/(1+dt/epsilon); # kinetic equation
Flux[:,1] = vM*w;
Flux[:,2] = (vv.*vM)*w;
Flux[:,3] = 0.5*(vv.^2 .*vM)*w;
global g_dx = (g_val[2:k+1,:]-g_val[1:k,:])/dx;
Source[:,1] = g_dx*diagm(w)*v;
Source[:,2] = (vv_s.*g_dx)*diagm(w)*v;
Source[:,3] = 0.5*(vv_s.^2 .*g_dx)*diagm(w)*v;
global U = U - dt/dx*(Flux[2:end,:]-Flux[1:end-1,:]) - dt*epsilon*Source; # macroscopic equation
global t = t+dt;
end
rho = U[:,1];
u = U[:,2]./rho;
T = 2*U[:,3]./rho-u.^2;
plot(xm, rho)
``````
Kind regars!
Hi there!
You kinds disregarded the most important rule in Julia: don’t compute stuff in global scope! It’s slow and the scoping works a bit differently (as you experience).
So the quick fix is just to wrap your whole code (expect the `using` statements) in the first code block you posted in a function (e.g. `main()`) and then call it. That should be sufficient to make everything work (if the code is otherwise correct).
Hello,
When were scopes introduced in Julia?
I have some books about Julia, and nothing is mentioned about scopes there.
I used some code example from ‘Ferreira Introduction to Computational Physics with examples in Julia, 2016’.
Of course, these codes work with Julia due to scope problems.
I can’t tell you when the current scoping rules where implemented, but I think the latest change was soft global scope for the REPL which should’nt influence your code. You can read about scopes here:
https://docs.julialang.org/en/v1/manual/variables-and-scoping/
Please note that examples from 2016 are ancient with respect to Julia. Julia 1.0 was released in 2018. So you should probably find a more upto date source.
How does the Main() function look like, for example? I need corresponding inputs for functions, which are defined in brackets.
The quick fix is really just wrapping everything (except `using` statements) in a function and then calling that:
``````using LinearAlgebra
using Plots
function main() # wrap everything in a function
t = 0;
epsilon = 0.1;
########## initial data: Sod problem (rho,u,T) ##########
Prim_l = [1 0 1]; # 0 <= x <= 0.5
Prim_r = [0.125 0 0.1]; # 0.5 < x <= 1
########## domain in space ##########
x_start = 0; # start
x_end = 1; # end
k = 250; # cell discretization, grid-points
dx = (x_end-x_start)/k;
########## velocity space (v = molecular velocities at certain point x at time t) ##########
v_start = -4.5; # start
v_end = 4.5; # end
v_nodes = 100; # velocity discretization, velocity-points
v = collect(range(v_start,v_end,v_nodes)); # collect(range(a,b,n)) <-> LinRange(x): works!
w = 9/v_nodes*ones(size(v)); # width (for numerical integration)
########## calculation initial elements of U ##########
xx = LinRange(x_start,x_end,k+1);
xm = 0.5*(xx[1:end-1] + xx[2:end]);
U = zeros(k,3);
g_val = zeros(k+1,v_nodes);
U[:,1] = (xm.<=0.5)*Prim_l[1]+(xm.>0.5)*Prim_r[1]; # rho
U[:,2] = (xm.<=0.5)*Prim_l[2]+(xm.>0.5)*Prim_r[2]; # u bulk velocity of the gas
U[:,3] = (xm.<=0.5)*Prim_l[3]+(xm.>0.5)*Prim_r[3]; # temperatur
########## initial data: Flux, Source ##########
Flux = zeros(k+1,3);
Source = zeros(k,3);
########## parts of v ##########
vvp = kron(ones(k+1,1),v'.*(v'.>0)); # v^{+}
vvm = kron(ones(k+1,1),v'.*(v'.<0)); # v^{-}
vv = kron(ones(k+1,1),v'); # v
vv_s = kron(ones(k,1),v');
dt = 0.0005; # time-step
t_end = 0.14; # running time
while t < t_end
M = Maxwell(U, v);
vM = M[[1;1:k],:].*vvp + M[[1:k;k],:].*vvm; # term: <m*(v^{+}*M^{n}_{i} + v^{-}*M^{n}_{i+1})>
vg_dx = (g_val[1:k+1,:]-g_val[[1;1:k],:]).*vvp/dx + (g_val[[2:k+1;k+1],:]-g_val[1:k+1,:]).*vvm/dx; # term: [v^{+}*(g^{n}_{i+1/2}-g^{n}_{i-1/2})/deltax + v^{-}*(g^{n}_{i+3/2}-g^{n}_{i+1/2})/deltax]
vM_dx = vv.*(M[[1:k;k],:]-M[[1;1:k],:])/dx; # term: (v*(M^{n}_{i+1}-M^{n}_{i})/deltax)
A = id_min_proj(U, M, vg_dx, vM_dx, v, w); # return value id_min_proj: tensor([251,100],2)
vg_dx = A[1];
vM_dx = A[2];
g_val = (g_val - dt*vg_dx - dt/epsilon*vM_dx)/(1+dt/epsilon); # kinetic equation
Flux[:,1] = vM*w;
Flux[:,2] = (vv.*vM)*w;
Flux[:,3] = 0.5*(vv.^2 .*vM)*w;
g_dx = (g_val[2:k+1,:]-g_val[1:k,:])/dx;
Source[:,1] = g_dx*diagm(w)*v;
Source[:,2] = (vv_s.*g_dx)*diagm(w)*v;
Source[:,3] = 0.5*(vv_s.^2 .*g_dx)*diagm(w)*v;
U = U - dt/dx*(Flux[2:end,:]-Flux[1:end-1,:]) - dt*epsilon*Source; # macroscopic equation
t = t+dt;
end
rho = U[:,1];
u = U[:,2]./rho;
T = 2*U[:,3]./rho-u.^2;
end
main() # now call it
`````` | 4,973 | 15,189 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.15625 | 3 | CC-MAIN-2024-38 | latest | en | 0.567521 |
https://complexzeta.wordpress.com/2007/05/30/lexicographic-codes-nim-and-simple-groups/ | 1,531,848,588,000,000,000 | text/html | crawl-data/CC-MAIN-2018-30/segments/1531676589757.30/warc/CC-MAIN-20180717164437-20180717184437-00108.warc.gz | 633,862,521 | 14,227 | We first consider integral lexicographic codes (henceforth lexicodes). We enumerate all (infinite to the left) sequences of nonnegative integers that differ in at least $m$ positions from any previous sequence. The first few of these (with $m=3$) are the following:
…00000
…00111
…00222
…00333
…00444
…00555
…00666
and so forth. But after all those are done, we can continue with:
…01012
…01103
…01230
…01321
…01456
…01547
…01674
…01765
and so forth. After we have finished with all these, we can continue with
…02023
and so forth.
I think one of the nicest possible properties a code can have is linearity (i.e. it forms a vector space over some field). However, it might not appear that this lexicographic code is linear. First of all, there is no obvious field of scalars (since $\mathbb{N}$ is certainly not a field). Furthermore, the addition and multiplication don’t work out correctly. If we add …00222 and …01012, we get …01234, but this is not in our lexicode. Multiplication doesn’t work either: if we multiply …01012 by 2, we get …02024, which is also not in our lexicode.
However, this lexicode is linear! It forms a vector space over the field of (finite) nimbers! So lexicographic codes are mysteriously connected to combinatorial games.
So that’s one kind of lexicode. We can also consider binary lexicodes. In this case, we will restrict ourselves to 24-bit binary lexicodes with $m=8$. The first few sequences are
000000000000000000000000
000000000000000011111111
000000000000111100001111
000000000000111111110000
000000000011001100110011
and so forth. The list will ultimately contain 4096 sequences. There are 759 of them that contain exactly 8 1’s. These 759 have a very interesting property: for any five-element subset of $\{1,2,3,\ldots,24\}$, there is exactly one (of the 759) sequences that has 1’s in exactly those five positions. In other words, these sequences form the Steiner system $S(5,8,24)$.
Now for the simple groups. Well, the Mathieu group $M_{24}$ is the automorphism group of $S(5,8,24)$. $M_{24}$ is one of the 26 sporadic finite simple groups. | 585 | 2,113 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.40625 | 3 | CC-MAIN-2018-30 | latest | en | 0.894972 |
https://forum.math.toronto.edu/index.php?PHPSESSID=d06s0sgcn89udn6n7c5jessh97&topic=466.0;prev_next=prev | 1,652,812,604,000,000,000 | text/html | crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00382.warc.gz | 311,609,634 | 6,210 | ### Author Topic: MT problem 1 (Read 3972 times)
#### Victor Ivrii
• Elder Member
• Posts: 2599
• Karma: 0
##### MT problem 1
« on: October 29, 2014, 08:56:45 PM »
If exists, find the integrating factor $\mu(x,y)\$ depending only on $\ x\$, only on $\ y\$ and on $\ x \cdot y\$ justifying your answers and then solve the ODE
\begin{equation*}
\left(3x + \frac{6}{y}\right) + \left(\frac{x^2}{y}+\frac{3y}{x}\right)y'=0.
\end{equation*}
Also, find the solution satisfying $y(1) = 2\$.
#### Roro Sihui Yap
• Full Member
• Posts: 30
• Karma: 16
##### Re: MT problem 1
« Reply #1 on: October 30, 2014, 12:10:23 AM »
\begin{equation*}
\left(3x + \frac{6}{y}\right) + \left(\frac{x^2}{y}+\frac{3y}{x}\right)y'=0.
\end{equation*}
Let $M(x,y) = 3x + \frac{6}{y} \\$ and $N(x,y) = \frac{x^2}{y}+\frac{3y}{x}$. Then $M_y = \frac{-6}{y^2}$, $N_x = \frac{2x}{y}+\frac{-3y}{x^2}$
The equation is not exact. Consider $\frac{M_y - N_x}{N(x,y)}$,
\begin{equation*}
\frac{M_y - N_x}{N(x,y)} = \frac{\frac{-6}{y^2} - \frac{2x}{y} + \frac{3y}{x^2}}{\frac{x^2}{y}+\frac{3y}{x}} = \frac{-6x^2 - 2x^3y + 3y^3}{x^4y + 3y^3x}
\end{equation*}
This is not a function of $x$ only. The integrating factor depending only on $x$ does not exist
Consider $\frac{N_x - M_y}{M(x,y)}$,
\begin{equation*}
\frac{N_x - M_y}{M(x,y)} = \frac{\frac{2x}{y} - \frac{3y}{x^2} + \frac{6}{y^2}}{3x + \frac{6}{y}} = \frac{6x^2 + 2x^3y - 3y^3}{3x^3y^2 + 6x^2y}
\end{equation*}
This is not a function of $y$ only. The integrating factor depending only on $y$ does not exist.
Consider $\frac{N_x - M_y}{(x)M(x,y) - (y)N(x,y)}$,
\begin{equation*}
\frac{N_x - M_y}{(x)M - (y)N} = \frac{\frac{2x}{y} - \frac{3y}{x^2} + \frac{6}{y^2}}{3x^2 + \frac{6x}{y} - x^2 - \frac{3y^2}{x}} = \frac{6x^2 + 2x^3y - 3y^3}{6x^3y + 2x^4y^2 - 3y^4x} = \frac{1}{xy}
\end{equation*}
This is a function of $xy$ only. $\frac{d\mu}{dxy} = \frac{\mu}{xy}$. Let $u = xy$; $\frac{d\mu}{du} = \frac{\mu}{u}$, $\mu = u$, $\mu = xy$
Multiply the equation by $xy$
\begin{equation*}
\left(3x^2y + 6x\right) + \left(x^3+3y^2\right)y'=0.
\end{equation*}
Let $M(x,y) = 3x^2y + 6x$ and $N(x,y) = x^3+3y^2$.
There is a $\Psi(x, y)$ such that:
\begin{gather}
\Psi _x(x, y) = M(x,y) = 3x^2y + 6x \label{A}\\
\Psi _y (x, y) = N(x,y) = x^3+3y^2 \label{B}\end{gather}
Integrating equation (\ref{A}), $\Psi(x, y) = x^3y + 3x^2 + f(y)$, $\Psi_y (x, y) = x^3 + f'(y)$.
Comparing with equation (\ref{B}), $f'(y) = 3y^2$ and $f(y) = y^3$
Therefore $\Psi(x, y) = x^3y + 3x^2 + y^3 = c$.
When $x=1$, $y=2$, $2 + 3 + 8 = c$, $c = 13\implies \Psi(x, y) = x^3y + 3x^2 + y^3 = 13$
« Last Edit: October 30, 2014, 06:34:15 AM by Victor Ivrii » | 1,271 | 2,660 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.34375 | 4 | CC-MAIN-2022-21 | latest | en | 0.514506 |
https://www.reference.com/math/calculate-weighted-average-9c60dd7e140b321b | 1,487,617,999,000,000,000 | text/html | crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00563-ip-10-171-10-108.ec2.internal.warc.gz | 874,861,539 | 20,452 | Q:
# How do you calculate a weighted average?
A:
Calculate the weighted average of a data set using the formula w1x1 + w2x2 + … wnxn. It involves multiplying each number by its weight and then summing up their products. You need the data values, or x, and the data value's weights, or w.
## Keep Learning
1. Determine the data values
Determine the value of each data in the set. These are referred to as "x" in the formula.
2. Find the product of w and x
Multiply each data value, x, by its weight, w.
3. Calculate the weighted average
Using the formula for finding a weighted average, find the sum of all the products of the numbers and their respective weights. This is the weighted average.
Sources:
## Related Questions
• A: To calculate the average of a set of values, add up all the numbers. The sum is then divided by the count of values.... Full Answer >
Filed Under:
• A: Population density tells you how crowded a certain area is, on average. To calculate, you need measurements of area, the population count and a calculator.... Full Answer >
Filed Under:
• A: A z-score represents the amount of standard deviations a value is away from the mean average. Z-scores are calculated with a probability formula that requi... Full Answer >
Filed Under:
• A: Mean, median and mode are different ways of determining the average from a set of numbers. Range gives the difference between the highest and lowest values... Full Answer >
Filed Under:
PEOPLE SEARCH FOR | 332 | 1,479 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.375 | 3 | CC-MAIN-2017-09 | longest | en | 0.886922 |
http://gmatclub.com/forum/at-state-university-the-average-arithmetic-mean-salary-of-105164.html?fl=similar | 1,481,129,058,000,000,000 | text/html | crawl-data/CC-MAIN-2016-50/segments/1480698542217.43/warc/CC-MAIN-20161202170902-00490-ip-10-31-129-80.ec2.internal.warc.gz | 121,730,236 | 57,464 | At State University, the average (arithmetic mean) salary of : GMAT Data Sufficiency (DS)
Check GMAT Club App Tracker for the Latest School Decision Releases http://gmatclub.com/AppTrack
It is currently 07 Dec 2016, 08:44
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# At State University, the average (arithmetic mean) salary of
Author Message
TAGS:
### Hide Tags
Director
Joined: 07 Jun 2004
Posts: 612
Location: PA
Followers: 5
Kudos [?]: 688 [0], given: 22
At State University, the average (arithmetic mean) salary of [#permalink]
### Show Tags
21 Nov 2010, 07:19
1
This post was
BOOKMARKED
00:00
Difficulty:
35% (medium)
Question Stats:
62% (01:35) correct 38% (00:48) wrong based on 47 sessions
### HideShow timer Statistics
At State University, the average (arithmetic mean) salary of philosophy department professors is $56,000 and the average annual salary of business department professors is$74,000. If at State University there are two departments, what is the average annual salary at State University?
(1) The two departments have a total of 42 professors.
(2) There are twice as many professors in the philosophy department than in the business department.
(A) Statement (1) ALONE is sufficient, but statement (2) is not sufficient.
(B) Statement (2) ALONE is sufficient, but statement (1) is not sufficient.
(C) BOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient.
(D) EACH statement ALONE is sufficient.
(E) Statements (1) and (2) TOGETHER are NOT sufficient.
Can some one explain the OA to me
[Reveal] Spoiler: OA
_________________
If the Q jogged your mind do Kudos me : )
Kaplan GMAT Instructor
Joined: 21 Jun 2010
Posts: 148
Location: Toronto
Followers: 44
Kudos [?]: 184 [1] , given: 0
### Show Tags
21 Nov 2010, 10:23
1
KUDOS
rxs0005 wrote:
At State University, the average (arithmetic mean) salary of philosophy department professors is $56,000 and the average annual salary of business department professors is$74,000. If at State University there are two departments, what is the average annual salary at State University?
(1) The two departments have a total of 42 professors.
(2) There are twice as many professors in the philosophy department than in the business department.
(A) Statement (1) ALONE is sufficient, but statement (2) is not sufficient.
(B) Statement (2) ALONE is sufficient, but statement (1) is not sufficient.
(C) BOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient.
(D) EACH statement ALONE is sufficient.
(E) Statements (1) and (2) TOGETHER are NOT sufficient.
Can some one explain the OA to me
Step 1 of the Kaplan Method for DS: Analyze the stem
We see that there are only two groups, and we know the average of each individual group. We're asked to find the overall average.
We think: to solve for a weighted average, we need the weighting of each group.
Step 2 of the Kaplan Method for DS: Evaluate the statements
(1) tells us the total number of professors, but not how many are in each group: insufficient, eliminate A and D.
(2) tells us that there are twice as many profs in the philosophy department; in other words, the philosophy profs make up 2/3 of all of the professors. We now know the weight of each group: sufficient, eliminate C and E; choose B!
If we actually wanted to solve (say this were a problem solving question), we'd do so as follows:
Overall average of a group = (avg group 1)(weight group 1) + (avg group 2)(weight group 2) + ... + (avg group n)(weight group n)
Overall salary average = (avg salary phil profs)(weight of phil profs) + (avg salary bus profs)(weight of bus profs)
Overall salary average = (56000)(2/3) + (74000)(1/3) = 112000/3 + 74000/3 = 186000/3 = 62000
(There are other, quicker, ways to solve using weighted averages, but applying the formula is the "textbook" approach.)
Whenever you review a question (and you should review every one you do!), you always want to finish by asking "what can I take away from this exercise?" Here are our takeaways from this question:
1) to solve for an overall average, you don't necessarily need the numbers in each group - knowing the weights of each group is sufficient;
2) the better you understand the concepts, the less math you'll need to do on test day; and
3) it's better to be a business prof than a philosophy prof!
Verbal Forum Moderator
Joined: 31 Jan 2010
Posts: 499
WE 1: 4 years Tech
Followers: 12
Kudos [?]: 137 [1] , given: 149
### Show Tags
21 Nov 2010, 11:53
1
KUDOS
statement 2 is sufficient . cos it gives the weightage
_________________
My Post Invites Discussions not answers
Try to give back something to the Forum.I want your explanations, right now !
Current Student
Status: Up again.
Joined: 31 Oct 2010
Posts: 541
Concentration: Strategy, Operations
GMAT 1: 710 Q48 V40
GMAT 2: 740 Q49 V42
Followers: 21
Kudos [?]: 398 [0], given: 75
### Show Tags
04 Feb 2011, 01:54
At State University, the average (arithmetic mean) salary of philosophy department professors is $56,000 and the average annual salary of business department professors is$74,000. If at State University there are two departments, what is the average annual salary at State University?
(1) The two departments have a total of 42 professors.
(2) There are twice as many professors in the philosophy department than in the business department.
_________________
My GMAT debrief: http://gmatclub.com/forum/from-620-to-710-my-gmat-journey-114437.html
Math Forum Moderator
Joined: 20 Dec 2010
Posts: 2021
Followers: 162
Kudos [?]: 1669 [0], given: 376
### Show Tags
04 Feb 2011, 03:20
Number of Philosophy Dept. professors: P
Total annual(assumed) salary of Philosophy Dept. professors: $$S_P$$
Number of Business Dept. professors: B
Total annual salary of Business Dept. professors: $$S_B$$
Total number of professors in the university: P+B
Given:
Average salary of Philopsophy Profs $$A_P=S_P/P=56000$$
Average salary of Business Profs $$A_B=S_B/B=74000$$
Q: Average annual salary of professors of State University= ?
It is a weighted average question. We need to know the strength of the professors from each department.
(1)
$$\frac{S_P+S_B}{P+B}$$
$$\frac{S_P+S_B}{42}$$
$$\frac{(P*A_P)+(B*A_B)}{42}$$
$$\frac{(P*56000)+(B*74000)}{42}$$
We don't know anything about P and B.
Not sufficient.
(2) P=2B
$$\frac{S_P+S_B}{P+B}$$
$$\frac{(2B*A_P)+(B*A_B)}{2B+B}$$
$$\frac{B(2*A_P+*A_B)}{3B}$$
$$\frac{(2*A_P+*A_B)}{3}$$
We know both A_P and A_B and thus we know the average annual salary of all professors in the State university.
Sufficient.
Ans: "B"
_________________
Director
Joined: 07 Jun 2004
Posts: 612
Location: PA
Followers: 5
Kudos [?]: 688 [0], given: 22
### Show Tags
28 Feb 2011, 15:39
At State University, the average (arithmetic mean) salary of philosophy department professors is $56,000 and the average annual salary of business department professors is$74,000. If at State University there are two departments, what is the average annual salary at State University?
(1) The two departments have a total of 42 professors.
(2) There are twice as many professors in the philosophy department than in the business department.
_________________
If the Q jogged your mind do Kudos me : )
SVP
Joined: 16 Nov 2010
Posts: 1672
Location: United States (IN)
Concentration: Strategy, Technology
Followers: 33
Kudos [?]: 507 [0], given: 36
### Show Tags
28 Feb 2011, 18:08
Let the number of philosophy department professors = x
And the number of business department professors = y
So we have to find (56,000 * x + 74,000 * y)/(x+y)
From 1 we have :
x + y = 42, not enough
From 2 we have :
x = 2y, so by substituting the same in the above equation, x (or y) can cancel out and we can derive the value.
_________________
Formula of Life -> Achievement/Potential = k * Happiness (where k is a constant)
GMAT Club Premium Membership - big benefits and savings
Director
Status: Impossible is not a fact. It's an opinion. It's a dare. Impossible is nothing.
Affiliations: University of Chicago Booth School of Business
Joined: 03 Feb 2011
Posts: 920
Followers: 14
Kudos [?]: 331 [0], given: 123
### Show Tags
28 Feb 2011, 20:25
Law of balances. It has to be B. total number is immaterial to this question. A is insuff.
Manager
Joined: 14 Dec 2010
Posts: 218
Location: India
Concentration: Technology, Entrepreneurship
GMAT 1: 680 Q44 V39
Followers: 2
Kudos [?]: 35 [0], given: 5
### Show Tags
02 Mar 2011, 01:00
+1 B. Classic weighted averages problem.
Re: Average DS [#permalink] 02 Mar 2011, 01:00
Similar topics Replies Last post
Similar
Topics:
The average (arithmetic mean) of a professor's salary at University M 1 18 Oct 2016, 22:53
What is the average (arithmetic mean) monthly salary of a worker in Qu 3 14 Sep 2016, 05:04
23 Last year the average (arithmetic mean) salary of the 10 9 16 Jul 2012, 03:51
1 Last year the average (arithmetic mean) salary of the 10 6 05 Feb 2012, 08:58
1 The average (arithmetic mean) monthly balance in Company X's 2 09 Mar 2010, 14:32
Display posts from previous: Sort by | 2,644 | 9,611 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.96875 | 4 | CC-MAIN-2016-50 | latest | en | 0.902156 |
https://codeofcode.org/lessons/suffix-trees-in-cpp/ | 1,695,775,880,000,000,000 | text/html | crawl-data/CC-MAIN-2023-40/segments/1695233510238.65/warc/CC-MAIN-20230927003313-20230927033313-00740.warc.gz | 193,428,897 | 42,815 | Back to Course
0% Complete
0/0 Steps
Lesson 24 of 48
# Suffix Trees in C++
##### Yasin Cakal
Suffix trees are a data structure used to store and search strings. They are an incredibly efficient and powerful tool for quickly searching for and identifying patterns in strings. In this article, we will discuss what a suffix tree is, why suffix trees are important, and how we can implement them in C++. We will also talk about the time and space complexity of suffix tree operations and explore some coding exercises to help you better understand the concept.
## What is a Suffix Tree?
A suffix tree is a data structure used to store and search strings. It is a compressed trie that stores all the suffixes of a given string, so that every root-to-leaf path spells out one suffix. A suffix tree is composed of internal nodes and leaves, with edges labeled by substrings of the original string. Suffix trees support a variety of operations, such as finding the longest common substring of two strings, finding all occurrences of a given pattern, and finding the shortest string that is not a substring of the given string.
The suffix tree is closely related to the suffix array, an array that lists the starting positions of all the suffixes of a given string in lexicographical order. One common way to construct a suffix tree is from the suffix array together with the longest-common-prefix (LCP) information; the resulting tree can be viewed as a compressed index over the same set of suffixes, and it allows for fast search times.
The main advantage of using a suffix tree is that, once built, it supports very fast lookups: queries such as substring search take time proportional to the pattern length rather than the text length. This makes it an ideal data structure for string searching and pattern matching.
## Time Complexity of Suffix Tree Operations
The time complexity of the operations on a suffix tree is determined by the size of the string. A suffix tree over a string of length m can be built in O(m) time with Ukkonen's algorithm, while the naive approach of inserting the suffixes one by one takes O(m²) time in the worst case.

Once the tree is built, searching for a pattern of length p takes O(p) time, independent of the length of the string, which is what makes the structure attractive for repeated queries.
## Space Complexity of Suffix Tree
The space complexity of a suffix tree is determined by the size of the string. An uncompressed suffix trie, in which every character of every suffix gets its own node, can take O(m²) space in the worst case, where m is the size of the string. A compressed suffix tree avoids this: it has at most about 2m nodes because edge labels are stored as pairs of indices into the original string, so the overall space is O(m).
## Implementing Suffix Trees in C++
Now that we have discussed the basics of suffix trees and their time and space complexity, let’s look at how we can implement them in C++.
First, let’s start by declaring a structure to represent a node in the suffix tree:
#include <vector>

struct Node {
    char character;               // character stored at this node
    int index;                    // index of the character in the string
    std::vector<Node*> children;  // child nodes
};
The character field is used to store the character at the node. The index field is used to store the index of the character in the string. The children field is used to store the child nodes of the node.
Next, let’s define a function to create a node:
Node* create_node(char character, int index) {
    Node* node = new Node;
    node->character = character;
    node->index = index;
    return node;
}
This function takes a character and an index as parameters and creates a node with the given character and index.
Next, let’s define a function to insert a node into the suffix tree:
void insert_node(Node* root, Node* node) {
    // If a child already holds this character, replace it.
    for (int i = 0; i < (int)root->children.size(); i++) {
        if (root->children[i]->character == node->character) {
            delete root->children[i];  // free the node being replaced
            root->children[i] = node;
            return;
        }
    }
    // Otherwise append the node as a new child.
    root->children.push_back(node);
}
This function takes a root node and a node to be inserted as parameters. If the root already has a child holding the same character, that child is replaced; otherwise the node is appended to the root node’s children vector.
Finally, let’s define a function to build the suffix tree:
void build_suffix_tree(std::string str, Node* root) {
    int n = str.length();
    for (int i = 0; i < n; i++) {
        Node* node = create_node(str[i], i);
        insert_node(root, node);
    }
}
This function takes a string and a root node as parameters and builds the suffix tree by creating and inserting nodes into the root node’s children vector.
## Conclusion
In this article, we discussed what a suffix tree is and how we can implement them in C++. We also discussed the time and space complexity of the suffix tree operations. Suffix trees are an incredibly efficient and powerful tool that can be used to quickly search and identify patterns in strings.
## Exercises
#### Write a program to build a suffix tree for the given string.
#include <iostream>
#include <string>
#include <vector>

struct Node {
    char character;
    int index;
    std::vector<Node*> children;
};

Node* create_node(char character, int index) {
    Node* node = new Node;
    node->character = character;
    node->index = index;
    return node;
}

void insert_node(Node* root, Node* node) {
    // Replace an existing child that holds the same character.
    for (int i = 0; i < (int)root->children.size(); i++) {
        if (root->children[i]->character == node->character) {
            delete root->children[i];
            root->children[i] = node;
            return;
        }
    }
    // Otherwise append the node as a new child.
    root->children.push_back(node);
}

void build_suffix_tree(std::string str, Node* root) {
    int n = str.length();
    for (int i = 0; i < n; i++) {
        Node* node = create_node(str[i], i);
        insert_node(root, node);
    }
}

int main() {
    std::string str = "hello";
    Node* root = create_node('\0', -1); // dummy root
    build_suffix_tree(str, root);
    return 0;
}
The program builds a suffix tree for the given string. The create_node() function creates a node with the given character and index, the insert_node() function inserts the node into the root node’s children vector, and the build_suffix_tree() function builds the suffix tree by creating and inserting nodes into the root node’s children vector.
#### Write a program to find the longest common substring in two strings using a suffix tree.
#include <iostream>
#include <string>
#include <vector>
struct Node {
char character;
int index;
std::vector<Node*> children;
};
Node* create_node(char character, int index) {
Node* node = new Node;
node->character = character;
node->index = index;
return node;
}
void insert_node(Node* root, Node* node) {
for (size_t i = 0; i < root->children.size(); i++) {
if (root->children[i]->character == node->character) {
root->children[i] = node;
return;
}
}
root->children.push_back(node);
}
void build_suffix_tree(std::string str, Node* root) {
int n = str.length();
for (int i = 0; i < n; i++) {
Node* node = create_node(str[i], i);
insert_node(root, node);
}
}
std::string longest_common_substring(Node* root, std::string str1, std::string str2) {
std::string longest = "";
for (int i = 0; i < str1.length(); i++) {
for (int j = 0; j < root->children.size(); j++) {
if (str1[i] == root->children[j]->character) {
std::string temp = "";
temp += str1[i];
int k = i+1;
while (k < str1.length() && k-i < str2.length()) {
if (str1[k] == str2[k-i]) {
temp += str1[k];
}
else {
break;
}
k++;
}
if (temp.length() > longest.length()) {
longest = temp;
}
}
}
}
return longest;
}
int main() {
std::string str1 = "hello";
std::string str2 = "world";
Node* root = new Node;
build_suffix_tree(str1, root);
std::string longest = longest_common_substring(root, str1, str2);
std::cout << longest << std::endl;
return 0;
}
The program finds the longest common substring in two strings using a suffix tree. The create_node() function creates a node with the given character and index, the insert_node() function inserts the node into the root node’s children vector, the build_suffix_tree() function builds the suffix tree by creating and inserting nodes into the root node’s children vector, and the longest_common_substring() function finds the longest common substring in two strings using the suffix tree. The program prints “lo” as the output.
#### Write a program to find all occurrences of a given string in a suffix tree.
#include <iostream>
#include <string>
#include <vector>
struct Node {
char character;
int index;
std::vector<Node*> children;
};
Node* create_node(char character, int index) {
Node* node = new Node;
node->character = character;
node->index = index;
return node;
}
void insert_node(Node* root, Node* node) {
for (size_t i = 0; i < root->children.size(); i++) {
if (root->children[i]->character == node->character) {
root->children[i] = node;
return;
}
}
root->children.push_back(node);
}
void build_suffix_tree(std::string str, Node* root) {
int n = str.length();
for (int i = 0; i < n; i++) {
Node* node = create_node(str[i], i);
insert_node(root, node);
}
}
std::vector<int> find_all_occurrences(Node* root, std::string str) {
std::vector<int> indices;
for (int i = 0; i < str.length(); i++) {
for (int j = 0; j < root->children.size(); j++) {
if (str[i] == root->children[j]->character) {
int k = i+1;
while (k < str.length() && (size_t)(k - i) < root->children[j]->children.size()) {
if (str[k] == root->children[j]->children[k-i]->character) {
k++;
}
else {
break;
}
}
if (k == str.length()) {
indices.push_back(root->children[j]->index);
}
}
}
}
return indices;
}
int main() {
std::string str = "hello";
Node* root = new Node;
build_suffix_tree(str, root);
std::string substring = "ll";
std::vector<int> indices = find_all_occurrences(root, substring);
for (int i = 0; i < indices.size(); i++) {
std::cout << indices[i] << std::endl;
}
return 0;
}
The program searches for all occurrences of a given string in a suffix tree. The create_node() function creates a node with the given character and index, the insert_node() function inserts the node into the root node’s children vector, the build_suffix_tree() function builds the suffix tree by creating and inserting nodes into the root node’s children vector, and the find_all_occurrences() function collects the starting indices of matches found in the tree.
#### Write a program to find the shortest string that is not a substring of a given string using a suffix tree.
#include <iostream>
#include <string>
#include <vector>
struct Node {
char character;
int index;
std::vector<Node*> children;
};
Node* create_node(char character, int index) {
Node* node = new Node;
node->character = character;
node->index = index;
return node;
}
void insert_node(Node* root, Node* node) {
for (size_t i = 0; i < root->children.size(); i++) {
if (root->children[i]->character == node->character) {
root->children[i] = node;
return;
}
}
root->children.push_back(node);
}
void build_suffix_tree(std::string str, Node* root) {
int n = str.length();
for (int i = 0; i < n; i++) {
Node* node = create_node(str[i], i);
insert_node(root, node);
}
}
std::string shortest_non_substring(Node* root, std::string str) {
std::string shortest = str;
for (int i = 0; i < str.length(); i++) {
for (int j = 0; j < root->children.size(); j++) {
if (str[i] == root->children[j]->character) {
std::string temp = "";
temp += str[i];
int k = i+1;
while (k < str.length() && (size_t)(k - i) < root->children[j]->children.size()) {
if (str[k] == root->children[j]->children[k-i]->character) {
temp += str[k];
}
else {
if (temp.length() < shortest.length()) {
shortest = temp;
}
break;
}
k++;
}
}
}
}
return shortest;
}
int main() {
std::string str = "banana";
Node* root = create_node('$', -1);
build_suffix_tree(str, root);
std::cout << "The shortest non-substring of " << str << " is " << shortest_non_substring(root, str) << std::endl;
return 0;
}
#### Write a program to find all the strings that are not substrings of a given string using a suffix tree.
#include <iostream>
#include <string>
#include <vector>
struct Node {
char character;
int index;
std::vector<Node*> children;
};
Node* create_node(char character, int index) {
Node* node = new Node;
node->character = character;
node->index = index;
return node;
}
void insert_node(Node* root, Node* node) {
for (size_t i = 0; i < root->children.size(); i++) {
if (root->children[i]->character == node->character) {
root->children[i] = node;
return;
}
}
root->children.push_back(node);
}
void build_suffix_tree(std::string str, Node* root) {
int n = str.length();
for (int i = 0; i < n; i++) {
Node* node = create_node(str[i], i);
insert_node(root, node);
}
}
std::vector<std::string> non_substrings(Node* root, std::string str) {
std::vector<std::string> found;
for (int i = 0; i < str.length(); i++) {
for (int j = 0; j < root->children.size(); j++) {
if (str[i] == root->children[j]->character) {
std::string temp = "";
temp += str[i];
int k = i+1;
while (k < str.length() && (size_t)(k - i) < root->children[j]->children.size()) {
if (str[k] == root->children[j]->children[k-i]->character) {
temp += str[k];
}
else {
found.push_back(temp);
break;
}
k++;
}
}
}
}
return found;
}
int main() {
std::string str = "banana";
Node* root = create_node('$', -1);
build_suffix_tree(str, root);
std::cout << "The non-substrings of " << str << " are: " << std::endl;
std::vector<std::string> results = non_substrings(root, str);
for (int i = 0; i < results.size(); i++) {
std::cout << results[i] << std::endl;
}
return 0;
}
https://socratic.org/questions/a-triangle-has-two-corners-with-angles-of-pi-12-and-pi-12-if-one-side-of-the-tri-3
Oct 16, 2017
Largest possible area **6.2506**
#### Explanation:
Three angles are $\frac{\pi}{12} , \frac{\pi}{12} , \frac{5 \pi}{6}$
$\dfrac{a}{\sin A} = \dfrac{b}{\sin B} = \dfrac{c}{\sin C}$
To obtain largest possible area,
Length 5 should be opposite to the angle with least value.
$\dfrac{5}{\sin \left(\frac{\pi}{12}\right)} = \dfrac{b}{\sin \left(\frac{\pi}{12}\right)} = \dfrac{c}{\sin \left(\frac{5 \pi}{6}\right)}$
Side b = 5.
Side $c = \dfrac{5 \cdot \sin \left(\frac{5 \pi}{6}\right)}{\sin \left(\frac{\pi}{12}\right)}$
Side $c = \dfrac{5 \cdot \sin \left(\frac{\pi}{6}\right)}{\sin \left(\frac{\pi}{12}\right)} = \dfrac{5}{2 \cdot \sin \left(\frac{\pi}{12}\right)}$
$c = 9.6593$
Area
$s = \frac{5 + 5 + 9.6593}{2} = 9.8297$
$A = \sqrt{s \left(s - a\right) \left(s - b\right) \left(s - c\right)}$
$= \sqrt{9.8297 \cdot 4.8297 \cdot 4.8297 \cdot 0.1704}$
Area $A = 6.2506$
Oct 16, 2017
Area = 6.25
#### Explanation:
Alternate method:
Area of $\Delta = \left(\frac{1}{2}\right) b h$
Given triangle is isosceles as two angles are $\frac{\pi}{12}$ each.
$\therefore h = 5 \cdot \sin \left(\frac{\pi}{12}\right)$
$\left(\frac{1}{2}\right) b = 5 \cdot \cos \left(\frac{\pi}{12}\right)$
Area $= 5 \cdot 5 \cdot \sin \left(\frac{\pi}{12}\right) \cdot \cos \left(\frac{\pi}{12}\right)$
$= \left(\frac{25}{2}\right) \cdot 2 \sin \left(\frac{\pi}{12}\right) \cos \left(\frac{\pi}{12}\right)$
$= \left(\frac{25}{2}\right) \sin \left(\frac{\pi}{6}\right) = \left(\frac{25}{2}\right) \left(\frac{1}{2}\right) = \frac{25}{4} = 6.25$
https://nethercraft.net/end-portal-calculator/
If you are looking for an end portal calculator, check the results below:
## 1. EnderVision: Calculator
https://weather.cod.edu/~anderson/endervision/calculator.php
Stronghold Location Calculator Point A: X: … Use the “Eye of Ender” and quickly center your crosshair exactly on it once it reaches its peak position, then take a …
## 2. end portal calculator – Hotel Esplanad
Stronghold positions for consoles may not be 100% accurate. However, in Creative, the player can build an end portal. For technical reasons, you need to know …
## 3. Nether Portal Calculator – MaximumFX
https://maximumfx.nl/portal/en/
Coordinate calculator: Overworld Nether portals. Overworld coords, Nether coords. X, Y, Z, X …
## 4. Minecraft End Portal Finder – No Mods – Omni Calculator
https://www.omnicalculator.com/other/end-portal-finder
Use two Eyes of Ender to triangulate the location of the nearest End Portal. Works for both Bedrock and Java editions.
## 5. Minecraft Portal Calculator – MavenSpun
https://mavenspun.com/games/minecraft/
When you have played long enough to be at a stage where you are building nether portals you can use this quick calculator to find where to build your return gate …
## 6. Minecraft Stronghold Finder
http://strongholdfinder.com/
… PC Minecraft that is designed to help you find the strongholds and end portals … It is good to use an open area so you can easily see where your Eye of Ender …
## 7. Minecraft Portal Calculator – When Life Gives You a SIGSEGV
https://www.wilgysef.com/mc-portal-calculator/
Minecraft Portal Calculator. Enter 3 coordinates to calculate the 4th coordinate: Overworld start coordinates: x: y: z: Overworld end coordinates: x: y: z:.
## 8. Nether Portal Calculator v1.0 by D3Phoenix
http://ilurker.rooms.cwal.net/portal.html
Choose a location for a portal in the Overworld and, without lighting it, build the frame. … Enter the coordinates into the Overworld to Nether calculator below. … Keep this in mind, as you might end up having to build a bunch of ladders in the …
## 9. end portal calculator – Olsen Communities
https://www.olsencommunities.com/docs/brhuhn.php?id=fc9571-end-portal-calculator
You can start by building the frame for your End Portal using 12 end portal frames. Nether. you can use this quick calculator to find where to build your return …
## 10. How to locate the EXACT location of an End Portal? : Minecraft
Hey, so I have been having trouble lately locating an End Portal. I found a Stronghold no problem, but finding the End Portal has been an issue. I …
## 11. Block Tools – Useful Tools for Minecraft Players
https://blocktools.deep-orbit.com/
Nether Calculator, Direction Finder, Block Path, Poison Potato Pack and more! … This is a great way to know the exact direction to travel when you know your start and end coordinates. … Find out where to place your Minecraft Nether portals.
## 12. 5 Ways to Find the End Portal in Minecraft – wikiHow
https://www.wikihow.com/Find-the-End-Portal-in-Minecraft
… cities in the sky. Before you can do this, however, you’ll need to find a rare End Portal… … Set the calculator to degrees, not radians. The x-coordinate of the …
https://www.edaboard.com/threads/design-of-microstrip-lowpass-filter.408752/
# Design of microstrip lowpass filter
#### okik
##### Newbie level 6
So I have a given circuit (link below) and my question is:
Is it possible to somehow transform that LC part (L1-Cp1) of the given circuit into microstrip?
Each LC in || has a magnitude of impedance that is equal but opposite phase at the resonant frequency. Thus the length of the 1/4 wavelength controls frequency and the width/depth ratio on gnd. plane controls impedance.
This is for HPF but shows all the variables for different common filters too.
Is this the only way to do it? I mean, I know the values: L = 20 nH, Cp = 1.9 pF, Cs = 1.2 pF, Z = 75 ohm. From those, the impedance can be calculated directly rather than taken from a table as was shown in the video.
Do you have a specific tolerance or transfer function for this "almost" Inverse Chebychev 5-pole 1 GHz LPF?
The microstrip will be recursive, unlike fixed lumped elements, so it is not a good fit in the bandstop region.
you are using the WRONG TOPOLOGY for a microstrip Lowpass Filter!
THIS will be much easier way to realize an elliptical function lowpass filter
Every lumped strip has an impedance of sqrt(L/C) with resistance. Every shunt capacitance has inductance.
So eventually it can start like your schematic with stripline and grow in complexity depending on your L/C ratios and stripline impedance.
For example, this:
Thank you, sure, this is a better way to make a microstrip LP filter.
I tried to build the project in CST from the given example, but my simulated S-parameters don't match the expected S-parameters, and everything looks right (schematic, lengths/widths of the microstrip). So what could be wrong? (I think the problem could be somewhere in CST.)
This is how someone like me would approach your design, although I know nothing about your "stepped impedance" filter.
It may have something to do with the assumptions. What happens if the stubs are put on alternate sides?
I don't know what substrate you used in the simulation shown in #9, but I was able to somehow replicate the S11 and S21 plot (a) using Alumina Dk=9.8 with thickness 0.8 mm.
Something is totally wrong with your CST model. The filter must have a through path at DC (0dB insertion loss), but your CST model blocks DC. There might be some gap in the layout.
Edit: Looking at your model screenshot I think the open stub is shorted to the simulation boundary, see arrow.
If I put the stubs on alternate sides, the result is still the same.
I don't know what else I should change.
How do I solve it?
Maybe a larger substrate is needed to avoid the short circuit?
EDIT: The open stub shorted to the simulation boundary was the problem, as you mentioned. I made the substrate larger (to create a gap above the stub) and it works!
Thank you so much for the advice.
I have obtained a very close result with Alumina Er=9.9, H=1.2 mm
Hello everyone, you helped me a lot before, and I have a request for help again.
I successfully repeated the example in chapter 5.1.3 from this book: HONG, Jia-Shen G.; LANCASTER, Michael J. Microstrip Filters for RF/Microwave Applications. You can see that example above in post #9. So I used this example as a guide in my own design of a microstrip filter. Everything seems to be fine, but one problem occurs, and that is what I want to ask you about: how do I compensate the unwanted reactance and susceptance present at the T-junction? I mean, relation (5.11) is used in the book, but I don't understand how to use it correctly to get correct S-parameter results.
Thanks in advance for any help.
What do you need to learn in order to do this?
The others can analyze this human radar detector.
I stay away from such structures, since the aspect ratio is so far off. The poor electrons do not know if it is a transmission line or a rectangular resonator! And in any event, the fringing capacitances would be a huge factor!
It certainly cannot be analyzed as a simple microstrip transmission line with an open-circuited end!
As an academic exercise, you CAN make structures like this, but they require a full electromagnetic simulation to figure out how they will react, and they probably then show unwanted higher-mode artifacts that make the stop-band performance spotty.
That's why this is better for parasitics
That is a patch antenna ...
https://praneshm.wordpress.com/2017/07/24/partial-differentiation/
• Introduction
So far, while studying calculus, we have dealt with functions of a single variable, i.e. ${y=f(x)}$. ${sin (x^2), ln \ x, e^{cos \ (tan \ x)}}$ are a few examples. Irrespective of their complexity, the variable ${y}$ always depended on the value of the independent variable ${x}$. We also defined the derivatives and integrals of ${f(x)}$ and studied a few applications of them.
More often than not, we encounter situations where a function ${f}$ needs more than 1 independent variable for its definition. Such functions are known as functions of several variables. e.g. a function of 2 variables is
${f(x,y) = sin (x) e^y \times xy^{3/2}}$
Thus, without knowing values of both ${x}$ and ${y}$ simultaneously, we cannot get a unique value of ${f(x,y)}$.
One can define a function of as many variables as one wants. (Of course, it should make some sense.) In many of the problems in mechanical engineering, the functions are of at the most 4 independent variables; viz. 3 space variables, ${(x,y,z)}$ and a time variable ${t}$.
The partial differentiation involves obtaining the derivatives of functions of several variables.
• Definition and Rules
Let ${z}$ be a function of 2 independent variables ${x}$ and ${y}$. To differentiate ${z}$ partially w.r.t ${x}$, we treat ${y}$ as a constant and follow the usual process of differentiation. Thus,
${\dfrac {\partial z}{\partial x} = \lim \limits_{\delta x \to 0} \dfrac {f(x + \delta x, y) - f(x,y)}{\delta x}}$
Similarly,
${\dfrac {\partial z}{\partial y} = \lim \limits_{\delta y \to 0} \dfrac {f(x, y+ \delta y) - f(x,y)}{\delta y}}$
Thus, the definition is similar to that of ordinary differentiation. The condition of existence of the limit is necessary.
Note that we use the letter ${\partial}$ for partial derivatives and the letter ${d}$ for ordinary derivatives.
The rules for differentiation of addition, subtraction, multiplication and division are the same as for ordinary differentiation.
• Derivatives of Higher Order
Having obtained the first order derivatives ${\dfrac {\partial z}{\partial x}}$ and ${\dfrac {\partial z}{\partial y}}$, we now define the second order derivatives, i.e.
${\frac {\partial}{ \partial x} \Big ( \frac {\partial z}{\partial x} \Big), \ \frac {\partial}{ \partial y} \Big ( \frac {\partial z}{\partial x} \Big), \ \frac {\partial}{ \partial x} \Big ( \frac {\partial z}{\partial y} \Big), \ \frac {\partial}{ \partial y} \Big ( \frac {\partial z}{\partial y} \Big)}$
For a function of 2 variables, four 2nd order derivatives are possible. These are sometimes written as
${\dfrac {\partial^2 z}{ \partial x^2} = z_{xx}, \ \dfrac {\partial^2 z}{\partial y \partial x} = z_{yx},\ \dfrac {\partial^2 z}{\partial x \partial y} = z_{xy}, \ \dfrac {\partial^2 z}{\partial y^2} = z_{yy}}$
If the function and its derivatives are continuous, then we have
${\dfrac {\partial^2 z}{\partial y \partial x}= \dfrac {\partial^2 z}{\partial x \partial y}}$
One can define derivatives of order ${> 2}$ by following the same procedure.
• Types of Problems (Crucial from exam point of view)
I) Based on the definition and the commutative property of partial differentiation
II) Based on the concept of composite functions (Mostly involve the relations between cartesian and polar coordinates)
• Homogeneous Functions (Already encountered in M II , 1st unit)
When the sum of the indices of the variables is the same for all terms of a function, the function is said to be homogeneous of degree equal to that sum.
${6x^3y^2 + x^5 - xy^4}$
is an example. (Degree ${= 5}$)
Note that each term must be explicitly of the form ${a x^m y^n}$. Thus, ${sin (6x^3y^2 + x^5 - xy^4)}$ is NOT a homogeneous function.
• Euler’s Theorem (by Leonhard Euler)
For a homogeneous function ${z=f(x,y)}$ of degree ${n}$,
${x \dfrac {\partial z}{\partial x} + y \dfrac {\partial z}{\partial y} = nz}$
As a consequence of this,
${x^2 \dfrac {\partial^2 z}{ \partial x^2} + 2xy \dfrac {\partial^2 z}{\partial x \partial y} + y^2 \dfrac {\partial^2 z}{ \partial y^2} = n (n-1)z}$
Similarly, if ${u =f(x,y,z)}$ is a homogeneous function of 3 independent variables of degree ${n}$, then
${x \frac {\partial u}{\partial x} + y \frac {\partial u}{\partial y} + z \frac {\partial u}{\partial z}= nu}$
• Total Derivatives
Consider a function ${z = f(x,y)}$. If it so happens that ${x}$ and ${y}$ themselves are functions of another variable ${t}$, then the total derivative of ${z}$ w.r.t. ${t}$ is defined as
${\dfrac {dz}{dt} = \dfrac {\partial z}{\partial x} \times \dfrac {dx}{dt} + \dfrac {\partial z}{\partial y} \times \dfrac {dy}{dt}}$
Thus, if we are given a function ${z = g(t)}$, we would differentiate it w.r.t. ${t}$, thus getting ${\dfrac {dz}{dt}}$. Instead, if ${z}$ is expressed as ${f(x,y)}$ and ${x= \phi (t)}$ and ${y = \psi (t)}$, then obtaining the total derivative of ${f(x,y)}$ will be equivalent to getting ${\frac {d}{dt} g(t)}$
• Applications
We will discuss the applications of partial differentiation in the next unit.
https://www.airmilescalculator.com/distance/dwc-to-ias/
Flight distance from Jebel Ali to Iași (Al Maktoum International Airport – Iași International Airport) is 2155 miles / 3468 kilometers / 1873 nautical miles. Estimated flight time is 4 hours 34 minutes.
## Map of flight path from Jebel Ali to Iași.
Shortest flight path between Al Maktoum International Airport (DWC) and Iași International Airport (IAS).
## How far is Iași from Jebel Ali?
There are several ways to calculate distances between Jebel Ali and Iași. Here are two common methods:
Vincenty's formula (applied above)
• 2155.080 miles
• 3468.265 kilometers
• 1872.713 nautical miles
Vincenty's formula calculates the distance between latitude/longitude points on the earth’s surface, using an ellipsoidal model of the earth.
Haversine formula
• 2154.971 miles
• 3468.090 kilometers
• 1872.619 nautical miles
The haversine formula calculates the distance between latitude/longitude points assuming a spherical earth (great-circle distance – the shortest distance between two points).
## Airport information
A Al Maktoum International Airport
City: Jebel Ali
Country: United Arab Emirates
IATA Code: DWC
ICAO Code: OMDW
Coordinates: 24°53′46″N, 55°9′41″E
B Iași International Airport
City: Iași
Country: Romania
IATA Code: IAS
ICAO Code: LRIA
Coordinates: 47°10′42″N, 27°37′14″E
## Time difference and current local times
The time difference between Jebel Ali and Iași is 2 hours. Iași is 2 hours behind Jebel Ali.
+04
EET
## Carbon dioxide emissions
Estimated CO2 emissions per passenger is 235 kg (519 pounds).
## Frequent Flyer Miles Calculator
Jebel Ali (DWC) → Iași (IAS).
Distance: 2155
Elite level bonus: 0
Booking class bonus: 0
### In total
Total frequent flyer miles: 2155
https://numbersworksheet.com/adding-subtracting-negative-and-positive-numbers-worksheets/
The Negative Numbers Worksheet is a great way to start teaching your children the concept of negative numbers. A negative number is any number that is less than zero. It can be added or subtracted. The minus sign indicates a negative number. You can also write negative numbers in parentheses. Below is a worksheet to help you get started. This worksheet has a range of negative numbers from -10 to 10. Adding Subtracting Negative And Positive Numbers Worksheets.
Negative numbers are numbers whose value is less than zero
A negative number has a value less than zero. On a number line, negative numbers lie to the left of zero and positive numbers to the right. A positive number may be written with a plus sign (+) before it, but the sign is optional; if a number is written without a sign, it is assumed to be positive.
## They are represented by a minus sign
In ancient Greece, negative numbers were not used. They were ignored, since Greek mathematics was based on geometrical ideas. When European scholars began translating ancient Arabic texts from North Africa, they came to understand negative numbers and accepted them. Today, negative numbers are represented by a minus sign. To learn more about the origins and history of negative numbers, read this post. Then, try these examples to see how negative numbers have been treated over time.
## They can be added or subtracted
As you might already know, positive and negative numbers are easy to add and subtract because the sign rules are simple. A negative number has the same absolute value as its positive counterpart but lies on the opposite side of zero. Negative numbers can still be added and subtracted just like positive ones, although they follow some special sign rules for arithmetic. You can also add and subtract negative numbers using a number line, applying the same rules for addition and subtraction as you do for positive numbers.
## They can be represented by a number in parentheses
A negative number can be represented by a number enclosed in parentheses. Inside a computer, the sign is handled by converting the value into its binary equivalent, and the two's complement is stored in the same place in memory. The stored bit pattern reads like a positive number even though it represents a negative value, so the sign convention (or the parentheses, on paper) must be kept in mind. If you have any questions about the meaning of negative numbers, you should consult a book on math.
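As a rough illustration of the two's-complement storage mentioned above, here is a small Python sketch; the 8-bit width is an assumption chosen for the example, not something the worksheet specifies:

```python
def twos_complement(value, bits=8):
    """Return the unsigned integer whose low `bits` bits encode `value`."""
    return value & ((1 << bits) - 1)  # masking produces the two's-complement pattern

encoded = twos_complement(-5)   # the bit pattern that stores -5 in 8 bits
pattern = format(encoded, "08b")
print(encoded, pattern)         # 251 11111011
```

Read back as an unsigned number the pattern looks positive (251), which is exactly the point made above.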
## They can be divided by a positive number
Negative numbers can be multiplied and divided like positive numbers. They can even be divided by other negative numbers. However, the results are not all alike: the first time you multiply a negative number by a positive number, you will get a negative number as a result. To produce the answer, you need to decide which sign your solution should have. A negative number is easier to recognize when it is written in brackets.
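The sign rules for multiplication and division can likewise be checked in a few lines of Python (the values are arbitrary examples):

```python
# Sign rules for multiplying and dividing negative numbers.
assert -6 * 2 == -12      # negative times positive gives a negative
assert -6 * -2 == 12      # negative times negative gives a positive
assert -12 / 3 == -4      # negative divided by positive gives a negative
assert -12 / -3 == 4      # negative divided by negative gives a positive
print("all sign rules hold")
```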
## Physics Forces Review
Definitions
• Newton's First Law - An object at rest tends to stay at rest and an object in motion tends to stay in motion at a constant velocity unless acted upon by an unbalanced force.
• Newton's Second Law - The acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and inversely proportional to the mass of the object
• Newton's Third Law - Whenever one object exerts a force on a second object, the second object exerts an equal and opposite force on the first.
Layman's Terms
• Newton's 1st Law - An object tends to stay at rest or stay in motion at the same speed, unless something else makes it do something else
• Newton's 2nd Law - F=ma
• Newton's 3rd Law - For every action there is an equal and opposite reaction
Forces Vocab Set
Old Motion Vocab Set
Formulas / Formulae
W = mg
FNET = ma
Ff = µW
FNET = Fobject - Ff
Δd = 0.5at² + Vit
Vf² = Vi² + 2aΔd
Vf = Vi + at
a = ΔV / t
VAVG = Δd / t
g ≈ 10 m/s² (more precisely, 9.8 m/s²)
m = mass
Ff = Friction Force
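Here is a small worked example in Python showing how several of these formulas fit together; the mass, force, and time are made-up practice values:

```python
# A 2 kg object starts from rest; a 10 N net force acts on it for 3 s.
m = 2.0       # mass (kg)
f_net = 10.0  # net force (N)
t = 3.0       # time (s)
v_i = 0.0     # initial velocity (m/s)

a = f_net / m                  # from FNET = ma
v_f = v_i + a * t              # from Vf = Vi + at
d = 0.5 * a * t**2 + v_i * t   # from Δd = 0.5at² + Vit

# Cross-check with Vf² = Vi² + 2aΔd
assert abs(v_f**2 - (v_i**2 + 2 * a * d)) < 1e-9

print(a, v_f, d)  # 5.0 15.0 22.5
```

Working a few problems like this, and checking the answer with a second formula, is also a good way to memorize the equations.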
For equations, sorry, you can't really have a Quizlet set for them, but here are some tips for memorizing equations. First, you can try writing down the equation several times. Also, you can try saying it out loud, or visualizing the shape of it. Lastly, this one works the best for me, try using the equation as much as possible in your practice problems so that you remember them.
Last edited by abq911 on Mon Dec 13, 2010 6:06 pm; edited 4 times in total
aqalieh95
Posts : 111
Points : 347
Reputation : 46
Join date : 2010-05-05
Age : 24
## Newton's Third Law
I know it's the same thing, but doesn't it make more sense to say
"For every action there is an equal and opposite reaction" ?
ncvaldivieso
Posts : 5
Points : 39
Reputation : 17
Join date : 2010-12-12
## Re: Physics Forces Review
Yea, I'll fix that
aqalieh95
Posts : 111
Points : 347
Reputation : 46
Join date : 2010-05-05
Age : 24
## The Flash Card
Flash cards for Motion: http://quizlet.com/3763512/motion-flash-cards/
Flash cards for Force: http://quizlet.com/3763038/force-flash-cards/
You can learn and quiz your self in those websites.
man987
Posts : 16
Points : 67
Reputation : 17
Join date : 2010-06-05
# A charge Q is distributed over two concentric hollow spheres of radii r and R (>r) such that the surface charge densities are equal. Find the potential at the common centre.
##### Updated On: 27-06-2022
Solution
Let q and q' be the charges on the inner and outer sphere.
As surface charge densities are equal
q/(4πr²) = q′/(4πR²)
or qR² = q′r²
Also, q + q′ = Q. This gives q = Q − q′.
Solving the two equations, we get q = Qr²/(R² + r²), q′ = QR²/(R² + r²)
Now potential at the centre is given by
Vc = q/(4πε₀r) + q′/(4πε₀R)
= Q(r + R)/(4πε₀(R² + r²))
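As a quick numerical sanity check of this result, the short Python sketch below compares the shell-by-shell sum with the closed-form answer; the charge and radii are arbitrary test values:

```python
import math

eps0 = 8.854e-12   # vacuum permittivity (SI)
Q = 1e-9           # total charge (C)
r, R = 0.1, 0.2    # radii (m), with R > r

# Split Q so that the surface charge densities are equal.
q = Q * r**2 / (R**2 + r**2)
qp = Q * R**2 / (R**2 + r**2)
assert math.isclose(q / (4 * math.pi * r**2), qp / (4 * math.pi * R**2))

# Potential at the common centre: sum of the two shell potentials.
Vc = q / (4 * math.pi * eps0 * r) + qp / (4 * math.pi * eps0 * R)

# Closed-form answer derived above.
formula = Q * (r + R) / (4 * math.pi * eps0 * (R**2 + r**2))
assert math.isclose(Vc, formula)
print(Vc)
```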
# 12.1.1: Wave Basics
Waves generally begin as a disturbance of some kind, and the energy of that disturbance gets propagated in the form of waves. We are most familiar with the kind of waves that break on shore, or rock a boat at sea, but there are many other types of waves that are important to oceanography:
• Internal waves form at the boundaries of water masses of different densities (i.e. at a pycnocline), and propagate at depth. These generally move more slowly than surface waves, and can be much larger, with heights exceeding 100 m. However, the height of the deep wave would be unnoticeable at the surface.
• Tidal waves are due to the movement of the tides. What we think of as tides are basically enormously long waves with a wavelength that may span half the globe (see section 11.1). Tidal waves are not related to tsunamis, so don’t confuse the two.
• Tsunamis are large waves created as a result of earthquakes or other seismic disturbances. They are also called seismic sea waves (section 10.4).
• Splash waves are formed when something falls into the ocean and creates a splash. The giant wave in Lituya Bay that was described in the introduction to this chapter was a splash wave.
• Atmospheric waves form in the sky at the boundary between air masses of different densities. These often create ripple effects in the clouds (Figure 1).
There are several components to a basic wave (Figure 2):
• Still water level: where the water surface would be if there were no waves present and the sea was completely calm.
• Crest: the highest point of the wave.
• Trough: the lowest point of the wave.
• Wave height: the distance between the crest and the trough.
• Wavelength: the distance between two identical points on successive waves, for example crest to crest, or trough to trough.
• Wave steepness: the ratio of wave height to length (H/L). If this ratio exceeds 1/7 (i.e. height exceeds 1/7 of the wavelength) the wave gets too steep, and will break.
There are also a number of terms used to describe wave motion:
• Period: the time it takes for two successive crests to pass a given point.
• Frequency: the number of waves passing a point in a given amount of time, usually expressed as waves per second. This is the inverse of the period.
• Speed: how fast the wave travels, or the distance traveled per unit of time. This is also called celerity (c), where
c = wavelength x frequency
Therefore, the longer the wavelength, the faster the wave.
Although waves can travel over great distances, the water itself shows little horizontal movement; it is the energy of the wave that is being transmitted, not the water. Instead, the water particles move in circular orbits, with the size of the orbit equal to the wave height (Figure 3). This orbital motion occurs because water waves contain components of both longitudinal (side to side) and transverse (up and down) waves, leading to circular motion. As a wave passes, water moves forwards and up over the wave crests, then down and backwards into the troughs, so there is little horizontal movement. This is evident if you have ever watched an object such as a seabird floating at the surface. The bird bobs up and down as the waves pass underneath it; it does not get carried horizontally by a single wave crest.
The circular orbital motion declines with depth as the wave has less impact on deeper water and the diameter of the circles is reduced. Eventually at some depth there is no more circular movement and the water is unaffected by surface wave action. This depth is the wave base and is equivalent to half of the wavelength (Figure 4). Since most ocean waves have wavelengths of less than a few hundred meters, most of the deeper ocean is unaffected by surface waves, so even in the strongest storms marine life or submarines can avoid heavy waves by submerging below the wave base.
When the water below a wave is deeper than the wave base (deeper than half of the wavelength), those waves are called deep water waves. Most open ocean waves are deep water waves. Since the water is deeper than the wave base, deep water waves experience no interference from the bottom, so their speed only depends on the wavelength:
c = √(gL/2π)

where g is gravity and L is wavelength in meters. Since g and π are constants, this can be simplified to:

c ≈ 1.25√L
Shallow water waves occur when the depth is less than 1/20 of the wavelength. In these cases, the wave is said to “touch bottom” because the depth is shallower than the wave base so the orbital motion is affected by the seafloor. Due to the shallow depth, the orbits are flattened, and eventually the water movement becomes horizontal rather than circular just above the bottom. The speed of shallow water waves depends only on the depth:
c = √(gd)

where g is gravity and d is depth in meters. This can be simplified to:

c ≈ 3.13√d
Intermediate or transitional waves are found in depths between ½ and 1/20 of the wavelength. Their behavior is a bit more complex, as their speed is influenced by both wavelength and depth. The speed of an intermediate wave is calculated as:
c = √[(gL/2π) tanh(2πd/L)]

which contains both depth and wavelength variables.
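The deep-, shallow-, and intermediate-water speed relationships described above can be sketched in Python; the example wavelength and depth are arbitrary, and g is taken as 9.8 m/s²:

```python
import math

g = 9.8  # gravitational acceleration (m/s²)

def deep_water_speed(L):
    """c = sqrt(gL / 2π), valid when depth > L/2."""
    return math.sqrt(g * L / (2 * math.pi))

def shallow_water_speed(d):
    """c = sqrt(gd), valid when depth < L/20."""
    return math.sqrt(g * d)

def intermediate_speed(L, d):
    """c = sqrt((gL / 2π) · tanh(2πd / L)), valid between the two regimes."""
    return math.sqrt((g * L / (2 * math.pi)) * math.tanh(2 * math.pi * d / L))

L = 100.0  # wavelength (m)
d = 500.0  # depth (m): deeper than L/2, so this is a deep-water wave

# In deep water the general formula converges to the deep-water one.
assert math.isclose(intermediate_speed(L, d), deep_water_speed(L), rel_tol=1e-6)

# Longer wavelength means a faster deep-water wave.
assert deep_water_speed(200.0) > deep_water_speed(100.0)

print(round(deep_water_speed(100.0), 2))  # 12.49
```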
This page titled 12.1.1: Wave Basics is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Paul Webb via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
# Depending on which scholar you consult, either Daniel Defoe's
Manager
Joined: 05 Jun 2009
Posts: 112
Kudos [?]: 288 [0], given: 4
Depending on which scholar you consult, either Daniel Defoe's
### Show Tags
28 Sep 2009, 00:49
Difficulty: (N/A). Question Stats: 68% (00:46) correct, 32% (00:41) wrong, based on 113 sessions.
Depending on which scholar you consult, either Daniel Defoe's Robinson Crusoe, Henry Fielding's Joseph Andrews, or Samuel Richardson's Pamela is believed to have been the first English novel ever written.
(A) is believed to have been the first English novel ever written.
(B) is believed as being the first English novel ever written.
(C) are the English novels believed to be the first written.
(D) are the English novels which were believed as the first written.
(E) are the first English novels ever believed to be written.
I know the is/are split; which of the first two is correct, and what clues point to the answer?
Math Forum Moderator
Joined: 02 Aug 2009
Posts: 4912
Kudos [?]: 5234 [0], given: 112
28 Sep 2009, 00:57
A
see what follows "or"...in this case it is singular 'Samuel Richardson’s Pamela' ... so 'is' is correct... believed to be is correct idiom.... so A stays
Manager
Joined: 01 Jul 2009
Posts: 219
Kudos [?]: 31 [0], given: 39
28 Sep 2009, 02:58
A. Since there is "either", then it's singular and we should choose the answers with "is". Since the option B has "being", which renders that option incorrect, I say it's A.
Manager
Joined: 21 May 2009
Posts: 135
Kudos [?]: 43 [0], given: 50
28 Sep 2009, 03:55
thanku for the explanation guys
choose A
Manager
Joined: 05 Jun 2009
Posts: 112
Kudos [?]: 288 [0], given: 4
28 Sep 2009, 05:47
thanks guys for precise and prompt replies
Manager
Joined: 05 Jun 2009
Posts: 112
Kudos [?]: 288 [0], given: 4
28 Sep 2009, 06:44
Don't have the OA, but it should be A
Senior Manager
Joined: 17 May 2010
Posts: 289
Kudos [?]: 54 [0], given: 7
GMAT 1: 710 Q47 V40
01 Jun 2011, 09:12
A. Presence of 'OR' means the sentence is singular. This rules out C,D and E. B has being which is wrong. So A.
Intern
Joined: 27 Jun 2011
Posts: 19
Kudos [?]: [0], given: 11
Location: Chennai
WE 1: 2.10
Re: Depending on which scholar you consult, either Daniel Defoe's
17 Jan 2012, 03:32
Can someone pls help me understand.. How Either is Plural in our context...
Thanks
e-GMAT Representative
Joined: 02 Nov 2011
Posts: 2231
Kudos [?]: 8800 [0], given: 328
Re: Depending on which scholar you consult, either Daniel Defoe's
17 Jan 2012, 09:28
Hi,
Depending on which scholar you consult, either Daniel Defoe’s Robinson Crusoe, Henry Fielding’s Joseph Andrews, or Samuel Richardson’s Pamela is believed to have been the first English novel ever written.
(A) is believed to have been the first English novel ever written.
(B) is believed as being the first English novel ever written.
(C) are the English novels believed to be the first written.
(D) are the English novels which were believed as the first written.
(E) are the first English novels ever believed to be written.
@navi19: Plural verb can be used with the idiom “either… or…” depending on the context of the sentence. Consider the following examples:
1. Either the teacher or the students are responsible for this indiscipline.
2. Either the students or the teacher is responsible for this indiscipline.
So, the noun that appears after "or" decides the number of the verb. If the noun following "or" is singular, then the verb should be singular, and vice-versa.
In the sentence in question, the correct verb to be used is certainly “is” because “Samuel Richardson’s Pamela” is a singular noun.
Between A and B, A is correct because it uses the correct idiom "believed to have been". The correct idiom is "believed to be" something.
Hope this helps.
Intern
Joined: 11 Jun 2015
Posts: 9
Kudos [?]: 3 [0], given: 11
Re: Depending on which scholar you consult, either Daniel Defoe's
02 Jul 2015, 13:19
There is a similar question in the Kaplan Books, that I don't understand:
Depending on which scholar you consult, Christopher Columbus, Leif Ericson, or the Chinese eunuch Zheng Ho is credited with being the first explorer from the Eurasian continent to have traveled to the New World by ship.
A) is credited with being the first explorer from the Eurasian continent to have traveled to the New World by ship
B) is credited to be the first explorer from the Eurasian continent to have traveled to the New World by ship
C) is credited to have been the first explorer from the Eurasian continent to have traveled to the New World by ship
D) are credited with being the first explorers from the Eurasian continent to have traveled to the New World by ship
E) are credited to be the first explorers from the Eurasian continent to have traveled to the New World by ship
I understand "is credited" is correct, so D and E are out. The OA states that the correct idiom is "credited with" (A), but the sentence still sounds wrong because it is written "credited with being". Can someone shed some insight?
Manager
Status: I am not a product of my circumstances. I am a product of my decisions
Joined: 20 Jan 2013
Posts: 131
Kudos [?]: 118 [1], given: 68
Location: India
Concentration: Operations, General Management
GPA: 3.92
WE: Operations (Energy and Utilities)
Re: Depending on which scholar you consult, either Daniel Defoe's
03 Jul 2015, 00:09
stephyw wrote:
There is a similar question in the Kaplan Books, that I don't understand:
Depending on which scholar you consult, Christopher Columbus, Leif Ericson, or the Chinese eunuch Zheng Ho is credited with being the first explorer from the Eurasian continent to have traveled to the New World by ship.
A) is credited with being the first explorer from the Eurasian continent to have traveled to the New World by ship
B) is credited to be the first explorer from the Eurasian continent to have traveled to the New World by ship
C) is credited to have been the first explorer from the Eurasian continent to have traveled to the New World by ship
D) are credited with being the first explorers from the Eurasian continent to have traveled to the New World by ship
E) are credited to be the first explorers from the Eurasian continent to have traveled to the New World by ship
I understand is credited is correct, so D and E are out. O/A states that that correct idiom is credited with (A) but the sentence still sounds wrong because it is written, "credited with being". Can someone shed some insight?
The correct idiom is "credited with".
e.g., your account is credited with $200.
Use of "being" is not always wrong.
Being can be used as a participle or a noun depending on context.
K. P. Panov. The orbital distances law in planetary systems. The Open Astronomy Journal, 2009, 2(1): 90-94.
• Author:
Journal: Astronomy and Astrophysics, Vol.1 No.2, 2013-04-30
Abstract: During the formation and evolution of the solar system, the nebula is assumed to have been distributed in the form of a disk, with the nebular material orbiting the sun in Keplerian motion. If a planetary embryo orbits at a distance R from the central body with a period T, then by Kepler's third law the nebular material at a distance of 1.5874R has an orbital period of exactly 2T. Therefore: 1) the locations where oppositions and superior conjunctions occur between the embryo at R and the nebular material at 1.5874R remain fixed; 2) when the angle between them is 180 degrees, the perturbing force exerted by the embryo at R on the nebula at 1.5874R always appears along the same radius vector; 3) over equal evolution times, the embryo at R and the nebula at 1.5874R undergo the greatest number of oppositions and superior conjunctions along the same radius vector. Under the combined, cumulative action of these three factors, the nebular disk develops a gap at 1.5874R and contracts toward the orbit at R, where the material is accreted by the embryo; consequently, the distance ratios of neighboring planets, and of neighboring regular satellites, oscillate around the constant 1.5874. The gaps in the rings of Saturn, Uranus, and Neptune were formed by the perturbations of satellites: if a ring gap lies at a distance R from the central body, a corresponding satellite exists at 1.5874R. From the exact expression of Kepler's third law, it is deduced that the distance law of planets and regular satellites is determined by three factors: 1) the constant 1.5874; 2) the orbital eccentricity of the planet or regular satellite (a larger eccentricity gives a larger distance ratio to the adjacent outer planet or satellite); 3) the mass of the planet or regular satellite (a larger mass places it farther from the inner neighbor and closer to the outer neighbor).
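For reference, the constant 1.5874 quoted in the abstract is simply 2^(2/3): by Kepler's third law (T² ∝ R³), the orbit whose period is exactly twice that of an orbit at R lies at 2^(2/3)·R. A quick check in Python:

```python
# The distance ratio at which Kepler's third law (T² ∝ R³) doubles the period.
k = 2 ** (2 / 3)

# Orbital period scales as R^(3/2), so the period ratio between k·R and R is:
period_ratio = k ** 1.5

print(round(k, 4), round(period_ratio, 6))  # 1.5874 2.0
```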
# Fractals
## Hidden Beauty Revealed in Mathematics
Featured in Answers Magazine
Did you know that amazing, beautiful shapes have been built into numbers? Believe it or not, numbers like 1, 2, 3, etc., contain a “secret code”—a hidden beauty embedded within them. Numbers have existed from the beginning of creation, yet researchers have only recently discovered the hidden shapes that the Lord placed within them.1 Such beauty defies a secular explanation but confirms biblical creation.
The strange shape in Figure 1 is a sort of “map.” Most maps that we think of are representations of something physical, like a roadmap or a map of a country. But the map in Figure 1 does not represent a physical object; instead it represents a set of numbers. In mathematics, the term “set” refers to a group of numbers that have a common property. For example, there is the set of positive numbers (4 and 7 belong to this set; -3 and 0 do not).
A few decades ago, researchers discovered a very strange and interesting set called “the Mandelbrot set.”2 Figure 1 is a map (a plot) that shows which numbers belong to the Mandelbrot set.
### What do these images mean?
Figures 1 & 2 (click to enlarge)
A “set” is a group of numbers that all have a common property. For example, the numbers 4 and 6 are part of the set of even numbers, whereas 3 and 7 do not belong to that set. The Mandelbrot set is a group of numbers defined by a simple formula which is explained in the In-Depth box in this article. Some numbers belong to the Mandelbrot set, and others don’t.
Figure 1 is a plot—a graph that shows which numbers are part of the Mandelbrot set. Points that are black represent numbers that are part of the set. So, the numbers, -1, -1/2, and 0 are part of the Mandelbrot set. Points that are colored (red and yellow) are numbers that do not belong to the Mandelbrot set, such as the number 1/2. Although the formula that defines the Mandelbrot set is extremely simple, the plotted shape is extremely complex and interesting. When we zoom in on this shape, we see that it contains beautiful spirals and streamers of infinite complexity. Such complexity has been built into numbers by the Lord.
The Mandelbrot set (Figure 1) is infinitely detailed. In Figure 2, we have zoomed in on the “tail” of the Mandelbrot set. And what should we find but another (smaller) version of the original. This new, smaller Mandelbrot set also has a tail containing a miniature version of itself, which has a miniature version of itself, etc.—all the way to infinity.
The way to find if a number belongs to the Mandelbrot set is to put it through a particular formula (the details are shown in the In-Depth box in this article). In this way, we can check every possible number to see if it belongs to the Mandelbrot set, and then plot the results on a graph. We color the point black if it does belong to the Mandelbrot set; we give it a different color if it does not. For example, in Figure 1 we can see that the numbers 0 and -1 are part of the Mandelbrot set, whereas the number 1/2 is not.
Evolution cannot account for fractals. These shapes have existed since creation and cannot have evolved since numbers cannot be changed.
The Mandelbrot set is a very complex and detailed shape; in fact it is infinitely detailed. If we zoom in on a graphed piece of the Mandelbrot set, we see that it appears even more complicated than the original. In Figure 2, we have zoomed in on the “tail” of the Mandelbrot set. And what should we find but another (smaller) version of the original; a “baby” Mandelbrot set is built into the tail of the “parent.” This new, smaller Mandelbrot set also has a tail containing a miniature version of itself, which has a miniature version of itself, etc.—all the way to infinity. The Mandelbrot set is called a “fractal”3 since it has an infinite number of its own shape built into itself.
In Figure 3, we have zoomed into a region called the “Valley of Seahorses.” By zooming in on one of these “seahorses” we can see that it is a very complex spiral (see Figure 4). If we continue to zoom in, the order and beauty continue to increase as shown in Figures 5 and 6. As we zoom in yet again, we see in Figure 7 another “baby” version of the original Mandelbrot set at the center of the intersecting spirals; it appears virtually the same as the original shape, but it is 5 million times smaller.
Where did this incredible organization and beauty come from? Some might say that a computer produced this organization and beauty. After all, a computer was used to produce the graphs in the figures. But the computer did not create the fractal. It only produced the map—the representation of the fractal. A graph of something is not the thing itself, just as a map of the United States is not the same thing as the United States. The computer was merely a tool that was used to discover a shape that is an artifact of the mathematics itself.4
God alone can take credit for mathematical truths, such as fractals. Such transcendent truths are a reflection of God’s thoughts. Therefore when we discover mathematical truths we are, in the words of the astronomer Johannes Kepler, “thinking God’s thoughts after Him.” The shapes shown in the figures have been built into mathematics by the Creator of mathematics. We could have chosen different color schemes for the graphs, but we cannot alter the shape—it is set by God and His nature.
Evolution cannot account for fractals. These shapes have existed since creation and cannot have evolved, since numbers cannot change—the number 7 will never be anything but 7. But fractals are perfectly consistent with biblical creation. The Christian understands that there are transcendent truths because the Bible states many of them.5 A biblical creationist expects to find beauty and order in the universe, not only in the physical universe,6 but in the abstract realm of mathematics as well. This order and beauty is possible because there is a logical God who has imparted order and beauty into His universe.
### Infinite Complexity?
This sequence of images (Figures 3–7) shows what happens as we continually zoom in on a very small region of the Mandelbrot set. We start by zooming in on the highlighted region of the Mandelbrot set called the “Valley of Seahorses” (Figure 3). By zooming in on one of these “seahorses” we can see that it is a very complex spiral (Figure 4). We continue to zoom in (the region is indicated by the grayscale inset) in Figures 5, 6 and 7. Figure 7 shows a “baby” Mandelbrot set; it is virtually identical to the original shape, but it is 5 million times smaller.
### In-Depth
The formula for the Mandelbrot set is zₙ₊₁ = zₙ² + c. In this formula, c is the number being evaluated, and z is a sequence of numbers (z₀, z₁, z₂, z₃, …) generated by the formula. The first number z₀ is set to zero; the other numbers will depend on the value of c. If the sequence of zₙ stays small (|zₙ| ≤ 2 for all n), c is then classified as being part of the Mandelbrot set. For example, let's evaluate the point c = 1. Then the sequence of zₙ is 0, 1, 2, 5, 26, 677, … . Clearly this sequence is not staying small, so the number 1 is not part of the Mandelbrot set. The different shades/colors in the figures indicate how quickly the z sequence grows when c is not a part of the Mandelbrot set.
Complex numbers are also evaluated. A complex number contains a "real" part and an "imaginary" part; the real part is positive, negative, or zero, and the imaginary part involves the square root of a negative number. By convention, the real part of the complex number (RE[c]) is the x-coordinate of the point, and the imaginary part (IM[c]) is the y-coordinate. So, every complex number is represented as a point on a plane. Many other formulae could be substituted and would reveal similar shapes.
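The membership test described above can be sketched in a few lines of Python. The 50-iteration cap is my own choice for illustration (real renderers iterate far more and color points by how fast they escape):

```python
def in_mandelbrot(c, max_iter=50):
    """Return True if c appears to be in the Mandelbrot set.

    Iterates z -> z*z + c starting from z = 0 and reports whether the
    sequence stays bounded (|z| <= 2) for max_iter steps.
    """
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # the sequence has escaped; c is not in the set
            return False
    return True

print(in_mandelbrot(1))    # False: the sequence 0, 1, 2, 5, 26, 677, ... blows up
print(in_mandelbrot(-1))   # True: the sequence 0, -1, 0, -1, ... stays bounded
```

Python's built-in complex numbers (e.g. `in_mandelbrot(0.25 + 0.1j)`) make the same test work for points off the real axis.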
## Footnotes
1. We don’t normally think of God creating numbers because they are abstract, not physical. But of course, all things were made by God (John 1:3), even the abstract things.
2. Named after its discoverer Benoit Mandelbrot.
3. The term “fractal” was coined by Benoit Mandelbrot in the 1970s. A fractal contains an infinite number of copies of itself. In some fractals, the copies are exactly the same as the original. However, in other cases (such as the Mandelbrot set), they are slightly different.
4. It could be said that we selected the formula that generates the Mandelbrot set. Although this formula defines the set, it did not create it or its complexity. The Mandelbrot set existed long before humans discovered it. Moreover, many other formulae also reveal this complexity and beauty of numbers. So, the principle does not hinge on the exact formula. The complexity and beauty are built into mathematics itself.
5. Such as laws of morality.
6. The physical universe also contains “fractals” such as snow-flakes. This is not surprising since nature is built on mathematical principles. As such, physical reality mimics the nonphysical world of mathematics. However, unlike pure mathematical fractals, physical fractals (like crystals, clouds, etc.) do not repeat forever since they are comprised of atoms. | 2,242 | 9,520 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.28125 | 4 | CC-MAIN-2023-50 | latest | en | 0.934995 |
# How To Write A Quadratic Equation In Standard Form
Everyone learns (and some readers maybe still remember) the quadratic formula. It's a pillar of algebra and allows you to solve equations like Ax² + Bx + C = 0. But just because you've used it doesn't mean you know how to come up with the formula itself. It's a bear to derive, so the vast majority of us simply memorize it. A Carnegie Mellon mathematician named Po-Shen Loh didn't expect to find a new way to derive the solution when he was reviewing math materials for middle school use to make them easier to understand. After all, people have been solving that equation for about 4,000 years. But that's exactly what he did.

Before we look at the new solution, let's talk about why you'd want to solve quadratic equations. They are used in many contexts. In ancient times you might use them to determine how much extra crop to grow to cover tax payments without eating into the crop you needed to subsist on. In physics, they can describe motion. There's seemingly no end to how many things you can describe with a quadratic equation.

The Babylonians, in particular, would solve simultaneous equations to find the roots of a quadratic. Egyptian, Greek, Indian, and Chinese peoples used graphical methods to solve the equations. The entire history is a bit much to get into, but it is still a great read. For this article, let's dig into how the new derivation was discovered.
So what's the method? You can watch Loh explain it himself in the video below. Assume you have a generic quadratic equation that, for whatever reason, looks like this:

x² - 6x + 3 = 0

In this case, A=1, B=-6, and C=3. Note that for Loh's method to work, A should equal 1, but if it doesn't, you can always divide both sides by A to make that true. For example, consider this equation:

3x² - 18x + 9 = 0

It will have the same roots as the first one because if you divide both sides by 3, you get the original equation.
To factor a polynomial into two binomials we can use FOIL (First/Outer/Inner/Last). So we know that:

x² - 6x + 3 = (x - S)(x - R) = 0

where we just made up S and R. They exist, but we don't know what they are yet. However, for the answer to be zero, you can see that the equation will be zero when x=S or x=R, so S and R must be the roots of the equation.

It sounds silly, but let's multiply the binomials back out into another polynomial. Remember FOIL:

(x - S)(x - R) = x² - (S + R)x + SR

If you look at that for a second, you will probably realize that S times R must be 3. We also know they add up to 6. At this point, you could probably just guess the roots, but let's make it formal.

The roots of a polynomial like this will be in the form U±z. So if S=U+z and R=U-z, the only way to make (U+z)+(U-z) equal 6 is if U is 6/2. Since B is -6 in this example, we can deduce that U must be -B/2 in the general case.

So we now know the roots are -B/2+z and -B/2-z. We know that B is -6. We also know that multiplying those roots together must give C (3, for our example). So we can write:

(-B/2 + z)(-B/2 - z) = C
Or:

(3 + z)(3 - z) = 3

Do the FOIL method again and get:

9 - 3z + 3z - z² = 3

The middle terms will always cancel out, so you get:

9 - z² = 3

Subtract 3 from both sides and add z² to both sides:

z² = 6

Since the roots must be -B/2±z, we know that our two roots are 3+√6 and 3-√6.

The point is, none of this was hard to remember or work out. Just remember that you rewrite the zero as (x-S)(x-R) and the rest follows logically.
The only real problem is that not many quadratic equations have A=1. For example, suppose you throw a ball straight up from 5 meters above the ground with a velocity of 15 m/s. We want to know when the ball will hit the ground.

Gravity is going to pull down on the ball at about 4.9 meters per second squared (gravity accelerates things at about -9.8 m/s²). The physics formula is (at²)/2, so we get a first term of about -4.8t². The second term will represent the velocity of the ball, which is simply 15t. We also started 5 meters above the ground, so we add 5 and wind up with:

-4.8t² + 15t + 5

This equation will tell you where the ball is, assuming you throw it at t=0. We want to know when it hits the ground, so that will be one of the roots:

-4.8t² + 15t + 5 = 0

To use the new method, just divide it all through by -4.8 to get:

t² - 3.125t - 1.04 = 0

So now B=-3.125 and C=-1.04. We know the roots will be -B/2±z, or 1.5625±z, and that z² will equal (1.5625)² + 1.04.

That's about 3.48, and its square root is about 1.866. The roots, then, are 1.5625±1.866. Working that out, we get -0.30 and 3.4285. The negative root is nonsense in this case, but the ball will land about 3.5 seconds after you throw it.

If you don't trust our work, ask Wolfram Alpha or plug 3.4285 back into the original formula. Wolfram says 3.43, and checking our own work gives 0.005 meters at that time, so given that I rounded a few times, that's pretty close. Besides, I'm ignoring things like air resistance. This specific calculator says 3.36 seconds, which is still pretty close.
If you prefer things in mathematical steps, here you go:
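The original article showed the symbolic steps as an image; reconstructed from the derivation above (with A = 1 throughout), they amount to:

```latex
\begin{aligned}
x^2 + Bx + C &= 0, \qquad x = -\tfrac{B}{2} \pm z \\
\left(-\tfrac{B}{2} + z\right)\left(-\tfrac{B}{2} - z\right) &= C \\
\tfrac{B^2}{4} - z^2 &= C \\
z &= \sqrt{\tfrac{B^2}{4} - C} \\
x &= -\tfrac{B}{2} \pm \sqrt{\tfrac{B^2}{4} - C}
\end{aligned}
```

which is the familiar quadratic formula specialized to A = 1.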
We get it. Many people may have figured this out before. But if they did, they probably forgot to share it with anyone in any durable form. If you read the actual paper, you'll see how easy it is to symbolically derive the "old" general formula. Compare that to the traditional method: if you read the proof, it seems simple enough, but go back in a few weeks and try to work it out yourself. Not so easy. This is also much easier to remember, even if you don't want to derive it every time.

Of course, making A=1 is part of the trick. It is well known, for example, that the product of the roots of a quadratic is C/A. Since we know A=1, it follows that the product of S and R is also C, using this method. Normalizing A to 1 is also an old trick, and the result is sometimes called a reduced quadratic equation. However, it doesn't look like anyone put all the pieces together until now.

If you don't like Wolfram Alpha, there's always Mathics (hint: use Solve[-4.8t^2+15t+5==0,t]). There are also purpose-built calculators. Or just bite the bullet and keep everything in a Jupyter Notebook.
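In that spirit, here is a minimal Python sketch of the method (the function name is mine; it assumes the roots are real):

```python
import math

def solve_quadratic_loh(a, b, c):
    """Solve ax² + bx + c = 0 (real roots assumed) with Po-Shen Loh's method."""
    B, C = b / a, c / a        # normalize so the equation is x² + Bx + C = 0
    u = -B / 2                 # the two roots average to -B/2, so they are u ± z
    z = math.sqrt(u * u - C)   # (u + z)(u - z) = C  =>  z² = u² - C
    return u + z, u - z

# The ball-toss example from the article: -4.8t² + 15t + 5 = 0
hi, lo = solve_quadratic_loh(-4.8, 15, 5)
print(round(hi, 4), round(lo, 4))  # prints: 3.4288 -0.3038
```

For complex roots, `math.sqrt` would need to become `cmath.sqrt`; the algebra is otherwise unchanged.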
# Sun constantly converts mass into energy, will this cause its gravity to decrease?
If the sun is constantly converting the mass into energy, then will its gravitational field continue decreasing?
• You might be interested in this question astronomy.stackexchange.com/questions/18539/… -- the answer links to a paper which describes how comet orbits change as the solar wind carries mass away from the Sun; it turns out that the change in orbits due to solar mass changing are pretty small compared to other effects on orbits. Commented Feb 25, 2020 at 18:22
• A few points about relativity that don't seem to have been directly addressed by the existing answers: (1) Both photons and neutrinos are effectively massless (energy $\gg$ mass). (2) The source of gravity is not mass, nor is it mass+energy; it's the stress-energy tensor. (3) If the sun were only converting massive particles into massless particles such as photons, and retaining those massless particles, then its stress-energy tensor would not change. (The stress-energy is locally conserved.) So what matters is the rate at which mass-energy is escaping the sun.
– user15381
Commented Feb 27, 2020 at 23:40
If the sun is constantly converting the mass into energy, then will its gravitational field go on decreasing?
It's a very interesting question and the answer is yes!
The solar constant gives the mean solar radiation in electromagnetic waves (mostly visible and near-infrared light), and I'll answer based on that.
While the conversion of matter to energy in the Sun's core represents a loss of matter, it turns out that that energy (trapped in the Sun and slowly diffusing towards the surface) has the same gravitational attraction as the matter it came from until it actually escapes the Sun!

There is some prompt mass-energy loss via neutrinos and it's significant, perhaps several hundred keV per neutrino; I simply don't know the number yet. I'll ask a separate question about it. I'm guessing that losses due to the stellar wind are small, but I'll update here as soon as the following is answered:
update: the answer there is that loss via neutrinos is only about 2.3% of the radiative loss, and on average loss via solar wind and coronal mass ejections is about 4E+16 kg/year, or about another 30% relative to the radiative loss described below.
The value $$I$$ is about 1360 Watts per square meter at $$R$$ = 1 AU which is about 150 million kilometers or 150 billion meters. So the total energy lost per second $$P$$ is
$$P = 4 \pi R^2 I$$
Taking the time derivative of $$E = m c^2$$ we get
$$\frac{dE}{dt} = P = \frac{dm}{dt} c^2$$
so
$$\frac{dm}{dt} = \frac{1}{c^2} \ 4 \pi R^2 I$$
That means that the value of the mass that we use to calculate the Sun's gravitational attraction changes by about 4.3E+09 kilograms per second, or 1.3E+17 kilograms per year.
The Sun's current mass is about 2.00E+30 kilograms, so this effect changes it by only a very tiny fraction per year, about 6.7E-14. Over the age of the Earth of 4.5 billion years, that's 3E-04, or about 0.03%, if the Sun's output were constant. It has probably changed over this time of course, so this is just a rough estimate.
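The arithmetic above can be checked in a few lines (constants rounded to the values quoted in this answer):

```python
import math

I = 1360.0        # solar constant at 1 AU, W/m^2
R = 1.5e11        # 1 AU in meters ("150 billion meters")
c = 3.0e8         # speed of light, m/s
M_sun = 2.0e30    # solar mass, kg

P = 4 * math.pi * R**2 * I     # total radiated power, W
dm_dt = P / c**2               # radiative mass loss, kg/s
per_year = dm_dt * 3.15e7      # kg per year (~3.15e7 s per year)
fraction = per_year / M_sun    # fraction of the Sun's mass lost per year

print(f"{dm_dt:.1e} kg/s  {per_year:.1e} kg/yr  {fraction:.1e}/yr")
```

This reproduces the ~4.3E+09 kg/s, ~1.3E+17 kg/yr, and ~6.7E-14/yr figures quoted above.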
Thanks to @S.Melted's answer for clarifying this.
The Earth feels no torque from any force during this (the force from radiation is radial), which means its angular momentum is conserved. This means $$R_1 v_1 = R v$$ $$R_1 \sqrt{\frac{GM_1}{R_1}} = R \sqrt{\frac{GM}{R}}$$ $$M_1 R_1 = M R$$ We can see the Earth's orbital radius would change by a factor of 0.03% as well (M1 and M are solar masses).
• This answer is great imo, I just added the orbital radius part, hope you don't mind Commented Feb 25, 2020 at 9:35
• @Tosic not at all! But until I can stop and read it and think about it I'll label it as an edit. Thanks!
– uhoh
Commented Feb 25, 2020 at 9:37
• @Tosic If I read your edit right, decreases in the mass of the Sun mean Earth's orbital radius will increase, right? Commented Feb 25, 2020 at 11:49
• @Bilkokuya if $m$ is the mass of the Sun at some time, and the mass is $m - \Delta m$ one year later, then the fraction $\Delta m / m$ is 6.7E-14. $\Delta m$ = 6.7E-14 $m$. It's the fraction of the total mass that's lost in one year.
– uhoh
Commented Feb 25, 2020 at 13:22
• @James which is of course the reason I've asked exactly that question and have already indicated that I'll edit and update once that becomes available.
– uhoh
Commented Feb 25, 2020 at 13:26
I wanted to expand a bit on a point @uhoh made. (I would have made this a comment, but lack the reputation). The Sun is not converting mass to energy. The Sun is converting matter (not mass) into other forms of energy, such as light. As you noted, the energy still has gravitational attraction (i.e. mass).
If you use nuclear fuel and afterwards collect all the matter, you'll find that you have less. Some has been converted to other forms of energy and radiated away. However, if you had an impenetrable box that contained all forms of energy and use the fuel, you would find that mass is the same.
To expand, if you warm a cup of coffee, that same cup of coffee now weighs more. It has gained the mass of the energy of the heat it now contains.
• I don't know why all the hair-splitting over mass vs. matter. Any matter ("frozen energy", if you will) has mass, which is convertible to and from energy. In Einstein's famous equation $E=mc^2$, my understanding is that "m" is mass, measured in something like kilograms. Commented Feb 27, 2020 at 1:09
• @PhilPerry It's rest mass, which isn't something you could measure directly unless the object was completely still. The full equation is (from memory) $E^2 = m^2 c^4 + c^2 p^2$, which takes into account momentum too. Commented Feb 27, 2020 at 6:40 | 1,514 | 5,802 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.765625 | 3 | CC-MAIN-2024-33 | latest | en | 0.961582 |
This is an archived post. You won't be able to vote or comment.
[–] 20 points (14 children)
It's $100 for whoever doesn't want to count it.
[–] 8 points (10 children)
actually you don't have to count. see the three seemingly random letters inside the play area? N, O, and H? you can use those letters to tell what you won. in this instance, HON = $100
[–] 1 point (0 children)
Interesting, I had no idea! That's quite neat actually.
[–][S] 1 point (4 children)
You just blew my mind. Thanks!
[–] 1 point (3 children)
no prob. back when i was in high school i worked at a convenience store, this was a handy trick because they didn't have the scanners back then, you had to process them manually
[–] 1 point (2 children)
I miss when convenience stores processed things manually... Feels so informal with all the machines and whatnot these days.
[–] 1 point (1 child)
yeah but it was definitely a security issue. i gotta confess i kinda cheated the system a bit back then for some extra cash
[–] 5 points (0 children)
I lost my old job because i was replaced by a machine.
...
I was a microwave oven...
[–] 1 point (0 children)
Happy cakeday.
[–] 1 point (0 children)
The higher dollar amounts are three random letters btw.
[–] 0 points (1 child)
I would like to know more
[–] 0 points (0 children)
really you just gotta be familiar with the codes. some are obvious: ONE = $1, TWO = $2, FIV = $5, etc
[–] 1 point (1 child)
TL; AC;
[–] 0 points (0 children)
Me too :(
[–] 1 point (0 children)
2nd Chance Drawing! 2nd Chance Drawing!! Scratch it!
[–][S] 0 points (0 children)
The second chance is: if you don't win, you get a code to enter into a drawing for $500k.
[–] 0 points (3 children)
too bad it wasn't 20 x 100,000. But saying that, you won $100 and that is about 20x what I have ever won.
[–] 1 point (2 children)
This says you have a chance to win 20 different times, not a 20x prize. Notice the 20 different boxes?
[–] 0 points (1 child)
But couldn't it have been 20 100,000 prizes? I know it is not likely but if it were random, it could have somehow hit 20 100,000 spots.
[–] 1 point (0 children)
[–] 0 points (0 children)
How much was the ticket?
[–][S] 0 points (0 children)
The ticket was only $10. I hate to admit it, but I have probably spent over $100 this month on these. Your chances of winning are 1 in 3.54.
[–] 0 points (0 children)
[–] -2 points (1 child)
Scratch the "Void If Removed" so we can find out how this happened!
[–] -1 points (0 children)
Dick. | 824 | 2,757 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.5625 | 3 | CC-MAIN-2017-43 | latest | en | 0.938426 |
## Listing all results (102)
### Wavy Edges
Due to problems in the manufacture of tinplate coils, the edge of the strip can be slightly longer than the centre. This causes a 'wave' on the wall of the coil but can be rectified by differentially stretching the strip to make the edges flat. Students are required to apply Pythagoras' theorem to find the radius...
### Tin Can Design
Tin cans come in a variety of shapes and sizes. In this activity students consider the net of a tin can, the formula for the total surface area and the formula for the volume of the can. The first problem requires students to express the total surface area as a function of r by eliminating h. The second problem...
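As a sketch of the algebra behind this activity: fixing the volume V and using V = πr²h to eliminate h (h = V/(πr²)) turns the total surface area into a function of r alone. The 330,000 mm³ example can and the function name below are my own illustrative assumptions, not from the resource:

```python
import math

def can_surface_area(r, V):
    # A = 2*pi*r^2 (two ends) + 2*pi*r*h (wall) = 2*pi*r^2 + 2*V/r
    return 2 * math.pi * r**2 + 2 * V / r

# A 330 ml can (330,000 mm^3), radius in mm:
print(round(can_surface_area(33.0, 330_000.0), 1))  # prints: 26842.4 (mm^2)
```

Plotting or differentiating this function of r is a natural way to tackle the activity's optimization question.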
### Silver Award: Future Travel
These materials look at three possible projects that relate to sustainable transport:
* Communications project - students gather information and produce a balanced view on the costs and benefits of using hydrogen as fuel for vehicles.
* Practical project - students investigate the factors that affect...
### Reduction Mill
The reduction mill reduces the thickness of a strip of steel using a series of rollers, each roller making the steel slightly thinner. The percentage reduction is constant on each pair of rollers. The mathematics used to calculate the actual reduction is similar to that used when calculating compound interest....
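The compound-interest analogy can be made concrete: a constant percentage reduction per roller pair compounds multiplicatively. The 2.0 mm strip, 10% reduction, and 5 roller pairs below are made-up illustrative numbers, not values from the problem:

```python
def thickness_after(t0_mm, pct_reduction, n_pairs):
    # Each pair leaves (1 - p/100) of the incoming thickness,
    # so n pairs leave (1 - p/100)**n of the original.
    return t0_mm * (1 - pct_reduction / 100) ** n_pairs

print(round(thickness_after(2.0, 10, 5), 3))  # prints: 1.181
```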
### Drinks Cans
Drinks cans are made by stamping out circular discs from a sheet of tin. Given the dimensions of the sheet of tin and the diameter of the circle stamped out, students are required to calculate the wastage and to investigate whether there is a more efficient method. The problem requires students to be able to...
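A rough sketch of the wastage calculation: stamping one disc of diameter d from a d-by-d square of tinplate wastes the area outside the circle. The function name and the 66 mm example are my own assumptions:

```python
import math

def wastage_fraction(d):
    square = d * d                 # area of the d x d blank
    disc = math.pi * (d / 2) ** 2  # area of the stamped circle
    return (square - disc) / square

print(round(wastage_fraction(66.0), 3))  # prints: 0.215, i.e. about 21.5% wasted
```

The fraction (1 - π/4) is independent of d; investigating staggered (hexagonal) layouts, which waste less, is one direction the activity can take.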
### Coil Feed Line
This problem features a coil of tinplate being stored on a mandrel. In the first problem, students are presented with the diameter of the mandrel, the height from the floor, the width of the coil and the density of the steel. The problem is to calculate the maximum coil weight. A worked solution is included with...
### Structures Post-16
Produced by the Technology Enhancement Programme (TEP), this book contains five design and make challenges and seven study files. The materials and activities help students to investigate structures and the use of resistant materials. The study units look at, and help students to practise, skills and techniques...
### Manufacturing Post-16
This publication, from the Technology Enhancement Programme (TEP), helps students to understand the manufacturing process through a series of design challenges. Each has a specific brief and the topics include:
* Designing and making a helping hand
* Robotics: designing and making a walking robot
* CAD/CAM:...
### Product impacts: life cycle analysis
Produced by Practical Action, these materials help students to consider the impacts of a product through its whole life cycle. A presentation describes the concept of lifecycle analysis (LCA) and gives some examples. Students are then challenged to consider the life cycle of familiar products, from raw materials,...
### Six R's
This resource from Practical Action helps students look at the 6Rs, their definitions and products which illustrate the 6Rs in action. These are: rethink, reuse, recycle, repair, reduce and refuse. Students look at the definitions carefully again and rank each of the Rs in terms of which contributes most and which... | 665 | 3,421 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3 | 3 | CC-MAIN-2019-30 | latest | en | 0.912642 |
# 10th CLASS MATHS IMPORTANT KEY CONCEPTS
10th class Mathematics important key concepts, prepared by I.V. MOHAN REDDY, HM, ZPH School, KOTA NEMALI, RAJUPALEM MD, GUNTUR DT.
This material contains chapter-wise key concepts, important formulas, and an important bit bank covering: Real Numbers, Sets, Polynomials, Pair of Linear Equations in Two Variables, Quadratic Equations, Progressions, Similar Triangles, Tangents and Secants to a Circle, Trigonometry, and Statistics.
A Maths concept is the 'why' or 'big idea' of maths. Knowing a math concept means you know the workings behind the answer. You know why you got the answer you got and you don't have to memorize answers or formulas to figure them out.
How to use this study material?
Here are some tips for using this study material while revising for the summative examinations:
* Go through the syllabus given at the beginning and identify the units carrying more weightage.
* Use the suggestive blueprint and design of the question paper as a guideline to get a clear picture of the form of the question paper.
* Revise each topic/unit, and discuss any problems with your teacher.
* After revising all the units, solve the sample paper and do a self-assessment with the value points.
* Study the previous papers, which will help you know how the content is covered by the different questions.
* Underline or highlight key ideas to get a bird's-eye view of all the units at the time of the examination.
* Write down your own notes and make summaries with the help of this study material.
# CBSE CGPA Calculator
The Central Board of Secondary Education announces the CBSE 10th Result in the form of a Cumulative Grade Points Average (CGPA). Many students don't know how to calculate their marks from it. Through the CBSE CGPA Calculator 2019, students can easily calculate the CGPA for CBSE Class 10. More details about the CBSE online CGPA converter, how to calculate CBSE CGPA, etc. are given below.
## CBSE CGPA Calculator
What is CGPA?
Here are some points that will describe CGPA perfectly, have a look…
• CGPA is abbreviated as Cumulative Grade Points Average.
• Cumulative Grade Points Average is the average of the Grade Points scored by a student in 5 subjects, excluding the additional 6th subject, as per the Scheme of Studies.
• CBSE ranks 10th class students on the basis of CGPA rather than marks.
• It is calculated on a scale of 10, where the highest is 10 and the lowest is 4.
• The grade points given to students always depend upon their performance in the exam.
How to Calculate CBSE CGPA Marks?
Here we have given an easy method which you can use to calculate your CGPA. This method can also be used in the absence of the CBSE CGPA Calculator 2019. Have a look…
Reliable method to calculate score from CGPA:
1. For average CGPA:
Sum of all the Grade Points (GP) of each subject must be divided by 5.
Let's take an example: suppose an examinee scores the following GPs - Subject 1: 8, Subject 2: 9, Subject 3: 9, Subject 4: 9.5 and Subject 5: 7.
Then the sum of GPs will be: 8+9+9+9.5+7= 42.5
Dividing 42.5 by 5, we get 8.5 which is the aggregate CGPA.
2. For overall indicative percentage of marks:
We need to multiply the CGPA by 9.5. (9.5 × CGPA)
3. For subject-wise indicative percentage of marks:
We need to multiply 9.5 by the GP of the subject. (9.5 × GP of the subject)
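The arithmetic above is simple enough to script. Here is a minimal sketch (the function names are mine, not CBSE's):

```python
def cgpa(grade_points):
    """CGPA = average of the grade points of the 5 main subjects."""
    assert len(grade_points) == 5, "CBSE counts 5 subjects, excluding the 6th"
    return sum(grade_points) / 5

def indicative_percentage(cgpa_value):
    """CBSE's indicative percentage of marks: 9.5 x CGPA."""
    return 9.5 * cgpa_value

gp = [8, 9, 9, 9.5, 7]                  # the example from this article
print(cgpa(gp))                          # prints: 8.5
print(indicative_percentage(cgpa(gp)))   # prints: 80.75
```

The same 9.5 multiplier applied to a single subject's GP gives that subject's indicative percentage.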
What is CGPA Calculator CBSE?
The CBSE Class 10 CGPA Calculator is an online application that can help students calculate the scores they obtain in the board examination. Year by year, the number of students appearing for the 10th board examination keeps increasing.

For the same reason, the CBSE authorities replaced the marking system with a grading system, and this gave rise to the need for a Class 10 CBSE CGPA Calculator.
To use the CBSE CGPA Calculator online, you must know your score, your grades, or your percentage.
Now, there are three types of CBSE CGPA Calculator:
• CGPA To Percentage Calculator CBSE
• CBSE Percentage To CGPA Calculator
• CGPA CBSE Calculator for each subject
To calculate the percentage from CGPA, enter your CGPA score in the CGPA Calculator CBSE 10th and then hit the Show tab; your percentage score will appear in front of you.
To calculate the CGPA from a percentage, enter your percentage score in the CGPA Calculator for CBSE and then hit the Show tab; your CGPA will appear in front of you.
There is another use of the CGPA Calculator CBSE 10th: you can also calculate your subject-wise score with the help of the grades obtained. All you need to do is enter the subject number and choose the grade in the CGPA Calculator. Once you do, your subject-wise score will appear in front of you.
Press Here For: CBSE Official Website
How to calculate percentage from CGPA?
To calculate percentage from CGPA all you need to do is to multiply the CGPA with 9.5. Furthermore you may also use the CBSE CGPA Calculator 2019 or CGPA to Percentage Calculator for CBSE 10 that is available online.
How to calculate CGPA from subject grade points?
2. Now divide the number obtained by 5
3. The resultant number is your CGPA.
4. Further you may also use the CBSE CGPA Grade Calculator
Meaning of Abbreviations found in results:
QUAL Qualified for Admission to Higher Classes EIOP Eligible for Improvement of Performance NIOP Not Eligible for Improvement of Performance XXXX Appeared for Upgradation of Performance/ Additional Subject TRNS Transfer Case ABST Absent N.E Not Eligible N.R Not Registered R.W Result Withheld UFM Unfair Means SJD Subjudice
Get Best Answer: How You Can Choose Best Career
CGPA to Percentage Chart:
CGPA Percentage 10 95 9.9 94.05 9.8 93.1 9.7 92.15 9.6 91.2 9.5 90.25 9.4 89.3 9.3 88.35 9.2 87.4 9.1 86.45 9 85.5 8.9 84.55 8.8 83.6 8.7 82.65 8.6 81.7 8.5 80.75 8.4 79.8 8.3 78.85 8.2 77.9 8.1 76.95 8 76 7.9 75.05 7.8 74.1 7.7 73.15 7.6 72.2 7.5 71.25 7.4 70.3 7.3 69.35 7.2 68.4 7.1 67.45 7 66.5 6.9 65.55 6.8 64.6 6.7 63.65 6.6 62.7 6.5 61.75 6.4 60.8 6.3 59.85 6.2 58.9 6.1 57.95 6 57 5.9 56.05 5.8 55.1 5.7 54.15 5.6 53.2 5.5 52.25 5.4 51.3 5.3 50.35 5.2 49.4 5.1 48.45 5 47.5 4.9 46.55 4.8 45.6 4.7 44.65 4.6 43.7 4.5 42.75 4.4 41.8 4.3 40.85 4.2 39.9 4.1 38.95 4 38
How To Convert Percentage To CGPA?
If you want to convert Percentage to CGPA then all you need to do is to divide your Percentage by 9.5 and result will be your CGPA
For example, to convert 68 percent to CGPA, we divide it by 9.5 and the resulting number 7.2 is the CGPA. To calculate CGPA, add the grade-points of five subjects excluding the additional subject and divide the sum total by 5. The result will be your CGPA.
For example, if you got 10 in Maths, 9 in English, 10 in Science, 7 in Social Studies and 8 in Hindi, then your CGPA will be (10+9+10+7+8)/5 = 8.8
Get Best Answer: How to Choose Right Career
Reference Grade Table Used By CBSE CGPA Calculator:
Scholastic-A
Marks Range Grade Grade Point 91-100 A1 10.0 81-90 A2 9.0 71-80 B1 8.0 61-70 B2 7.0 51-60 C1 6.0 41-50 C2 5.0 33-40 D 4.0 21-32 E1 00-20 E2
Scholastic-B & Life Skills
Grade Grade Point A + 5 A 4 B + 3 B 2 C 1
Co-Scholastic Activities and Health & Physical Education | 1,919 | 6,183 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.140625 | 3 | CC-MAIN-2020-40 | longest | en | 0.918 |
https://tableau.toanhoang.com/creating-pointed-podium-bar-charts-in-tableau/ | 1,627,743,432,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046154089.68/warc/CC-MAIN-20210731141123-20210731171123-00307.warc.gz | 572,063,047 | 31,201 | A few weeks ago I created a tutorial for making a Podium Bar Chart. This week we are going to draw a Point Podium Bar Chart in Tableau using Polygons. I hope you enjoy this one.
Note: This is an alternative type of data visualisation, and sometimes pushed for by clients. Please always look at best practices for data visualisations before deploying this into production.
## Data
We will start by loading the Sample Superstore data into Tableau Desktop / Tableau Public.
Note: If you have Tableau Desktop, you can use the Sample data source, but if you are using Tableau Public, download and load the following data source.
## Calculated Fields
With our data set loaded into Tableau, we are going to create the following Bins and Calculated Fields:
@Point Factor Parameter
• Set Name to @Point Factor
• Set Data type to Integer
• Set Allowable values to Range
• Set Minimum to 10
• Set Maximum to 50
• Set Step size to 10
• Set Current value to 10
@Spacing Factor Parameter
• Set Name to @Spacing Factor
• Set Data type to Float
• Set Current value to 1.2
Path
``IF [Ship Mode] = "First Class" THEN 1 ELSE 5 END``
Path (bin)
• Right-click on Path, go to Create and select Bins…
• In the Edit Bins dialogue window:
• Set New field name to Path (bin)
• Set Size of bins to 1
• Click Ok
Index
``INDEX()``
TC_Sales
``WINDOW_SUM(SUM([Sales]))``
TC_Max Sales
``WINDOW_MAX(SUM([Sales]))``
TC_Rank
``RANK_UNIQUE([TC_Sales])``
TC_Middle
``INT(WINDOW_MEDIAN([TC_Rank]))``
TC_Order
``````IF [TC_Rank] = 1 THEN
[TC_Middle]
ELSEIF [TC_Rank]%2 <> 0 THEN
[TC_Middle]+([TC_Rank]/2)-0.5
ELSE
[TC_Middle]-([TC_Rank]/2)
END``````
X
``````IF [Index] = 1 OR [Index] = 5 THEN
0
ELSEIF [Index] = 3 THEN
[TC_Sales]+([TC_Max Sales]/[@Point Factor])
ELSE
[TC_Sales]
END``````
Y
``````IF [Index] = 1 OR [Index] = 2 THEN
1
ELSEIF [Index] = 3 THEN
0.5
ELSE
0
END + ([TC_Order]*[@Spacing Factor])``````
Label Rank
``````IF [Index] = 3 THEN
[TC_Rank]
ELSE
NULL
END``````
Label Sales
``````IF [Index] = 3 THEN
[TC_Sales]
ELSE
NULL
END``````
Label Sub-Category
``````IF [Index] = 3 THEN
WINDOW_MAX(MAX([Sub-Category]))
ELSE
NULL
END``````
With this done, let us start creating our data visualisation.
## Worksheet
We will now build our worksheet:
• Change the Mark Type to Polygon
• Drag Sub-Category to the Detail Mark
• Drag Path (bin) onto the Columns Shelf
• Right-click on this pill and ensure that Show Missing Values is selected
• Drag this pill onto the Path Mark
• Drag X onto the Columns Shelf
• Right-click on this pill, go to Compute Using and select Path (bin)
• Drag Y onto the Rows Shelf
• Right-click on this pill, go to Compute Using and select Path (bin)
• Right-click on this pill and go to Edit Table Calculations
• In Nested Calculations select TC_Rank and ensure that only Sub-Category is selected
• In Nested Calculations select TC_Middle and ensure that only Sub-Category is selected
If all goes well, you should now see the following:
We will now adjust our data visualisation to get closer to our end product
• Drag TC_Sales onto the Color Mark and adjust
• Ctrl (or Command) and Drag and Drop the X pill to the right (this should duplicate it)
• In the X (2) Marks Panel
• Change the Mark Type to Line
• Drag Label Rank onto the Label Mark
• Right-click on this pill, go to Compute Using and select Path (bin)
• In Nested Calculations select TC_Rank and ensure that only Sub-Category is selected
• Drag Sales Label onto the Label Mark
• Right-click on this pill, go to Compute Using and select Path (bin)
• Drag Sub-Category Label onto the Label Mark
• Right-click on this pill, go to Compute Using and select Path (bin)
You should now see the following:
We will now adjust the cosmetics to end up with our final data visualisation:
• Hide the Axis Headers
• Hide the Row and Column Dividers
• Hide the Grid Lines
• Adjust the Label
• Adjust the color
• Remove the Tooltips
and we should now see something like this:
Note: Add animations and filters to see this bad boy in action.
and boom, we are done! I hope you enjoyed creating this data visualization and learned some cool techniques as well. As always, you can find this data visualisation on Tableau Public at https://public.tableau.com/profile/toan.hoang#!/vizhome/PointedPodiumBarChart/PointedPodiumBarChart
## Summary
I hope you all enjoyed this article as much as I enjoyed writing it and as always do share the love. Do let me know if you experienced any issues recreating this Visualization, and as always, please leave a comment below or reach out to me on Twitter @Tableau_Magic. Do also remember to tag me in your work if you use this tutorial.
If you like our work, do consider supporting us on Patreon, and for supporting us, we will give you early access to tutorials, exclusive videos, as well as access to current and future courses on Udemy: https://www.patreon.com/tableaumagic
Toan Hoang, Tableau Zen Master 2020, has over 15 years of experience in Business Intelligence, Data Management, Big Data, Data Lakes, Internet of Things (IoT), Data Visualisation and the Data Analytics space; the last six years has been dedicated to delivering end-to-end solutions using Tableau.
1. Hi Magician,
I am not getting the output as compared to the tutorial in Tableau Magic, I have cross checked multiple times with the calculations and implementations. I have sent a mail with the workbook, could you pls check and let me know where I have gone wrong.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | 1,457 | 5,547 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.546875 | 3 | CC-MAIN-2021-31 | latest | en | 0.744841 |
https://kr.mathworks.com/matlabcentral/cody/problems/42700-find-elements-of-set-a-those-are-not-in-set-b/solutions/1735001 | 1,596,998,058,000,000,000 | text/html | crawl-data/CC-MAIN-2020-34/segments/1596439738562.5/warc/CC-MAIN-20200809162458-20200809192458-00441.warc.gz | 373,771,676 | 15,491 | Cody
# Problem 42700. Find elements of set A those are not in set B
Solution 1735001
Submitted on 24 Feb 2019 by Athi
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Pass
A = [1 2 3 4 5]; B = [100 200 300 4 500]; y_correct = [1 2 3 5]; assert(isequal(sort(find_A_not_in_B(A,B)),sort(y_correct)))
2 Pass
A = [12 1 2 -10 5]; B = [12 5]; y_correct = [-10 1 2]; assert(isequal(sort(find_A_not_in_B(A,B)),sort(y_correct)))
3 Pass
A = [-5 -2]; B = [0 0 0 0 0 1 0]; y_correct = [-5 -2]; assert(isequal(sort(find_A_not_in_B(A,B)),sort(y_correct))) | 237 | 663 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.921875 | 3 | CC-MAIN-2020-34 | latest | en | 0.554985 |
https://electronics.stackexchange.com/questions/28404/what-is-the-voltage-range-of-a-standard-headphone-jack-from-a-phone/28520 | 1,713,311,461,000,000,000 | text/html | crawl-data/CC-MAIN-2024-18/segments/1712296817112.71/warc/CC-MAIN-20240416222403-20240417012403-00818.warc.gz | 200,008,858 | 46,090 | # What is the voltage range of a standard headphone jack from a phone?
I want to connect the output from the audio jack of an iPhone to an Arduino.
What voltage range can I expect to see on the audio lines from the iPhone? I assume that turning the volume up on the phone will produce a large AC voltage, but how large does it go up to?
I want to make sure that it wont exceed the voltage level that an Arduino can read on its input pins. Will I need to provide any circuitry between the iPhone and the Arduino?
• This question makes no sense without explaining what you want the arduino to do with the audio signal. In any case, you probably need to AC couple the audio signal and add 1/2 supply voltage on the arduino side. Mar 21, 2012 at 13:20
• I measured an iPod 3 at about a volt peak to peak. Sep 27, 2015 at 22:17
• It supplies 5V. at a low amp rate.
– Alex
Mar 22, 2016 at 20:40
• @Alex what does that mean? Mar 22, 2016 at 20:41
• What's a MP3 Jack? Line out (commercial spec, not broadcast spec) drives 1 milliwatt to 600 ohms load (0.77 volts RMS; 2.2 volts peak-to-peak )
– PkP
Mar 22, 2016 at 20:42
Commercial line out specification is to be able to drive 1 milliwatt to a 600 ohm load. For a sine wave, this means a voltage of 0.77 volts RMS (2.2 volts peak-to-peak) and a current of 1.3 milliamperes RMS (3.6 milliamperes peak-to-peak).
• Line out levels are very different from headphone levels; headphone impedances range from 600Ω to as low as 8Ω. Mar 22, 2016 at 20:59
• @uint, correct. And that's why there isn't a standard for a headphone out - if you don't take the European Norm EN60065 as such. That norm is for hearing protection and from memory I recall that it limits headphone output to something like 150 millivolts if the properties of the connected headphones are not known.
– PkP
Mar 22, 2016 at 21:09
• Good answer, but can you cite any source for this? Mar 4, 2017 at 6:52
• @ElliottB You might want to read en.wikipedia.org/wiki/Alignment_level but the most important thing is: what are you aiming to do? Because the ancient 0dBU (0.77VRMS) line out spec really is ancient and nowadays every manufacturer (outside the field of broadcasting anyway) does it at any of a multitude of semirandom ways, depending on what the analog power voltage level happens to be in that particular product. What do you want/need/like to be compatible with?
– PkP
Mar 6, 2017 at 7:36
Unfortunately there is a lot of "audiophile" nonsense around headphone amplifiers and headphone impedance. Probably the top 5 results for "headphone impedance" on Google are just wrong. This site contains some useful information (though a lot of it is wrong too).
But anyway if you look at the graphs which I assume are correct, you can see that in the audio frequency range most headphones have a fairly small reactance compared to their resistance. And most headphones have an impedance around 16-32 Ohms with some crazy "audiophile" headphones having higher impedance (e.g. 300 Ohms). He suggests that 5 mW is sufficiently loud for portable headphones. Audiophile headphones will require higher power.
Power is $P=V^2 / R$ so $V = \sqrt{R*P}$, so high impedance headphones will need a much higher output voltage because they require more power and have a higher impedance. Anyway, for the Sony MDR-EX51 headphones shown on the page linked above you can see that they are fairly close to a simple 17 Ohm resistor. At 5 mW that would mean a voltage of 0.3 V and a current of 16 mA.
An Arduino can supply this fairly easily but I don't think you can just hook it up to PWM since 5V across 17 Ohms gives 300 mA which is well above Arduino's 25 mA limit. A simple solution may be to insert a 4.7 V / 16 mA = 290 Ohm resistor in series with the pin.
I haven't tried any of this - you'll have to experiment!
• The OP wanted to go from phone to Arduino. Your answer is the other way around. Anyway, that was four years ago. He's probably married now and has ... Feb 20, 2016 at 16:29
• Ah yes I misread. But the information is the same. And who cares if it is 4 years old? There are no good answers and it is highly ranked in Google. Feb 20, 2016 at 18:16
• Indeed, this is useful answer. I measured similar ~0.2Vp-p from my phone's headphone output with oscilloscope and this answer gave me confirmation that it is a typical value.
– jpa
May 4, 2018 at 14:08
• +1, good answer. My biggest pet peeve is finding the exact answer I need on Google to read some guy in a forum saying "Why do you need this? This does not belong in the foo forum, it's more of a bar issue. Also, <I'm projecting my confusion onto strangers>." Jul 11, 2022 at 18:22
Check out: http://en.wikipedia.org/wiki/Line_level
The most common nominal level for consumer audio equipment is −10 dBV, ... Expressed in absolute terms, a signal at −10 dBV is equivalent to a sine wave signal with a peak amplitude of approximately 0.447 volts, or any general signal at 0.316 volts root mean square (VRMS). ... There is no absolute maximum, and it depends on the circuit design.
This is however for the "Line out" plug which, apparently, carries a signal at a fixed amplitude and lets the receiving end determine the volume.
In most cases changing the volume setting on the source equipment does not vary the strength of the line out signal.
For a speaker-driving headphone plug I believe things might get more complicated, since that signal is really rather a current signal (used to drive the coil of a speaker).
In contrast to line level, there are ... those used to drive headphones and loudspeakers. The strength of the various signals does not necessarily correlate with the output voltage of a device; it also depends on the source's output impedance which determines the amount of current available to drive different loads.
I guess your best bet might be to look at the wave with an oscilloscope, which should have a high-impedance input like the Arduino's analog input (ADC).
(I'm no expert, take with a grain of salt and feel free to edit)
Edit: The Wikipedia article I used as a source has been edited a lot since I originally posted this answer. Among other changes, the qouted pieces above have been removed/changed. Therefore I'm striking most of this answer out and recommend referring to the Wikipedia article linked at the top.
• Awesome answer! I didn't know it was called line level, nor the difference between a preamp and an amp :) Mar 22, 2012 at 10:18
• @clabacchio: Nor did I know line level "carries a signal at a fixed amplitude". Hmm... Sep 27, 2015 at 4:32
• Could you clean up your answer @GummiV? It's primarily a wall of strikethrough text Aug 6, 2017 at 17:00
• You could always link to the specific edit of the wiki page. Aug 14, 2020 at 18:46
While "line level" audio is typically 1 mW into 600 Ω, and this comes out to 1.1 V p for a sine, audio is far from a sine. Even if the spec is adhered to and you only get 775 mV RMS on average, the peaks can be considerably higher than 1.1 V. It is generally good to accept and handle without distortion peaks up to ± 5 V at least.
• Olin is correct. And for broadcast equipment you must accept even considerably higher levels.
– PkP
Mar 22, 2016 at 21:11
• @PkP: Yes. commercial gear typically uses +/- 15 V power supplies for line-level interfaces. Mar 22, 2016 at 22:37
There is no hard-and-fast rule for headphone jacks; be it a laptop, MP3 player or a regular stereo system.
I would say that a typical headphone output adheres to Line Level specifications, although for headphones they become more of a guideline than a stringent set of figures.
As you have already discovered, different devices have different output levels.
The power that can be provided by your PC is, for example, X milliwatts. As the PC power supply can give up 12V to the soundcard, the XmW could well be generated with an emphasis on the voltage rather than the current. Some top end motherboards (the latest Asus ROG boards, for example) boast a headphone-jack output of over 2V rms.
A portable MP3 player may only have a 3.7V lithium battery. Its output power could be the same XmW as the PC, but at a lower voltage therefore higher current - without some boost convertors it would be impossible to match the voltage of the aforementioned high-end motherboard.
A fundamental difference between a 'headphone output' and a 'line out' is that the latter isn't designed to power a low impedance load. I tend to assume that the input impedance of a generic audio device to be 50kOhms; if it's ever critical to know then it's typically stated by the device manufacturer. Headphones or earphones can be as low as 32 Ohms, meaning that plugging headphones into a Line Out socket could result in both poor volume and poor quality. There is not generally the same problem with connecting a line-level device to a headphone output unless you consider a dedicated headphone amplifier; an audiophile might argue that the output would become unbalanced.
Thus there is no correct answer. Perhaps start with 1.4V RMS as a maximum and then increase or decrease as you work through your prototype.
• Also, the output voltage of a heaphone jack will depend on the volume setting, and on the nature of the sound at the time you measure it. Mar 22, 2016 at 22:39
The arduino would need a higher voltage.
Use an non inverting OP amp on the line which should bring the voltage to about 2ish Volts, something which is better for the arduino.
:)
http://www.instructables.com/id/Arduino-Audio-Input/step3/Non-Inverting-Amplifier/
• The Arduino probably needs a DC offset added, but that is easily accomplished via passive means. Depending on what the functional goal is, there's likely to be enough voltage swing to measure substantial differences with the Arduino's ADC, or even digitally threshold for an NRZ protocol. However, yes, for highest analog fidelity a pre-amp could well be needed to utilize the entire ADC range, and is probably cheaper or at least easier to source than an audio transformer these days. Feb 11, 2014 at 20:06 | 2,494 | 10,067 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.5625 | 3 | CC-MAIN-2024-18 | latest | en | 0.918241 |
https://www.tutorialspoint.com/inorder-traversal-of-a-threaded-binary-tree-in-cplusplus | 1,675,270,867,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00273.warc.gz | 1,051,800,247 | 11,502 | # Inorder Traversal of a Threaded Binary Tree in C++
Here we will see the threaded binary tree data structure. We know that the binary tree nodes may have at most two children. But if they have only one children, or no children, the link part in the linked list representation remains null. Using threaded binary tree representation, we can reuse that empty links by making some threads.
If one node has some vacant left or right child area, that will be used as thread. There are two types of threaded binary tree. The single threaded tree or fully threaded binary tree.
For fully threaded binary tree, each node has five fields. Three fields like normal binary tree node, another two fields to store Boolean value to denote whether link of that side is actual link or thread.
This is the fully threaded binary tree
## Algorithm
inorder():
Begin
temp := root
repeat infinitely, do
p := temp
temp = right of temp
if right flag of p is false, then
while left flag of temp is not null, do
temp := left of temp
done
end if
if temp and root are same, then
break
end if
print key of temp
done
End
## Example
Live Demo
#include <iostream>
#define MAX_VALUE 65536
using namespace std;
class N { //node declaration
public:
int k;
N *l, *r;
bool leftTh, rightTh;
};
private:
N *root;
public:
ThreadedBinaryTree() { //constructor to initialize the variables
root= new N();
root->r= root->l= root;
root->leftTh = true;
root->k = MAX_VALUE;
}
void insert(int key) {
N *p = root;
for (;;) {
if (p->k< key) { //move to right thread
if (p->rightTh)
break;
p = p->r;
}
else if (p->k > key) { // move to left thread
if (p->leftTh)
break;
p = p->l;
}
else {
return;
}
}
N *temp = new N();
temp->k = key;
temp->rightTh= temp->leftTh= true;
if (p->k < key) {
temp->r = p->r;
temp->l= p;
p->r = temp;
p->rightTh= false;
}
else {
temp->r = p;
temp->l = p->l;
p->l = temp;
p->leftTh = false;
}
}
void inorder() { //print the tree
N *temp = root, *p;
for (;;) {
p = temp;
temp = temp->r;
if (!p->rightTh) {
while (!temp->leftTh) {
temp = temp->l;
}
}
if (temp == root)
break;
cout<<temp->k<<" ";
}
cout<<endl;
}
};
int main() {
tbt.insert(56);
tbt.insert(23);
tbt.insert(89);
tbt.insert(85);
tbt.insert(20);
tbt.insert(30);
tbt.insert(12);
tbt.inorder();
cout<<"\n";
}
## Output
Threaded Binary Tree
12 20 23 30 56 85 89 | 678 | 2,307 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.609375 | 3 | CC-MAIN-2023-06 | latest | en | 0.649212 |
https://raw.githubusercontent.com/facaiy/book_notes/master/Introduction_to_Algorithms/15_Dynamic_Programming/note.ipynb | 1,631,873,892,000,000,000 | text/plain | crawl-data/CC-MAIN-2021-39/segments/1631780055632.65/warc/CC-MAIN-20210917090202-20210917120202-00519.warc.gz | 526,445,377 | 5,568 | { "cells": [ { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Populating the interactive namespace from numpy and matplotlib\n" ] } ], "source": [ "%pylab inline\n", "import pandas as pd\n", "\n", "import numpy as np\n", "from __future__ import division\n", "import itertools\n", "\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "\n", "import logging\n", "logger = logging.getLogger()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "15 Dynamic Programming\n", "============\n", "\n", "for optimization problems: \n", "\n", "1. divide-and-conquer \n", " partition into **disjoint** subproblems, solve recursively, and then combine. \n", " \n", "2. dynamic programming \n", " subploems **overlap** \n", " developing by four steps: \n", " 1. Characterize the **structure** of an optimal solution. \n", " 2. **Recursively** define the value of an optimal solution. \n", " 3. Compute the value of an optimal solution, typically in a **bottom-up** fashion. \n", " 4. [opt] Construct an optimal solution from computed information.\n", " \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 15.1 Rod cutting\n", "\n", "##### rod-cutting problem\n", "Given: \n", "1. a rod of length $n$ inches. \n", "2. a table of prices $p_i$ for $i = 1, 2, \\cdots, n$ \n", "\n", "Problem: \n", "determine the __maximum revenue__ $r_n$ obtainable by cutting up the rod and selling the pieces.\n", "\n", "Solution: \n", "$$r_n = max_{1 \\leq i \\leq n} (p_i + r_{n-i})$$\n", "\n", "Implementation: \n", "1. recursive top-down \n", "2. 
dynamic programming \n", "   + **optimal substructure**: optimal solutions to a problem incorporate optimal solutions to related subproblems. \n", "   + uses additional memory to save computation time \n", "   + two ways:\n", "     1. top-down with memoization \n", "        + it “remembers” what results it has computed previously. \n", "     2. bottom-up method. \n", "        + We sort the subproblems by size and solve them in size order, smallest first. \n", "        + The bottom-up approach often has much better constant factors, since it has less overhead for procedure calls.\n", "   + **subproblem graphs**, as shown in Fig (15.4). \n", "     - A dynamic-programming approach runs in **polynomial** time when **the number of distinct subproblems** involved is **polynomial** in the input size and we can solve each such subproblem in **polynomial** time. \n", "     - namely, the running time of dynamic programming is **linear** in the number of vertices and edges.\n", "   + **Reconstructing a solution**: \n", "     We can extend the dynamic-programming approach to record not only the optimal value computed for each subproblem, but also a choice that led to the optimal value. 
With this information, we can readily print an optimal solution.\n", "" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "length i: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n", "price p_i: [ 1 5 8 9 10 17 17 20 24 30]\n", "max price: [1.0, 5.0, 8.0, 10.0, 13.0, 17.0, 18.0, 22.0, 25.0, 30.0]\n" ] } ], "source": [ "price = np.array([1, 5, 8, 9, 10, 17, 17, 20, 24, 30])\n", "\n", "print('length i: {}'.format(range(len(price))))\n", "print('price p_i: {}'.format(price))\n", "\n", "def bottom_up_cut_rod(p_, n):\n", "    \"\"\"Solve cut_rod problem by bottom-up dynamic programming.\n", "    \n", "    \"\"\"\n", "    assert (len(p_) >= n), 'n should be at most the max length {}'.format(len(p_))\n", "    \n", "    p = np.insert(p_, 0, 0)  #set price_0 = 0 \n", "    \n", "    r = np.zeros(n+1)\n", "    \n", "    for k in range(1, n+1):\n", "        q = -np.inf\n", "        for m in range(1, k+1):\n", "            q = max(q, p[m]+r[k-m])\n", "        r[k] = q\n", "    \n", "    return r[-1]\n", "\n", "\n", "print('max price: {}'.format([bottom_up_cut_rod(price, n) for n in range(1, len(price)+1)]))" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "### 15.2 Matrix-chain multiplication\n", "\n", "##### matrix-chain multiplication problem\n", "**given** a chain $\\langle A_1, A_2, \\cdots, A_n \\rangle$ of n matrices, where for $i = 1, 2, \\cdots, n$, matrix $A_i$ has dimension $p_{i-1} \\times p_i$, \n", "**problem**: fully parenthesize the product $A_1 A_2 \\cdots A_n$ in a way that minimizes the number of scalar multiplications.\n", "\n", "**solutions**: \n", "\n", "1. exhaustively checking all possible parenthesizations. $\\Omega(2^n)$ \n", "   \$$\n", "   P(n) = \\begin{cases}\n", "   1 & \\quad \\text{if } n = 1 \\\\\n", "   \\sum_{k=1}^{n-1} P(k) P(n-k) & \\quad \\text{if } n \\geq 2 \\\\\n", "   \\end{cases}\n", "   \$$\n", "\n", "2. Applying dynamic programming. $\\Omega(n^3)$ \n", "   define $A_{i \\dotso j} = A_i A_{i+1} \\dotsm A_j$ \n", "   \n", "   1. 
The structure of an optimal parenthesization. \n", "      suppose $A_{i \\dotso k} A_{k+1 \\dotso j}$ is the optimal solution of $A_{i \\dotso j}$, where $i \\leq k < j$. \n", "      then $A_{i \\dotso k}$ must be the optimal solution of $A_i A_{i+1} \\dotsm A_k$, as well as $A_{k+1 \\dotso j}$.\n", "      \n", "      Proof: \n", "      if there existed $Cost(\\hat{A}_{i \\dotso k}) < Cost(A_{i \\dotso k})$, \n", "      then $\\hat{A}_{i \\dotso k} A_{k+1 \\dotso j}$ would be a better solution ==> contradiction. \n", "      \n", "   2. A recursive solution. \n", "      Let $m[i,j]$ be the minimum number of scalar multiplications needed to compute the matrix $A_{i \\dotso j}$. \n", "      \n", "      \$$\n", "      m[i,j] = \\begin{cases}\n", "      0 & \\quad \\text{if } i = j \\\\\n", "      \\min_{i \\leq k < j} {m[i,k] + m[k+1,j] + p_{i-1} p_k p_j} & \\quad \\text{if } i < j\n", "      \\end{cases}\n", "      \$$ \n", "      \n", "   3. Computing the optimal costs. \n", "      $m[i,j]$ depends on all $m[i,k]$, $m[k+1,j]$. \n", "      hence, the algorithm should fill in the table $m$ in a manner that corresponds to solving the parenthesization problem on matrix chains of increasing length. \n", "      \n", "   4. Constructing an optimal solution. \n", "      construct table $s$ whose rows are $1, \\dotsc, n-1$ and columns are $2, \\dotsc, n$. \n", "      Each entry $s[i,j]$ records the optimal solution $k$ for $A_{i \\dotso j}$." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "#todo" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 15.3 Elements of dynamic programming\n", "\n", "Two keys: optimal substructure and overlapping subproblems.\n", "\n", "#### Optimal substructure\n", "##### common pattern\n", "1. A solution to the problem consists of making a choice.\n", "\n", "2. Suppose: for a given problem, you are given the choice that leads to an optimal solution. \n", "\n", "3. Given the choice, you determine which subproblems ensue and how to best characterize the resulting space of subproblems.\n", "\n", "4. 
By using \"cut-and-paste\" technique, you show that the solutions to the subproblems used within an optimal solution to the problem must themselves be optimal.\n", "\n", "To characterize the space of subproblems, a good rule of thumb says to try to keep the space **as simple as possible** and then **expand it as necessary**.\n", "\n", "\n", "Optimal substructure **varies** across problem domains in two ways:\n", "\n", "1. how many subproblems an optimal solution to the original problem use.\n", "\n", "2. how many choices we have in detering which subproblem(s) to use in an optimal solution.\n", "\n", "\n", "the **running time** of a dynamic-programming algorithm depends on the product of two factors: \n", "\n", "1. the number of subproblems overall.\n", "\n", "2. how many choices we look at for each subproblem.\n", "\n", "Dynamic programming often uses optimal substructure in a **bottom-up fashion**.\n", "\n", "The **cost** of the problem solution is usually the subproblem costs plus a cost that is directly attributable to the choice itself.\n", "\n", "\n", "##### subtleites\n", "Be careful NOT to assume that the optimal substructure applies when it does not. \n", "eg: Unweighted shortest path VS Unweighted longest simple path.\n", "\n", "subproblems are **independent**: \n", "The solution to one subproblem does not affect the solution to another subproblem of the same problem. \n", "in another word, **the subproblems do not share resources**.\n", "\n", "\n", "#### Overlapping subproblems\n", "\n", "\n", "#### Reconstructing an optimal solution\n", "store which choice we made in each subproblem in a table.\n", "\n", "#### Memoization\n", "maintains an entry in a table for the solution to each subproblem.\n", "\n", "In general practice, we prefer to use bottom-up rather than top-down memoized algorithm. While if some subproblems need not be solved at all, the memoized solution has the advantage." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 15.4 Longest common subsequence\n", "\n", "Steps:\n", "\n", "1. Characterizing the optimal substructure: \n", " Let X = and Y = be sequences, and let Z = be any LCS of X and Y. \n", " \n", " 1. If x_m = y_n, then z_k = x_m = y_n, and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.\n", " \n", " 2. If X_m \\neq y_n, then z_k \\neq x_m implies that Z is an LCS of X_{m-1} and Y.\n", " \n", " 3. If x_m \\neq y_n, then z_k \\neq y_n implies that Z is an LCS of X and Y_{n-1}.\n", " \n", "2. A recursive solution. \n", "\n", " \$$\n", " c[i,j] = \\begin{cases}\n", " 0 \\quad & \\text{if } i = 0 \\text{ or } j = 0 \\\\\n", " c[i-1,j-1] + 1 \\quad & \\text{if } i, j > 0 \\text{ and } x_i = y_j \\\\\n", " max(c[i,j-1], c[i-1,j]) \\quad & \\text{if } i, j > 0 \\text{ and } x_i \\neq y_j\n", " \\end{cases}\n", " \$$\n", " \n", "3. Computing the length of an LCS. \n", " code.\n", " \n", "4. Constructing. \n", " b[i,j] points to the table entry corresponding to the optimal subproblem solution chosen when computing c[i,j].\n", " \n", " \n", "##### improving the code\n", "1. eliminate the b table.\n", "\n", "2. leave only two rows of table c at a time." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true }, "outputs": [], "source": [ "#todo" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 15.5 Optimal binary search trees\n", "\n", "**Given**: \n", "a sequence K = of n distinct keys in sorted order. For each k_i, we have a probability p_i that a serch will be for k_i. 
And we also have n+1 \"dummy keys\"$$ representing values not in $K$, and the probability $q_i$ for each $d_i$.\n", "\n", "$$\\sum_{i=1}^{n} p_i + \\sum_{i=0}^{n} q_i = 1$$\n", "\n", "We want to build a search trees $T$ for $K$, and define the expected cost of a search in $T$ is: \n", "\\begin{align}\n", " E[cost] &= \\sum_{i=1}^n (depth_T(k_i) + 1) p_i + \\sum_{i=0}^n (depth_T(d_i) + 1) q_i \\\\\n", " &= 1 + \\sum_{i=1}^n depth_T(k_i) p_i + \\sum_{i=0}^n depth_T(d_i) q_i\n", "\\end{align}\n", "\n", "**Problem**: \n", "We wish the expected cost of $T$ is the smallest.\n", "\n", "\n", "#### steps\n", "1. Optimal substructure \n", " If an optimal binary search tree $T$ has a subtree $T'$ containing keys $k_i, \\dotsc, k_j$, then this subtree $T'$ must be optimal as well for the subproblem with keys $k_i, \\dotsc, k_j$ and dummy keys $d_{i-1}, \\dotsc, d_j$.\n", " \n", "2. A recursive solution \n", " \n", " define: $w[i,j] = \\sum_{l=i}^{j} p_l + \\sum_{l=i-1}^{j} q_l$\n", " \n", " \$$\n", " e[i,j] = \\begin{cases}\n", " q_{i-1} \\quad & \\text{if } j = i - 1 \\\\\n", " min_{i \\leq r \\leq j} \\{e[i,r-1] + e[r+1,j] + w[i,j]\\} \\quad & \\text{if } i \\leq j\n", " \\end{cases}\n", " \$$\n", " \n", "3. 
Computing \n", " $$w[i,j] = w[i,j-1] + p_j + q_j$$" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": true }, "outputs": [], "source": [ "#todo" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" } }, "nbformat": 4, "nbformat_minor": 0 } | 3,985 | 11,963 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 4, "x-ck12": 0, "texerror": 0} | 3.390625 | 3 | CC-MAIN-2021-39 | latest | en | 0.713063 |
https://maker.pro/forums/threads/help-with-bldc-sensorless-motor-control.298293/ | 1,675,361,902,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764500035.14/warc/CC-MAIN-20230202165041-20230202195041-00723.warc.gz | 402,447,446 | 62,332 | # Help with BLDC SENSORLESS motor control
#### MoHo98
Dec 6, 2022
3
Hey there!
we tend to drive a specific bldc sensorless motor using two different algorithms.
both of them are based on zero crossing detection, a vey simple approach.
the algorithm one detects ZCs by analog comparators and of course its corresponsive interrupts to apply the next step of commutation.
the algorithm two detects ZCs using ADC and then comparing the BEMF to VBus/2.
Let me explain how this algorithm works.
firstly, an arbitrary number of steps applies to the motor in order to drive it in open loop mode. After that the algorithm switches to the closed loop mode. then speed is estimated using a timer
the speed is estimated by measuring the time between the point of ZC detection and the instant of the previous commutation step.
here is my broblem. the max speed achieved by algo.1 differes from algo.2 while the applied duty cycle is the same(for both almost 100%)
how could it be possible? I know that the speed is related to duty cycle so I tought the max speed would be the same no matter which algo I used.
I would be grateful if you could help.
#### kellys_eye
Jun 25, 2010
5,296
What speed does your ADC work at - it probably isn't 'instantaneous'.
#### MoHo98
Dec 6, 2022
3
What speed does your ADC work at - it probably isn't 'instantaneous'.
The cpu frequency is 8MHz and the ADC prescaler is 8 so its clock is 1MHz.
The maximum speed I wanna reach is almost 3000RPM.
My Microcontroller is STM8s if it helps.
I use the single mode of its adc for sampling.
Could you please be more specific?What do you mean by "instantaneous"?
I suppose it's not possible to use the continuous mode of the ADC because of the way the algorithm works(avr444 application note)
#### kellys_eye
Jun 25, 2010
5,296
What do you mean by "instantaneous"?
It takes a finite amount of time to deliver a conversion - it may be that even the slightest delay will affect speed. Try using the ADC in it's fastest conversion mode and see what difference it makes.
#### MoHo98
Dec 6, 2022
3
It takes a finite amount of time to deliver a conversion - it may be that even the slightest delay will affect speed. Try using the ADC in it's fastest conversion mode and see what difference it makes.
Hey again.
Based on what you said I tried using the adc in it's fastest conversion mode. For instance I changed its clock from 1MHz to 2MHz
as a result the conversion takes a shorter amount of time(14 * 1/2 us) but it doesn't improve the performance it even escalated.
the motor goes into stall mode and is unable to start rotating.
the same happened when I decreased its clock! it seems that 1MHz is the optimized value.
I'm wondering if the low pass filter used for filtering phase BEMF voltage can cause a delay.
could it be the reason why I detect ZC points inaccurately?
Do you have any idea that can help me compare the true zero crossing point with the point I assume as zcp?
D
Replies
1
Views
2K
Jeroen
J
Replies
11
Views
823
Replies
16
Views
2K
P
Replies
4
Views
949
Tim Wescott
T
P
Replies
4
Views
1K
Paul Burke
P | 782 | 3,097 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.71875 | 3 | CC-MAIN-2023-06 | longest | en | 0.928009 |
https://py4u.org/questions/75712036/ | 1,679,800,069,000,000,000 | text/html | crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00469.warc.gz | 532,330,767 | 7,195 | # Can i use yield to get the result of all the result in nested function?
## Question:
I found many code to verify a valid binary tree, but I tried to creat a simple one, however it rise true all the time!
Sorry I don’t understand the recursion very well.
``````class TreeNode(object):
def __init__(self, x):
self.val = x
self.left = None
self.right = None
def is_BST():
if root:
return all(is_BST1(root))
def is_BST1(curnode):
if curnode.right:
if curnode.right.val>curnode.val:
is_BST1(curnode.right)
else:
yield False
if curnode.left:
if curnode.left.val<curnode.val:
is_BST1(curnode.left)
else:
yield False
else:
yield True
#tree creation code
print(is_BST)
``````
## Answers:
Your main problem is that this line:
`````` is_BST1(curnode.right)
``````
only returns a generator, but does nothing with it. You should at least iterate on that generator:
`````` ...
if curnode.right:
if curnode.right.val>curnode.val:
# iterate on the generator built from right node
for i in is_BST1(curnode.right):
yield i
else:
...
``````
and of course do the same for the left child…
There are several issues in your code:
• The results from the recursive calls are ignored. As you have created `is_BST1` as a generator, you would do the recursive call with `yield from`, like so:
``````yield from is_BST1(curnode.right)
``````
• Even with the above fix, you will get false positives. For instance for this tree:
`````` 2
/
1
3
``````
Your code will always yield True, but it fails to see that node with value 3 is violating the BST property as it is greater than 2.
• Not a real problem, but your code keeps looking further even when it has found a violation of the BST property. This brings no benefit. Once you have a violation, the algorithm should not investigate any other nodes and quit with `False`
• It would be more useful if you would pass the root of the tree to the `is_BST` function, so the caller can indicate which tree to validate.
To validate a BST properly, you need to do more than just compare a parent value with the value of a direct child. There are several approaches possible, including these two:
• You could create a generator that produces the tree values in in-order sequence, and then verify that the produced values are yielded in non-decreasing order
• You could pass down a "window" to each recursive call which defines a lower and upper bound for all values in the visited subtree, as you go deeper in the tree this window becomes more narrow.
Either will work. Since you were working with the idea of a generator (using `yield`) I will provide the code for the first idea:
``````from itertools import tee
def is_BST(root):
current, lead = tee(inorder(root))
next(lead, None)
return all(a <= b for a, b in zip(current, lead))
def inorder(curnode):
if curnode:
yield from inorder(curnode.left)
yield curnode.val
yield from inorder(curnode.right)
``````
Categories: questions Tags: ,
Answers are sorted by their score. The answer accepted by the question owner as the best is marked with
at the top-right corner. | 769 | 3,076 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.953125 | 3 | CC-MAIN-2023-14 | latest | en | 0.838562 |
http://cow.physics.wisc.edu/~craigm/idl/archive/msg00869.html | 1,513,032,144,000,000,000 | text/html | crawl-data/CC-MAIN-2017-51/segments/1512948514113.3/warc/CC-MAIN-20171211222541-20171212002541-00086.warc.gz | 67,449,736 | 1,922 | # Re: Svdc_complex
• Subject: Re: Svdc_complex
• From: Javier Sanchez Gonzalez <jsanchez(at)gbt.tfo.upm.es>
• Date: Fri, 02 Jul 1999 09:56:14 +0100
• Newsgroups: comp.lang.idl-pvwave
• Organization: RedIRIS
• References: <377BA376.641047B8@gbt.tfo.upm.es>
• Xref: news.doit.wisc.edu comp.lang.idl-pvwave:15525
```
Hi all,
if somebody wants to test this I send the procedure to calculate the
singular value of a complex matrix.
pro svdc_complex, M, Ew2, Eu2, Ev2
xdim=(size(M))(1)
ydim=(size(M))(2)
Ew2=complexarr(xdim)
Eu2=complexarr(xdim,ydim)
Ev2=complexarr(xdim,ydim)
M2=dblarr(2*xdim,2*ydim)
M2(0:xdim-1,0:ydim-1)=double(M)
M2(xdim:2*xdim-1,0:ydim-1)=-imaginary(M)
M2(0:xdim-1,ydim:2*ydim-1)=imaginary(M)
M2(xdim:2*xdim-1,ydim:2*ydim-1)=double(M)
catch, Error_status
if Error_status ne 0 then begin
; print, !err_string
Minv=0.*transpose(M)
end else begin
svdc, m2,w,u,v,/double
for i=0, xdim-1 do begin
Ew2(i)=w(2*i)
Eu2(i,*)=u(2*i,0:ydim-1)+dcomplex(0,1)*u(2*i,ydim:2*ydim-1)
Ev2(i,*)=v(2*i,0:ydim-1)+dcomplex(0,1)*v(2*i,ydim:2*ydim-1)
endfor
end
end
Thanks all
Javier Sanchez Gonzalez
``` | 467 | 1,106 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.671875 | 3 | CC-MAIN-2017-51 | latest | en | 0.300499 |
https://www.hammockforums.net/forum/showthread.php/34548-Math-help-for-a-DTY-project?s=20427a8089ef819ff053929a836a97c3&p=487006 | 1,430,913,280,000,000,000 | text/html | crawl-data/CC-MAIN-2015-18/segments/1430458535897.50/warc/CC-MAIN-20150501053535-00086-ip-10-235-10-82.ec2.internal.warc.gz | 878,278,235 | 16,425 | # Thread: Math help for a DTY project
1. ## Math help for a DIY project
I need to figure out a length of a cylinder
I know the volume (800ci) and the radius, (1.5")
I need to know the height/length
If:
V=3.14 x r2 x h
800ci = 3.14 x 2.25 x h
800 = 7.065 x h
or 800/7.065 = h
h = 113.23"
? ? ?
Thanks eh
2. 37.72562 ft
that sound right?
I need to figure out a length of a cylinder
I know the volume (800ci) and the radius, (1.5")
I need to know the height/length
If:
V=3.14 x r2 x h
800ci = 3.14 x 2.25 x h
800 = 7.065 x h
or 800/7.065 = h
h = 113.23"
? ? ?
Thanks eh
Checks out to me. if you use PI to a few more digits you get h=113.17" But maybe Pi = 3.14 in Canada....
4. no no 9.41 ft... about 113 inches long
5. yeah that's right... i messed up in my first one ;o) but i quickly saw my mistake
I need to figure out a length of a cylinder
I know the volume (800ci) and the radius, (1.5")
I need to know the height/length
If:
V=3.14 x r2 x h
800ci = 3.14 x 2.25 x h
800 = 7.065 x h
or 800/7.065 = h
h = 113.23"
? ? ?
Thanks eh
I'm no math scholar
Edited to let that statement stand by itself as ontological proof of it's truth.
7. DTY?
lengthen your message to at least 10 characters.
8. 113.177 Inches
Pi * r ^ 2 * h = v
Pi * 1.5 ^2 * h = 800
h = v / ( Pi * r ^2 )
h = 800 / (Pi * 1.5 ^2)
h = 800 / 7.06858
h = 113.177
( really roughly: )
h = ~ 113 3/16
9. I think it is a little more than 113, but I didn't use math. It came to me in a premonition.
.
10. What is this "math" you speak of?????? | 574 | 1,524 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.96875 | 4 | CC-MAIN-2015-18 | latest | en | 0.887461 |
https://replicationindex.com/category/neyman-pearson/ | 1,679,940,108,000,000,000 | text/html | crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00301.warc.gz | 514,166,756 | 40,210 | # Replicability 101: How to interpret the results of replication studies
Even statistically sophisticated psychologists struggle with the interpretation of replication studies (Maxwell et al., 2015). This article gives a basic introduction to the interpretation of statistical results within the Neyman Pearson approach to statistical inferences.
I make two important points and correct some potential misunderstandings in Maxwell et al.’s discussion of replication failures. First, there is a difference between providing sufficient evidence for the null-hypothesis (evidence of absence) and providing insufficient evidence against the null-hypothesis (absence of evidence). Replication studies are useful even if they simply produce absence of evidence without evidence that an effect is absent. Second, I point out that publication bias undermines the credibility of significant results in original studies. When publication bias is present, open replication studies are valuable because they provide an unbiased test of the null-hypothesis, while original studies are rigged to reject the null-hypothesis.
DEFINITION OF REPLICATING A STATISTICAL RESULT
Replicating something means to get the same result. If I make the first free throw, replicating this outcome means to also make the second free throw. When we talk about replication studies in psychology we borrow from the common meaning of the term “to replicate.”
If we conduct psychological studies, we can control many factors, but some factors are not under our control. Participants in two independent studies differ from each other and the variation in the dependent variable across samples introduces sampling error. Hence, it is practically impossible to get identical results, even if the two studies are exact copies of each other. It is therefore more complicated to compare the results of two studies than to compare the outcome of two free throws.
To determine whether the results of two studies are identical or not, we need to focus on the outcome of a study. The most common outcome in psychological studies is a significant or non-significant result. The goal of a study is to produce a significant result and for this reason a significant result is often called a success. A successful replication study is a study that also produces a significant result. Obtaining two significant results is akin to making two free throws. This is one of the few agreements between Maxwell and me.
“Generally speaking, a published original study has in all likelihood demonstrated a statistically significant effect. In the current zeitgeist, a replication study is usually interpreted as successful if it also demonstrates a statistically significant effect.” (p. 488)
The more interesting and controversial scenario is a replication failure. That is, the original study produced a significant result (success) and the replication study produced a non-significant result (failure).
I propose that a lot of confusion arises from the distinction between original and replication studies. If a replication study is an exact copy of the first study, the outcome probabilities of original and replication studies are identical. Otherwise, the replication study is not really a replication study.
There are only three possible outcomes in a set of two studies: (a) both studies are successful, (b) one study is a success and one is a failure, or (c) both studies are failures. The probability of these outcomes depends on the significance criterion (the type-I error probability) when the null-hypothesis is true and on the statistical power of a study when the null-hypothesis is false.
Table 1 shows the probability of the outcomes in two studies. The uncontroversial scenario of two significant results is very unlikely, if the null-hypothesis is true. With conventional alpha = .05, the probability is .0025 or 1 out of 400 attempts. This shows the value of replication studies. False positives are unlikely to repeat themselves and a series of replication studies with significant results is unlikely to occur by chance alone.
| | 2 sig, 0 ns | 1 sig, 1 ns | 0 sig, 2 ns |
| --- | --- | --- | --- |
| H0 is True | alpha^2 | 2 × alpha × (1 - alpha) | (1 - alpha)^2 |
| H1 is True | (1 - beta)^2 | 2 × (1 - beta) × beta | beta^2 |
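The entries in Table 1 follow from the independence of the two studies: each cell is a product of per-study outcome probabilities. A minimal sketch in plain Python (alpha and power are free parameters; .05 and .80 below are just illustrative values):

```python
def outcome_probs(alpha, power):
    """Probabilities of the three outcomes for a pair of independent,
    identical studies: both significant, mixed, both non-significant."""
    beta = 1 - power  # type-II error probability
    return {
        "H0": {"2 sig": alpha ** 2,
               "1 sig, 1 ns": 2 * alpha * (1 - alpha),
               "0 sig": (1 - alpha) ** 2},
        "H1": {"2 sig": power ** 2,
               "1 sig, 1 ns": 2 * power * beta,
               "0 sig": beta ** 2},
    }

probs = outcome_probs(alpha=0.05, power=0.80)
for hyp, row in probs.items():
    print(hyp, {k: round(v, 4) for k, v in row.items()})
# H0 {'2 sig': 0.0025, '1 sig, 1 ns': 0.095, '0 sig': 0.9025}
# H1 {'2 sig': 0.64, '1 sig, 1 ns': 0.32, '0 sig': 0.04}
```

Each row sums to 1, which is a quick way to check that no outcome has been double-counted.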
The probability of a successful replication of a true effect is a function of statistical power (1 – type-II error probability). High power is needed to get significant results in a pair of studies (an original study and a replication study). For example, if power is only 50%, the chance of this outcome is only 25% (Schimmack, 2012). Even with conventionally acceptable power of 80%, only about 2/3 (64%) of replication attempts would produce this outcome. However, studies in psychology do not have 80% power and estimates of power can be as low as 37% (OSC, 2015). With 40% power, a pair of studies would produce significant results in no more than 16 out of 100 attempts. Although successful replications of true effects with low power are unlikely, they are still much more likely than significant results when the null-hypothesis is true (16/100 vs. 1/400 = 64:1). It is therefore reasonable to infer from two significant results that the null-hypothesis is false.
If the null-hypothesis is true, it is extremely likely that both studies produce a non-significant result (.95^2 = 90.25%). In contrast, it is unlikely that even a pair of studies with modest power would produce two non-significant results. For example, if power is 50%, there is a 75% chance that at least one of the two studies produces a significant result. If power is 80%, the probability of obtaining two non-significant results is only 4%. This means that it is much more likely (22.5:1) that the null-hypothesis is true than that the alternative hypothesis is true. This does not mean that the null-hypothesis is true in an absolute sense because power depends on the effect size. For example, if 80% power were obtained with a standardized effect size of Cohen’s d = .5, two non-significant results would suggest that the effect size is smaller than .5, but it does not warrant the conclusion that H0 is true and the effect size is exactly 0. Once more, it is important to distinguish between the absence of evidence for an effect and the evidence of absence of an effect.
The most controversial scenario assumes that the two studies produced inconsistent outcomes. Although theoretically there is no difference between the first and the second study, it is common to focus on a successful outcome followed by a replication failure (Maxwell et al., 2015). When the null-hypothesis is true, the probability of this outcome is low; .05 * (1-.05) = .0475. The same probability exists for the reverse pattern that a non-significant result is followed by a significant one. A probability of 4.75% shows that it is unlikely to observe a significant result followed by a non-significant result when the null-hypothesis is true. However, the low probability is mostly due to the low probability of obtaining a significant result in the first study, while the replication failure is extremely likely.
Although inconsistent results are unlikely when the null-hypothesis is true, they can also be unlikely when the null-hypothesis is false. The probability of this outcome depends on statistical power. A pair of studies with very high power (95%) is very unlikely to produce an inconsistent outcome because both studies are expected to produce a significant result. The probability of this rare event can be as low as, or lower than, the probability with a true null effect; .95 * (1-.95) = .0475. Thus, an inconsistent result provides little information about the probability of a type-I or type-II error and is difficult to interpret.
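The worked numbers in the preceding paragraphs are simple products of per-study outcome probabilities; a quick check (alpha = .05; power values of .40, .80, and .95 as in the text):

```python
alpha = 0.05

# Two significant results: rare under H0, common under H1 even at 40% power
p2sig_h0 = alpha ** 2          # 0.0025, i.e. 1 in 400
p2sig_h1 = 0.40 ** 2           # 0.16, i.e. 16 in 100
print(round(p2sig_h1 / p2sig_h0))     # 64  -> the 64:1 ratio

# Two non-significant results: H0 vs. H1 with 80% power
p2ns_h0 = (1 - alpha) ** 2     # 0.9025
p2ns_h1 = (1 - 0.80) ** 2      # 0.04
print(round(p2ns_h0 / p2ns_h1, 1))    # 22.6 -> roughly the 22.5:1 ratio

# Ordered inconsistent outcome (significant, then non-significant)
print(round(alpha * (1 - alpha), 4))  # 0.0475 under H0
print(round(0.95 * (1 - 0.95), 4))    # 0.0475 under H1 with 95% power
```

The last two lines make the key point concrete: an inconsistent pair is exactly as probable under a true null as under a true effect studied with 95% power, so by itself it cannot discriminate between the two hypotheses.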
In conclusion, a pair of significance tests can produce three outcomes. All three outcomes can occur when the null-hypothesis is true and when it is false. Inconsistent outcomes are likely unless the null-hypothesis is true or the null-hypothesis is false and power is very high. When two studies produce inconsistent results, statistical significance provides no basis for statistical inferences.
META-ANALYSIS
The counting of successes and failures is an old way to integrate information from multiple studies. This approach has low power and is no longer used. A more powerful approach is effect size meta-analysis. Effect size meta-analysis was one way to interpret replication results in the Open Science Collaboration (2015) reproducibility project. Surprisingly, Maxwell et al. (2015) do not consider this approach to the interpretation of failed replication studies. To be clear, Maxwell et al. (2015) mention meta-analysis, but they are talking about meta-analyzing a larger set of replication studies, rather than meta-analyzing the results of an original and a replication study.
“This raises a question about how to analyze the data obtained from multiple studies. The natural answer is to use meta-analysis.” (p. 495)
I am going to show that effect-size meta-analysis solves the problem of interpreting inconsistent results in pairs of studies. Importantly, effect size meta-analysis does not care about significance in individual studies. A meta-analysis of a pair of studies with inconsistent results is no different from a meta-analysis of a pair of studies with consistent results.
Maxwell et al. (2015) introduced an example of a between-subject (BS) design with n = 40 per group (total N = 80) and a standardized effect size of Cohen’s d = .5 (a medium effect size). This study has 59% power to obtain a significant result. Thus, it is quite likely that a pair of studies produces inconsistent results (48.38%). However, a pair of studies, each with N = 80, provides the same amount of information as a single study with a total sample size of N = 160, which means a fixed-effects meta-analysis will produce a significant result in 88% of all attempts. Thus, it is not difficult at all to interpret the results of pairs of studies with inconsistent results if the studies have acceptable power (> 50%). Even if the results are inconsistent, a meta-analysis will provide the correct answer that there is an effect most of the time.
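The power figures in this example can be reproduced from the noncentral t distribution. The sketch below uses scipy (assumed available) and, like the text, treats the meta-analytic power as the power of a single test on the pooled sample size:

```python
from scipy import stats

def two_sample_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test with equal group sizes."""
    df = 2 * n_per_group - 2
    ncp = d * (n_per_group / 2) ** 0.5       # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

p_single = two_sample_power(d=0.5, n_per_group=40)   # ~ .59, as quoted
p_mixed = 2 * p_single * (1 - p_single)              # ~ .48, an inconsistent pair
p_pooled = two_sample_power(d=0.5, n_per_group=80)   # ~ .88, pooled N = 160
print(round(p_single, 3), round(p_mixed, 3), round(p_pooled, 3))
```

So even though almost half of all study pairs are expected to be inconsistent at this power level, a fixed-effects meta-analysis of the pair detects the effect nearly nine times out of ten.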
A more interesting scenario is inconsistent results when the null-hypothesis is true. I turned to simulations to examine this scenario more closely. The simulation showed that a meta-analysis of inconsistent studies produced a significant result in 34% of all cases. The percentage varies slightly as a function of sample size. With a small sample of N = 40, the percentage is 35%. With a large sample of 1,000 participants it is 33%. This finding shows that in two-thirds of attempts, a failed replication reverses the inference about the null-hypothesis based on a significant original study. Thus, if an original study produced a false-positive result, a failed replication study corrects this error in 2 out of 3 cases. Importantly, this finding does not warrant the conclusion that the null-hypothesis is true. It merely reverses the result of the original study that falsely rejected the null-hypothesis.
In conclusion, meta-analysis of effect sizes is a powerful tool to interpret the results of replication studies, especially failed replication studies. If the null-hypothesis is true, failed replication studies can reduce false positives by 66%.
DIFFERENCES IN SAMPLE SIZES
We can all agree that, everything else being equal, larger samples are better than smaller samples (Cohen, 1990). This rule applies equally to original and replication studies. Sometimes it is recommended that replication studies should use much larger samples than original studies, but it is not clear to me why researchers who conduct replication studies should have to invest more resources than original researchers. If original researchers conducted studies with adequate power, an exact replication study with the same sample size would also have adequate power. If the original study was a type-I error, the replication study is unlikely to replicate the result no matter what the sample size. As demonstrated above, even a replication study with the same sample size as the original study can be effective in reversing false rejections of the null-hypothesis.
From a meta-analytic perspective, it does not matter whether a replication study had a larger or smaller sample size. Studies with larger sample sizes are given more weight than studies with smaller samples. Thus, researchers who invest more resources are rewarded by giving their studies more weight. Large original studies require large replication studies to reverse false inferences, whereas small original studies require only small replication studies to do the same. Nevertheless, failed replications with larger samples are more likely to reverse false rejections of the null-hypothesis, but there is no magic threshold that the sample size of a replication study must exceed to be useful.
I simulated a scenario with a sample size of N = 80 in the original study and a sample size of N = 200 in the replication study (a factor of 2.5). In this simulation, only 21% of meta-analyses produced a significant result. This is 13 percentage points lower than in the simulation with equal sample sizes (34%). If the sample size of the replication study is 10 times larger (N = 80 and N = 800), the percentage of remaining false positive results in the meta-analysis shrinks to 10%.
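The reversal rates described above can be approximated with a short Monte Carlo sketch. The version below is a stand-in for the original simulation, not a reconstruction of it: it assumes the large-sample normal approximation for a standardized mean difference (sampling variance ≈ 4/N) and a fixed-effect, inverse-variance meta-analysis, so its percentages land close to, but not exactly on, the figures quoted in the text:

```python
import random

def meta_sig_rate(n_orig, n_rep, z_crit=1.96, n_sim=200_000, seed=1):
    """Under a true null: among pairs where the original study is significant
    and the replication is not, the fraction whose fixed-effect meta-analysis
    is still significant (i.e., the false positive survives)."""
    rng = random.Random(seed)
    se1, se2 = (4 / n_orig) ** 0.5, (4 / n_rep) ** 0.5  # SE of d, roughly 2/sqrt(N)
    w1, w2 = se1 ** -2, se2 ** -2                       # inverse-variance weights
    kept = survived = 0
    for _ in range(n_sim):
        d1, d2 = rng.gauss(0, se1), rng.gauss(0, se2)   # true effect is zero
        if abs(d1 / se1) > z_crit and abs(d2 / se2) <= z_crit:
            kept += 1
            z_meta = (w1 * d1 + w2 * d2) / (w1 + w2) ** 0.5
            if abs(z_meta) > z_crit:
                survived += 1
    return survived / kept

for n_rep in (80, 200, 800):
    print(n_rep, round(meta_sig_rate(80, n_rep), 2))  # article reports .34, .21, .10
```

The qualitative pattern is the point: the larger the replication sample relative to the original, the smaller the fraction of false positives that survive the meta-analysis, but even an equal-sized replication removes most of them.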
The main conclusion is that even replication studies with the same sample size as the original study have value and can help to reverse false positive findings. Larger sample sizes simply give replication studies more weight than original studies, but it is by no means necessary to increase sample sizes of replication studies to make replication failures meaningful. Given unlimited resources, larger replications are better, but these analyses show that large replication studies are not necessary. A replication study with the same sample size as the original study is more valuable than no replication study at all.
CONFUSING ABSENCE OF EVIDENCE WITH EVIDENCE OF ABSENCE
One problem in Maxwell et al.’s (2015) article is that it conflates two possible goals of replication studies. One goal is to probe the robustness of the evidence against the null-hypothesis. If the original result was a false positive result, an unsuccessful replication study can reverse the initial inference and produce a non-significant result in a meta-analysis. This finding would mean that evidence for an effect is absent. The status of a hypothesis (e.g., humans have supernatural abilities; Bem, 2011) is back to where it was before the original study found a significant result, and the burden of proof is shifted back to proponents of the hypothesis to provide unbiased credible evidence for it.
Another goal of replication studies can be to provide conclusive evidence that an original study reported a false positive result (i.e., humans do not have supernatural abilities). Throughout their article, Maxwell et al. assume that the goal of replication studies is to prove the absence of an effect. They make many correct observations about the difficulties of achieving this goal, but it is not clear why replication studies have to be conclusive when original studies are not held to the same standard.
This makes it easy to produce (potentially false) positive results and very hard to remove false positive results from the literature. It also creates a perverse incentive to conduct underpowered original studies and to claim victory when a large replication study finds a significant result with an effect size that is 90% smaller than the effect size in the original study. The authors of the original article may claim that they do not care about effect sizes and that their theoretical claim was supported. To avoid a situation in which replication researchers have to invest large amounts of resources for little gain, it is important to realize that even a failure to replicate an original finding with the same sample size can undermine original claims and force researchers to provide stronger evidence for their original ideas in original articles. If they are right and the evidence is strong, others will be able to replicate the result in an exact replication study with the same sample size.
THE DIRTY BIG SECRET
The main problem of Maxwell et al.’s (2015) article is that the authors blissfully ignore the problem of publication bias. They mention publication bias twice to warn readers that publication bias inflates effect sizes and biases power analyses, but they completely ignore the influence of publication bias on the credibility of successful original results (Schimmack, 2012; Sterling, 1959; Sterling et al., 1995).
It is hard to believe that Maxwell is unaware of this problem, if only because Maxwell was action editor of my article that demonstrated how publication bias undermines the credibility of replication studies that are selected for significance (Schimmack, 2012).
I used Bem’s infamous article on supernatural abilities as an example, which appeared to show 8 successful replications of supernatural abilities. Ironically, Maxwell et al. (2015) also cite Bem’s article to argue that failed replication studies can be misinterpreted as evidence of absence of an effect.
“Similarly, Ritchie, Wiseman, and French (2012) state that their failure to obtain significant results in attempting to replicate Bem (2011) “leads us to favor the ‘experimental artifacts’ explanation for Bem’s original result” (p. 4)”
This quote is not only an insult to Ritchie et al.; it also ignores the concerns that have been raised about Bem’s research practices. First, Ritchie et al. do not claim that they have provided conclusive evidence against ESP. They merely express their own opinion that they “favor the ‘experimental artifacts’ explanation.” There is nothing wrong with this statement, even if it is grounded in a healthy skepticism about supernatural abilities.
More important, Maxwell et al. ignore the broader context of these studies. Schimmack (2012) discussed many questionable practices in Bem’s original studies and I presented statistical evidence that the significant results in Bem’s article were obtained with the help of questionable research practices. Given this wider context, it is entirely reasonable to favor the experimental artifact explanation over the alternative hypothesis that learning after an exam can still alter the exam outcome.
It is not clear why Maxwell et al. (2015) picked Bem’s article to discuss problems with failed replication studies while ignoring that questionable research practices undermine the credibility of significant results in original research articles. One reason why failed replication studies are so credible is that insiders know how incredible some original findings are.
Maxwell et al. (2015) were not aware that in the same year, the OSC (2015) reproducibility project would replicate only 37% of statistically significant results in top psychology journals, while the apparent success rate in these journals is over 90%. The stark contrast between the apparent success rate and the true power to produce successful outcomes in original studies provided strong evidence that psychology is suffering from a replication crisis. This does not mean that all original findings that failed to replicate are false positives, but it does mean that it is not clear which findings are false positives and which findings are not.
Publication bias also undermines the usefulness of meta-analysis for hypothesis testing. In the OSC reproducibility project, a meta-analysis of original and replication studies produced 68% significant results. This result is meaningless because publication bias inflates effect sizes and the probability of obtaining a false positive result in the meta-analysis. Thus, when publication bias is present, unbiased replication studies provide the most credible evidence, and the large number of replication failures means that more replication studies with larger samples are needed to see which hypotheses predict real effects with practical significance.
DOES PSYCHOLOGY HAVE A REPLICATION CRISIS?
Maxwell et al.’s (2015) answer to this question is captured in this sentence: “Despite raising doubts about the extent to which apparent failures to replicate necessarily reveal that psychology is in crisis, we do not intend to dismiss concerns about documented methodological flaws in the field” (p. 496). The most important part of this quote is “raising doubt”; the rest is Orwellian double-talk.
The whole point of Maxwell et al.’s article is to assure fellow psychologists that psychology is not in crisis and that failed replication studies should not be a major concern. As I have pointed out, this conclusion is based on misconceptions about the purpose of replication studies and on blissful ignorance of the publication bias and questionable research practices that made it possible to publish successful replications of supernatural phenomena, while discrediting authors who spend time and resources on demonstrating that unbiased replication studies fail.
The real answer to Maxwell et al.’s question was provided by the OSC (2015) finding that only 37% of published significant results could be replicated. In my opinion that is not only a crisis, but a scandal, because psychologists routinely apply for funding with power analyses that claim 80% power. The reproducibility project shows that the true power to obtain significant results in original and replication studies is much lower than this and that the 90% success rate is no more meaningful than 90% of votes for a candidate in communist elections.
In the end, Maxwell et al. draw the misleading conclusion that “the proper design and interpretation of replication studies is less straightforward than conventional practice would suggest.” They suggest that “most importantly, the mere fact that a replication study yields a nonsignificant statistical result should not by itself lead to a conclusion that the corresponding original study was somehow deficient and should no longer be trusted.”
As I have demonstrated, this is exactly the conclusion that readers should draw from failed replication studies, especially if (a) the original study was not preregistered, (b) the original study produced weak evidence (e.g., p = .04), (c) the original study was published in a journal that only publishes significant results, (d) the replication study had a larger sample, (e) the replication study would have been published independent of outcome, and (f) the replication study was preregistered.
We can only speculate why the American Psychologist published a flawed and misleading article that gives original studies the benefit of the doubt and casts doubt on the value of replication studies when they fail. Fortunately, APA can no longer control what is published because scientists can avoid the censorship of peer-reviewed journals by publishing blogs and by criticizing peer-reviewed articles in open post-publication peer review on social media.
Long live the replicability revolution!
REFERENCES
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304-1312.
http://dx.doi.org/10.1037/0003-066X.45.12.1304
Maxwell, S.E, Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does ‘failure to replicate’ really mean? American Psychologist, 70, 487-498. http://dx.doi.org/10.1037/a0039400.
Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods, 17(4), 551-566. http://dx.doi.org/10.1037/a0029487
# On the Definition of Statistical Power
D1: In plain English, statistical power is the likelihood that a study will detect an effect when there is an effect there to be detected. If statistical power is high, the probability of making a Type II error, or concluding there is no effect when, in fact, there is one, goes down (first hit on Google)
D2: The power or sensitivity of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis (H0) when the alternative hypothesis (H1) is true. (Wikipedia)
D3: The probability of not committing a Type II error is called the power of a hypothesis test. (Stat Trek)
The concept of statistical power arose from Neyman and Pearson’s approach to statistical inference. Neyman and Pearson distinguished between two types of errors that can occur when a researcher draws conclusions about a population from observations in a sample. The first error (type-I error) is to infer a systematic relationship (in tests of causality this is an effect) when no relationship (no effect) exists. This error is also known as a false positive, as in a pregnancy test that shows a positive result (pregnant) when a woman is not pregnant. The second error (type-II error) is to fail to detect a systematic relationship that actually exists. This error is also known as a false negative, as when a pregnancy test shows a negative result (not pregnant) when a woman is actually pregnant.
Ideally researchers would never make type-I or type-II errors, but it is inevitable that researchers will make both types of mistakes. However, researchers have some control over the probability of making these two mistakes. Statistical power is simply the probability of not making a type-II mistake; that is, the probability of avoiding negative results when an effect is present.
Many definitions of statistical power imply that the probability of avoiding a type-II error is equivalent to the long-run frequency of statistically significant results because statistical significance is used to decide whether an effect is present or not. By definition, statistically non-significant results are negative results when an effect exists in the population. However, it does not automatically follow that all significant results are positive results when an effect is present. Significant results and positive results are only identical in one-sided hypothesis tests. For example, if the hypothesis is that men are taller than women and a one-sided statistical test is used, only results that show a greater mean for men than for women can be significant. A study that shows a large difference in the opposite direction would not produce a significant result, no matter how large the difference is.
The equivalence between significant results and positive results no longer holds in the more commonly used two-tailed tests of statistical significance. In this case, the relationship in the population is either positive or negative. It cannot be both positive and negative. Only significant results that also show the correct direction of the effect (either as predicted a priori or as demonstrated by consistency with the majority of other significant results) are positive results. Significant results in the other direction are false positive results in that they show a false effect, which becomes visible in a two-tailed test only when the sign of the effect is taken into account.
How important is the distinction between the rate of positive results and the rate of significant results in a two-tailed test? Actually, it is not very important. The largest number of false positive results is obtained when no effect exists at all. If the 5% significance criterion is used, no more than 5% of tests will produce false positive results. It will also become apparent after some time that there is no effect because half the studies will show a positive effect and the other half will show a negative effect. The inconsistency in the sign of the effect shows that significant results are not caused by a systematic relation. As the power of a test increases, more and more significant results will have the correct sign and fewer and fewer results will be false positives. The picture on top shows an example with 13% power. As can be seen, most of this percentage comes from the fat right tail of the blue distribution. However, a small portion comes from the left tail that is more extreme than the criterion for significance (the green line).
For a study with 50% power, the probability of producing a true positive result (a significant result with the correct sign) is 50%. The probability of a false-positive result (a significant result with the wrong sign) is 0 to the second decimal, but not exactly zero (~0.05%). In other words, even in studies with modest power, false positive results have a negligible effect. A much bigger concern is that 50% of results are expected to be false negative results.
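Under a normal approximation, these two tail areas can be computed directly. The sketch below is my own illustration; the noncentrality values are chosen to roughly match the power levels discussed in the text:

```python
from math import erfc, sqrt

def norm_sf(x):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * erfc(x / sqrt(2))

def tail_rates(ncp, crit=1.96):
    """Split two-tailed power into true positives (significant, correct sign)
    and sign errors (significant, wrong sign) for a z-test with
    noncentrality parameter ncp > 0."""
    true_pos = norm_sf(crit - ncp)  # right tail beyond +crit
    sign_err = norm_sf(crit + ncp)  # left tail beyond -crit
    return true_pos, sign_err

# ~13% total power: almost all of it comes from the correct-sign tail
print(tail_rates(0.83))   # roughly (0.13, 0.003)

# 50% power: the wrong-sign tail is vanishingly small (well below 0.1%)
print(tail_rates(1.96))
```

The second call confirms the point above: at 50% power the probability of a significant result with the wrong sign is negligible, while the probability of a false negative is a full 50%.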
In conclusion, the sign of an effect matters. Two-tailed significance testing ignores the sign of an effect. Power is the long-run probability of obtaining a significant result with the correct sign. This probability is identical to the probability of a statistically significant result in a one-tailed test with the same critical value (i.e., a one-tailed test at α/2). It is not identical to the probability of a statistically significant result in a two-tailed test, but for practical purposes the difference is negligible. Nevertheless, it is probably most accurate to use a definition that is equally applicable to one-tailed and two-tailed tests.
D4: Statistical power is the probability of drawing the correct conclusion from a statistically significant result when an effect is present. If the effect is positive, the correct inference is that a positive effect exists. If an effect is negative, the correct inference is that a negative effect exists. When the inference is that the effect is negative (positive), but the effect is positive (negative), a statistically significant result does not count towards the power of a statistical test.
This definition differs from other definitions of power because it distinguishes between true positive and false positive results. Other definitions of power treat all non-negative results (false positive and true positive) as equivalent.
# Why Psychologists Should Not Change The Way They Analyze Their Data: The Devil is in the Default Prior
The scientific method is well-equipped to demonstrate regularities in nature as well as human behaviors. It works by repeating a scientific procedure (experiment or natural observation) many times. In the absence of a regular pattern, the empirical data will follow a random pattern. When a systematic pattern exists, the data will deviate from the pattern predicted by randomness. The deviation of an observed empirical result from a predicted random pattern is often quantified as a probability (p-value). The p-value itself is based on the ratio of the observed deviation from zero (effect size) and the amount of random error. As the signal-to-noise ratio increases, it becomes increasingly unlikely that the observed effect is simply a random event. As a result, it becomes more likely that an effect is present. The amount of noise in a set of observations can be reduced by repeating the scientific procedure many times. As the number of observations increases, noise decreases. For strong effects (large deviations from randomness), a relatively small number of observations can be sufficient to produce extremely low p-values. However, for small effects it may require rather large samples to obtain a high signal-to-noise ratio that produces a very small p-value. This makes it difficult to test the null-hypothesis that there is no effect. The reason is that it is always possible to find an effect size that is so small that the noise in a study is too large to determine whether a small effect is present or whether there is really no effect at all; that is, whether the effect size is exactly zero.
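The relation between effect size, sample size, and the p-value can be illustrated with a one-sample z-test under the normal approximation (a generic sketch of my own, not tied to any particular study):

```python
from math import erfc, sqrt

def two_tailed_p(d, n):
    """Two-tailed p-value for a one-sample z-test of a standardized
    effect size d observed in a sample of size n (normal approximation:
    z = d * sqrt(n), p = 2 * (1 - Phi(z)))."""
    z = abs(d) * sqrt(n)
    return erfc(z / sqrt(2))

# A small effect of d = .1 needs large samples before p becomes very small
for n in (100, 1_000, 10_000):
    print(n, two_tailed_p(0.1, n))
```

For d = .1 the p-value is about .32 at N = 100 but shrinks rapidly as N grows; an effect size of exactly zero, by contrast, can never be proven, because some effect size small enough to hide in the remaining noise always exists.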
The problem that it is impossible to demonstrate scientifically that an effect is absent may explain why the scientific method has been unable to resolve conflicting views around controversial topics such as the existence of parapsychological phenomena or homeopathic medicine that lack a scientific explanation, but are believed by many to be real phenomena. The scientific method could show that these phenomena are real, if they were real, but the lack of evidence for these effects cannot rule out the possibility that a small effect may exist. In this post, I explore two statistical solutions to the problem of demonstrating that an effect is absent.
Neyman-Pearson Significance Testing (NPST)
The first solution is to follow Neyman-Pearson’s orthodox significance test. NPST differs from the widely practiced null-hypothesis significance test (NHST) in that non-significant results are interpreted as evidence for the null-hypothesis. Thus, using the standard criterion of p = .05, a p-value below .05 is used to reject the null-hypothesis and to infer that an effect is present. Importantly, if the p-value is greater than .05, the results are used to accept the null-hypothesis; that is, to conclude that there is no effect. As with all statistical inferences, it is possible that the evidence is misleading and leads to the wrong conclusion. NPST distinguishes between two types of errors that are called type-I and type-II errors. Type-I errors occur when a p-value is below the criterion value (p < .05), but the null-hypothesis is actually true; that is, there is no effect and the observed effect size was caused by a rare random event. Type-II errors occur when the null-hypothesis is accepted, but the null-hypothesis is false; there actually is an effect. The probability of making a type-II error depends on the size of the effect and the amount of noise in the data. Strong effects are unlikely to produce a type-II error even with noisy data. Studies with very little noise are also unlikely to produce type-II errors because even small effects can still produce a high signal-to-noise ratio and significant results (p-values below the criterion value). Type-II error rates can be very high in studies with small effects and a large amount of noise. NPST makes it possible to quantify the probability of a type-II error for a given effect size. By investing a large amount of resources, it is possible to reduce noise to a level that is sufficient to have a very low type-II error probability for very small effect sizes.
The only requirement for using NPST to provide evidence for the null-hypothesis is to determine a margin of error that is considered acceptable. For example, it may be acceptable to infer that a weight-loss-medication has no effect on weight if weight loss is less than 1 pound over a one month period. It is impossible to demonstrate that the medication has absolutely no effect, but it is possible to demonstrate with high probability that the effect is unlikely to be more than 1 pound.
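Given a smallest effect size of interest (e.g., the standardized equivalent of the 1-pound margin) and an acceptable type-II error rate, the required sample size follows from the usual normal-approximation power formula. The sketch below is a generic illustration of this logic; the function names and the standard-library quantile routine are my own:

```python
from math import ceil, erfc, sqrt

def z_quantile(p):
    """Standard-normal quantile by bisection (avoids external libraries)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        # CDF of the standard normal via the complementary error function
        if 0.5 * erfc(-mid / sqrt(2)) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(d_min, alpha=0.05, power=0.95):
    """Per-group sample size so a two-sample z-test has the given power
    against a smallest standardized effect of interest d_min."""
    z_a = z_quantile(1 - alpha / 2)
    z_b = z_quantile(power)
    return ceil(2 * ((z_a + z_b) / d_min) ** 2)

print(n_per_group(0.2))  # several hundred per group for a small margin
print(n_per_group(0.5))  # far fewer for a larger margin of error
```

The smaller the margin of error that is considered acceptable, the larger the required sample; a margin of exactly zero would require an infinite sample, which is why NPST demands an explicit smallest effect size of interest.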
Bayes-Factors
The main difference between Bayes-Factors and NPST is that NPST yields type-II error rates for an a priori effect size. In contrast, Bayes-Factors do not postulate a single effect size, but use an a priori distribution of effect sizes. Bayes-Factors are based on the probability that the observed effect size reflects a true effect size of zero relative to the probability that the observed effect size was produced by a true effect size within a range of a priori effect sizes. Bayes-Factors are the ratio of the probabilities for the two hypotheses. It is arbitrary which hypothesis is in the numerator and which hypothesis is in the denominator. When the null-hypothesis is placed in the numerator and the alternative hypothesis is placed in the denominator, Bayes-Factors (BF01) decrease towards zero the more the data suggest that an effect is present. In this way, Bayes-Factors behave very much like p-values. As the signal-to-noise ratio increases, p-values and BF01 decrease.
There are two practical problems in the use of Bayes-Factors. One problem is that Bayes-Factors depend on the specification of the a priori distribution of effect sizes. It is therefore important to realize that results can never be interpreted as evidence for the null-hypothesis or against the null-hypothesis per se. A Bayes-Factor that favors the null-hypothesis in comparison to one a priori distribution can favor the alternative hypothesis for another a priori distribution of effect sizes. This makes Bayes-Factors impractical for the purpose of demonstrating that an effect does not exist (e.g., a drug does not have positive treatment effects). The second problem is that Bayes-Factors only provide quantitative information about the relative support for two hypotheses. Without a clear criterion value, Bayes-Factors cannot be used to claim that an effect is present or absent.
Selecting a Criterion Value for Bayes-Factors
A number of criterion values seem plausible. NPST always leads to a decision depending on the criterion for p-values. An equivalent criterion value for Bayes-Factors would be a value of 1. Values greater than 1 favor the null-hypothesis over the alternative, whereas values less than 1 favor the alternative hypothesis. This criterion avoids inconclusive results. The disadvantage of this criterion is that Bayes-Factors close to 1 are very variable and prone to have high type-I and type-II error rates. To avoid this problem, it is possible to use more stringent criterion values. This reduces the type-I and type-II error rates, but it also increases the rate of inconclusive results in noisy studies. Bayes-Factors of 3 (a 3 to 1 ratio in favor of the null over an alternative hypothesis) are often used to suggest that the data favor one hypothesis over another, and Bayes-Factors of 10 or more are often considered strong support. One problem with these criterion values is that there have been no systematic studies of their type-I and type-II error rates. Moreover, there have been no systematic studies of sensitivity; that is, of the ability of studies to reach a criterion value for different signal-to-noise ratios.
Wagenmakers et al. (2011) argued that p-values can be misleading and that Bayes-Factors provide more meaningful results. To make their point, they investigated Bem’s (2011) controversial studies that seemed to demonstrate the ability to anticipate random events in the future (time-reversed causality). Using a significance criterion of p < .05 (one-tailed), 9 out of 10 studies showed evidence of an effect. For example, in Study 1, participants were able to predict the location of erotic pictures 54% of the time, even before a computer randomly generated the location of the picture. Using a more liberal type-I error rate of p < .10 (one-tailed), all 10 studies produced evidence for extrasensory perception.
Wagenmakers et al. (2011) re-examined the data with Bayes-Factors. They used a Bayes-Factor of 3 as the criterion value. Using this value, six tests were inconclusive, three provided substantial support for the null-hypothesis (the observed effect was just due to noise in the data) and only one test produced substantial support for ESP. The most important point here is that the authors interpreted their results using a Bayes-Factor of 3 as criterion. If they had used a Bayes-Factor of 10 as criterion, they would have concluded that all studies were inconclusive. If they had used a Bayes-Factor of 1 as criterion, they would have concluded that 6 studies favored the null-hypothesis and 4 studies favored the presence of an effect.
Matzke, Nieuwenhuis, van Rijn, Slagter, van der Molen, and Wagenmakers used Bayes-Factors in a design with optional stopping. They agreed to stop data collection when the Bayes-Factor reached a criterion value of 10 in favor of either hypothesis. The implementation of a decision to stop data collection suggests that a Bayes-Factor of 10 was considered decisive. One reason for this stopping rule would be that it is extremely unlikely that a Bayes-Factor of this magnitude would swing to favoring the other hypothesis if more data were collected. By the same logic, a Bayes-Factor of 10 that favors the presence of an effect in an ESP study would suggest that further data collection is unnecessary because the evidence already provides rather strong support for an effect.
Tan, Dienes, Jansari, and Goh (2014) report a Bayes-Factor of 11.67 and interpret it as being “greater than 3 and strong evidence for the alternative over the null” (p. 19). Armstrong and Dienes (2013) report a Bayes-Factor of 0.87 and state that no conclusion follows from this finding because the Bayes-Factor is between 3 and 1/3. This statement implies that Bayes-Factors that meet the criterion value are conclusive.
In sum, a criterion-value of 3 has often been used to interpret empirical data and a criterion of 10 has been used as strong evidence in favor of an effect or in favor of the null-hypothesis.
Meta-Analysis of Multiple Studies
As sample sizes increase, noise decreases and the signal-to-noise ratio increases. Rather than increasing the sample size of a single study, it is also possible to conduct multiple smaller studies and to combine the evidence of studies in a meta-analysis. The effect is the same. A meta-analysis based on several original studies reduces random noise in the data and can produce higher signal-to-noise ratios when an effect is present. On the flip side, a low signal-to-noise ratio in a meta-analysis implies that the signal is very weak and that the true effect size is close to zero. As the evidence in a meta-analysis is based on the aggregation of several smaller studies, the results should be consistent. That is, the effect size in the smaller studies and the meta-analysis is the same. The only difference is that aggregation of studies reduces noise, which increases the signal-to-noise ratio. A meta-analysis therefore can highlight the problem of interpreting a low signal-to-noise ratio (BF10 < 1, p > .05) in small studies as evidence for the null-hypothesis. In NPST this result would be flagged as not trustworthy because the type-II error probability is high. For example, a non-significant result with a type-II error of 80% (20% power) is not particularly interesting and nobody would want to accept the null-hypothesis with such a high error probability. Holding the effect size constant, the type-II error probability decreases as the number of studies in a meta-analysis increases and it becomes increasingly more probable that the true effect size is below the value that was considered necessary to demonstrate an effect. Similarly, Bayes-Factors can be misleading in small samples and they become more conclusive as more information becomes available.
A simple demonstration of the influence of sample size on Bayes-Factors comes from Rouder and Morey (2011). The authors point out that it is not possible to combine Bayes-Factors by multiplying the Bayes-Factors of individual studies. To address this problem, they created a new method to combine Bayes-Factors. This Bayesian meta-analysis is implemented in the BayesFactor R package. Rouder and Morey (2011) applied their method to a subset of Bem’s data. However, they did not use it to examine the combined Bayes-Factor for the 10 studies that Wagenmakers et al. (2011) examined individually. I submitted the t-values and sample sizes of all 10 studies to a Bayesian meta-analysis and obtained a strong Bayes-Factor in favor of an effect, BF10 = 16e7; that is, 160 million to 1 in favor of ESP. Thus, a meta-analysis of all 10 studies strongly suggests that Bem’s data are not random.
Another way to meta-analyze Bem’s 10 studies is to compute a Bayes-Factor based on the finding that 9 out of 10 studies produced a significant result. The p-value for this outcome under the null-hypothesis is extremely small: 1.86e-11, that is, p < .00000000002. It is also possible to compute a Bayes-Factor for the binomial probability of 9 out of 10 successes with a 5% probability of a success under the null-hypothesis. The alternative hypothesis can be specified in several ways, but one common option is to use a uniform distribution from 0 to 1 (beta(1,1)). This distribution allows the power of a study to range anywhere from 0 to 1 and makes no a priori assumptions about the true power of Bem’s studies. The Bayes-Factor strongly favors the presence of an effect, BF10 = 20e9. In sum, a meta-analysis of Bem’s 10 studies strongly supports the presence of an effect and rejects the null-hypothesis.
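A back-of-the-envelope version of this calculation can be written in a few lines. This is my own sketch, not the original computation; it conditions on exactly 9 successes, and the resulting Bayes-Factor lands in the same ballpark (billions to one), with the exact value depending on how the alternative and the data (exactly 9 vs. at least 9 successes) are specified:

```python
from math import comb

k, n, p0 = 9, 10, 0.05

# Exact binomial p-value: P(X >= 9 | n = 10, p = .05)
p_value = sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Marginal likelihood of k successes under a uniform beta(1,1) prior on the
# success probability is 1 / (n + 1) for every k; the null fixes p = .05.
like_alt = 1 / (n + 1)
like_null = comb(n, k) * p0**k * (1 - p0)**(n - k)
bf10 = like_alt / like_null

print(p_value)  # about 1.86e-11, as in the text
print(bf10)     # on the order of 10^9 to 10^10 in favor of an effect
```

Either way the data are specified, the Bayes-Factor overwhelmingly favors the presence of an effect over a null-hypothesis in which each study has only a 5% chance of success.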
The meta-analytic results raise concerns about the validity of Wagenmakers et al.’s (2011) claim that Bem presented weak evidence and that p-values provide misleading information. Instead, Wagenmakers et al.’s Bayes-Factors are misleading and fail to detect an effect that is clearly present in the data.
The Devil is in the Priors: What is the Alternative Hypothesis in the Default Bayesian t-test?
Wagenmakers et al. (2011) computed Bayes-Factors using the default Bayesian t-test. The default Bayesian t-test uses a Cauchy distribution centered over zero as the alternative hypothesis. The Cauchy distribution has a scaling factor. Wagenmakers et al. (2011) used a default scaling factor of 1. Since then, the default scaling parameter has changed to .707. Figure 1 illustrates Cauchy distributions with scaling factors of .2, .5, .707, and 1.
The black line shows the Cauchy distribution with a scaling factor of d = .2. A scaling factor of d = .2 implies that 50% of the density of the distribution is in the interval between d = -.2 and d = .2. As the Cauchy distribution is centered over 0, this specification also implies that the null-hypothesis is considered much more likely than many other effect sizes, but it gives equal weight to effect sizes below and above an absolute value of d = .2. As the scaling factor increases, the distribution gets wider. With a scaling factor of 1, 50% of the density distribution is within the range from -1 to 1 and 50% covers absolute effect sizes greater than 1. The choice of the scaling parameter has predictable consequences for the Bayes-Factor. As long as the true effect size is more extreme than the scaling parameter, Bayes-Factors will favor the alternative hypothesis and will increase towards infinity as sampling error decreases. However, for true effect sizes that are below the scaling parameter, Bayes-Factors may initially favor the null-hypothesis because the alternative hypothesis gives considerable weight to effect sizes that are more extreme than the true effect size. As sample sizes increase, the Bayes-Factor will change from favoring the null-hypothesis to favoring the alternative hypothesis. This can explain why Wagenmakers et al. (2011) found no support for ESP when Bem’s studies were examined individually, whereas a meta-analysis of all studies shows strong evidence in favor of an effect.
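The default Bayesian t-test can be sketched directly from the JZS integral in Rouder et al. (2009). The reimplementation below is my own minimal version, not the BayesFactor package itself, and the behavior described above falls out of it: a small observed effect yields more support under a narrow prior than under the wide default:

```python
import numpy as np
from scipy.integrate import quad

def jzs_bf10(t, n, r=0.707):
    """One-sample JZS Bayes-Factor BF10 for t-statistic t, sample size n,
    and Cauchy prior scale r (Rouder et al., 2009): the marginal likelihood
    under the Cauchy(0, r) alternative, integrated over g, divided by the
    likelihood under the point null."""
    v = n - 1  # degrees of freedom

    def integrand(g):
        return ((1 + n * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n * g * r**2) * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    marginal_alt, _ = quad(integrand, 0, np.inf)
    marginal_null = (1 + t**2 / v) ** (-(v + 1) / 2)
    return marginal_alt / marginal_null

# An observed effect of d = .2 in N = 100 (t = 2.0): the wide prior used by
# Wagenmakers et al. (r = 1) treats this as weak evidence, while a narrow
# prior concentrated near small effects (r = .2) favors the effect more.
print(jzs_bf10(2.0, 100, r=1.0))
print(jzs_bf10(2.0, 100, r=0.2))
```

This is exactly the pattern in the figure discussion: with the prior scale set well above the true effect size, moderate samples cannot produce substantial support for a small effect, and a true effect of zero is needed to push the Bayes-Factor decisively in either direction.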
The effect of the scaling parameter on Bayes-Factors is illustrated in the following Figure.
The straight lines show Bayes-Factors (y-axis) as a function of sample size for a scaling parameter of 1. The black line shows Bayes-Factors favoring an effect of d = .2 when the effect size is actually d = .2 (BF10) and the red line shows Bayes-Factors favoring the null-hypothesis when the effect size is actually 0. The green line marks a criterion value of 3 for “substantial” support for either hypothesis (Wagenmakers et al., 2011). The figure shows that Bem’s sample sizes of 50 to 150 participants could never produce substantial evidence for an effect when the observed effect size is d = .2. In contrast, an effect size of 0 would provide substantial support for the null-hypothesis. Of course, actual effect sizes in samples will deviate from these hypothetical values, but sampling error averages out: for studies that occasionally overestimate support for an effect there will also be studies that underestimate it. The dotted lines illustrate how the choice of the scaling factor influences Bayes-Factors. With a scaling factor of d = .2, Bayes-Factors would never favor the null-hypothesis. They would also not support the alternative hypothesis in studies with fewer than 150 participants, and even in these studies the Bayes-Factor is likely to be just above 3.
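The qualitative pattern in the figure can be reproduced by computing the Bayes-Factor directly: the marginal likelihood of the observed test statistic under a Cauchy prior on d, divided by its likelihood under d = 0. The sketch below is illustrative only (not Wagenmakers et al.’s code) and uses a normal approximation to the t likelihood, which is accurate for samples of this size:

```python
import numpy as np
from scipy import stats, integrate

def bf10_cauchy(z, n, scale=1.0):
    """Approximate BF10 for a one-sample design with a Cauchy(0, scale)
    prior on d, using a normal approximation to the likelihood of the
    observed z = d_obs * sqrt(n)."""
    like = lambda d: stats.norm.pdf(z, d * np.sqrt(n), 1)
    f = lambda d: like(d) * stats.cauchy.pdf(d, 0, scale)
    # hint the integrator at the likelihood peak so it is not missed
    m1, _ = integrate.quad(f, -10, 10, points=[0.0, z / np.sqrt(n)], limit=200)
    return m1 / like(0.0)          # likelihood under H0 (d = 0) in the denominator

z_obs = 0.2 * np.sqrt(100)         # observed d = .2 with n = 100
bf_wide = bf10_cauchy(z_obs, 100, scale=1.0)    # wide default prior
bf_narrow = bf10_cauchy(z_obs, 100, scale=0.2)  # prior matched to a small effect
print(bf_wide, bf_narrow)          # wide prior: below 1; narrow prior: above 1
```

As the text argues, the same observed d = .2 is inconclusive-to-negative under the wide default prior but favors an effect once the prior is scaled to small effects.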
Figure 2 explains why Wagenmakers et al. (2011) mainly found inconclusive results. On the one hand, the effect size was typically around d = .2. As a result, the Bayes-Factor did not provide clear support for the null-hypothesis. On the other hand, an effect size of d = .2 in studies with 80% power is insufficient to produce Bayes-Factors favoring the presence of an effect when the alternative hypothesis is specified as a Cauchy distribution centered over 0. This is especially true when the scaling parameter is large, but even for a seemingly small scaling parameter Bayes-Factors would not provide strong support for a small effect. The reason is that the alternative hypothesis is centered over 0, which makes it difficult to distinguish the null-hypothesis from the alternative hypothesis.
A True Alternative Hypothesis: Centering the Prior Distribution over a Non-Null Effect Size
A Cauchy distribution is just one possible way to formulate an alternative hypothesis. It is also possible to formulate the alternative hypothesis as (a) a uniform distribution of effect sizes in a fixed range (e.g., the effect size is probably small to moderate, d = .2 to .5) or (b) a normal distribution centered over an effect size (e.g., the effect is most likely to be small, but there is some uncertainty about how small, d = .2 +/- SD = .1) (Dienes, 2014).
Dienes provided an online app to compute Bayes-Factors for these prior distributions. I used the r-code posted by John Christie to create the following figure. It shows Bayes-Factors for three a priori uniform distributions. Solid lines show Bayes-Factors for effect sizes in the range from 0 to 1. Dotted lines show effect sizes in the range from 0 to .5. The dot-line pattern shows Bayes-Factors for effect sizes in the range from .1 to .3. The most noteworthy observation is that prior distributions that are not centered over zero can actually provide evidence for a small effect with Bem’s (2011) sample sizes. The second observation is that these priors can also favor the null-hypothesis when the true effect size is zero (red lines). Bayes-Factors become more conclusive for more precisely formulated alternative hypotheses. The strongest evidence is obtained by contrasting the null-hypothesis with a narrow interval of possible effect sizes in the .1 to .3 range. The reason is that in this comparison weak effects below .1 clearly favor the null-hypothesis. For an expected effect size of d = .2, a range of values from 0 to .5 seems reasonable and can produce Bayes-Factors that exceed a value of 3 in studies with 100 to 200 participants. Thus, this is a reasonable prior for Bem’s studies.
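Interval priors of this kind can be sketched in the same way. The code below is an illustration of the idea (not Christie’s r-code), again using a normal approximation to the likelihood; narrower intervals around the true effect yield larger Bayes-Factors:

```python
import numpy as np
from scipy import stats, integrate

def bf10_uniform(z, n, lo, hi):
    """Approximate BF10 when H1 says the effect size d is uniform on [lo, hi],
    with a normal approximation to the likelihood of z = d_obs * sqrt(n)."""
    f = lambda d: stats.norm.pdf(z, d * np.sqrt(n), 1) / (hi - lo)
    m1, _ = integrate.quad(f, lo, hi, limit=200)   # marginal likelihood under H1
    return m1 / stats.norm.pdf(z, 0, 1)            # likelihood under H0

z_obs = 0.2 * np.sqrt(100)                         # observed d = .2 with n = 100
bfs = {rng: bf10_uniform(z_obs, 100, *rng) for rng in [(0, 1), (0, 0.5), (0.1, 0.3)]}
for rng, bf in bfs.items():
    print(rng, round(bf, 2))   # narrower ranges around d = .2 give larger BFs
```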
It is also possible to formulate alternative hypotheses as normal distributions around an a priori effect size. Dienes recommends setting the mean to 0 and the standard deviation to the expected effect size. The problem with this approach is again that the alternative hypothesis is centered over 0 (in a two-tailed test). Moreover, the true effect size is not known. Like the scaling factor of the Cauchy distribution, a higher standard deviation leads to a wider spread of alternative effect sizes and makes it harder to show evidence for small effects and easier to find evidence in favor of H0. However, the r-code also allows specifying non-null means for the alternative hypothesis. The next figure shows Bayes-Factors for three normally distributed alternative hypotheses. The solid lines show Bayes-Factors with mean = 0 and SD = .2. The dotted line shows Bayes-Factors for d = .2 (a small effect and the effect predicted by Bem) and a relatively wide standard deviation of .5. This means 95% of effect sizes are in the range from -.8 to 1.2. The broken (dot/dash) line shows Bayes-Factors with a mean of d = .2 and a narrower SD of d = .2. The 95% CI still covers a rather wide range of effect sizes from -.2 to .6, but due to the normal distribution, effect sizes close to the expected effect size of d = .2 are weighted more heavily.
The first observation is that centering the normal distribution over 0 leads to the same problem as the Cauchy distribution. When the effect size is really 0, Bayes-Factors provide clear support for the null-hypothesis. However, when the effect size is small, d = .2, Bayes-Factors fail to provide support for the presence of an effect in samples with fewer than 150 participants (this is a one-sample design; the equivalent sample size for between-subject designs is N = 600). The dotted line shows that simply moving the mean from d = 0 to d = .2 has relatively little effect on Bayes-Factors. Due to the wide range of effect sizes, a small effect is not sufficient to produce Bayes-Factors greater than 3 in small samples. The broken line shows more promising results. With d = .2 and SD = .2, Bayes-Factors in samples with fewer than 100 participants are inconclusive, but for sample sizes of more than 100 participants, both lines are above the criterion value of 3. This means a Bayes-Factor of 3 or more can support the null-hypothesis when it is true and can show that a small effect is present when an effect is present.
Another way to specify the alternative hypothesis is to use a one-tailed alternative hypothesis (a half-normal). The mode (the center of the normal-distribution) of the distribution is 0. The solid line shows a standard deviation of .8. The dotted line shows results with standard deviation = .5 and the broken line shows results for a standard deviation of d = .2. The solid line favors the null-hypothesis and it requires sample sizes of more than 130 participants before an effect size of d = .2 produces a Bayes-Factor of 3 or more. In contrast, the broken line discriminates against the null-hypothesis and practically never supports the null-hypothesis when it is true. The dotted line with a standard deviation of .5 works best. It always shows support for the null-hypothesis when it is true and it can produce Bayes-Factors greater than 3 with a bit more than 100 participants.
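The normal and half-normal priors discussed above can be sketched with the same machinery. The code is illustrative only; it uses a normal approximation to the likelihood, and the half-normal follows Dienes in folding a zero-mean normal onto positive effect sizes:

```python
import numpy as np
from scipy import stats, integrate

def bf10_normal(z, n, mean, sd, half=False):
    """Approximate BF10 with a normal(mean, sd) prior on d; half=True uses a
    half-normal (a zero-mean normal folded onto d >= 0)."""
    if half:
        prior = lambda d: 2 * stats.norm.pdf(d, 0, sd)   # density doubled on d >= 0
        lo = 0.0
    else:
        prior = lambda d: stats.norm.pdf(d, mean, sd)
        lo = -10.0
    f = lambda d: stats.norm.pdf(z, d * np.sqrt(n), 1) * prior(d)
    m1, _ = integrate.quad(f, lo, 10, points=[0.0, z / np.sqrt(n)], limit=200)
    return m1 / stats.norm.pdf(z, 0, 1)                  # likelihood under H0

z_obs = 0.2 * np.sqrt(100)                       # observed d = .2, n = 100
bf_zero = bf10_normal(z_obs, 100, 0.0, 0.2)      # two-tailed, centered on 0
bf_matched = bf10_normal(z_obs, 100, 0.2, 0.2)   # centered on the expected d
bf_half = bf10_normal(z_obs, 100, 0.0, 0.5, half=True)   # half-normal, SD = .5
print(bf_zero, bf_matched, bf_half)
```

Consistent with the figure, centering the prior on the expected effect size strengthens the evidence relative to a zero-centered prior of the same width.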
In conclusion, the simulations show that Bayes-Factors depend on the specification of the prior distribution and sample size. This has two implications. Unreasonable priors will lower the sensitivity/power of Bayes-Factors to support either the null-hypothesis or the alternative hypothesis when these hypotheses are true. Unreasonable priors will also bias the results in favor of one of the two hypotheses. As a result, researchers need to justify the choice of their priors and they need to be careful when they interpret results. It is particularly difficult to interpret Bayes-Factors when the alternative hypothesis is diffuse and the null-hypothesis is supported. In this case, the evidence merely shows that the null-hypothesis fits the data better than the alternative, but the alternative is a composite of many effect sizes and some of these effect sizes may fit the data better than the null-hypothesis.
Comparison of Different Prior Distributions with Bem’s (2011) ESP Experiments
To examine the influence of prior distributions on Bayes-Factors, I computed Bayes-Factors using several prior distributions. I used a d~Cauchy(1) distribution because this distribution was used by Wagenmakers et al. (2011). I used three uniform prior distributions with ranges of effect sizes from 0 to 1, 0 to .5, and .1 to .3. Based on Dienes recommendation, I also used a normal distribution centered on zero with the expected effect size as the standard deviation. I used both two-tailed and one-tailed (half-normal) distributions. Based on a twitter-recommendation by Alexander Etz, I also centered the normal distribution on the effect size, d = .2, with a standard deviation of d = .2.
The d~Cauchy(1) prior used by Wagenmakers et al. (2011) gives the weakest support for an effect. The table also includes the product of Bayes-Factors. The results confirm that the product is not a meaningful statistic that can be used to conduct a meta-analysis with Bayes-Factors. The last column shows Bayes-Factors based on a traditional fixed-effect meta-analysis of effect sizes in all 10 studies. Even the d~Cauchy(1) prior now shows strong support for the presence of an effect even though it often favored the null-hypothesis for individual studies. This finding shows that inferences about small effects in small samples cannot be trusted as evidence that the null-hypothesis is correct.
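The contrast between a product of Bayes-Factors and a pooled analysis can be illustrated with hypothetical data. The effect sizes below are made up for illustration (they are not Bem’s actual results), and the BF function is an approximate sketch of the default Bayesian t-test using a normal likelihood approximation:

```python
import numpy as np
from scipy import stats, integrate

def bf10(z, n, scale=1.0):
    # approximate default Bayesian t-test: Cauchy(0, scale) prior on d under H1
    f = lambda d: (stats.norm.pdf(z, d * np.sqrt(n), 1)
                   * stats.cauchy.pdf(d, 0, scale))
    m1, _ = integrate.quad(f, -10, 10, points=[0.0, z / np.sqrt(n)], limit=200)
    return m1 / stats.norm.pdf(z, 0, 1)

# ten hypothetical studies of n = 100 whose observed d averages .2
d_obs = [0.25, 0.15, 0.30, 0.10, 0.22, 0.18, 0.05, 0.28, 0.20, 0.27]
n = 100
per_study = [bf10(d * np.sqrt(n), n) for d in d_obs]
product = float(np.prod(per_study))

# fixed-effect pooling: average d, total N
d_pool = float(np.mean(d_obs))
bf_pooled = bf10(d_pool * np.sqrt(10 * n), 10 * n)
print(product)    # mostly inconclusive individual studies
print(bf_pooled)  # pooled evidence is overwhelming
```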
Table 1 also shows that all other prior distributions tend to favor the presence of an effect even in individual studies. Thus, these priors show consistent results for individual studies and for a meta-analysis of all studies. The strength of evidence for an effect is predictable from the precision of the alternative hypothesis. The uniform distribution with a wide range of effect sizes from 0 to 1 gives the weakest support, but it still supports the presence of an effect. This further emphasizes how unrealistic a Cauchy distribution with a scaling factor of 1 is for psychological research, where effect sizes greater than 1 are rare. Moreover, effect sizes greater than 1 do not need fancy statistics; a simple visual inspection of a scatter plot is sufficient to reject the null-hypothesis. The strongest support for an effect is obtained for the uniform distribution with a range of effect sizes from .1 to .3. The advantage of this range is that the lower bound is not 0. Thus, effect sizes below the lower bound provide evidence for H0 and effect sizes above the lower bound provide evidence for an effect. The lower bound can be set by a meaningful consideration of which effect sizes would be theoretically or practically so small that they would be rather uninteresting even if they are real. Personally, I find uniform distributions appealing because they best express uncertainty about an effect size. Most theories in psychology do not make predictions about effect sizes. Thus, it seems impossible to say that an effect is expected to be small (d = .2) or moderate (d = .5). It seems easier to say that an effect is expected to be small (d = .1 to .3), moderate (d = .3 to .6), or large (d = .6 to 1). Cohen used fixed values only because power analysis requires a single value.
As Bayesian statistics allows the specification of ranges, it makes sense to specify a range of values without the need to make predictions about which values in this range are more likely. However, results for the normal distribution are similar. Again, the strength of evidence for an effect increases with the precision of the predicted effect. The weakest support for an effect is obtained with a normal distribution centered over 0 and a two-tailed test. This specification is similar to a Cauchy distribution, but it uses the normal distribution. However, by setting the standard deviation to the expected effect size, Bayes-Factors show evidence for an effect. The evidence for an effect becomes stronger by centering the distribution over the expected effect size or by using a half-normal (one-tailed) test that makes a prediction about the direction of the effect.
To summarize, the main point is that Bayes-Factors depend on the choice of the alternative distribution. Bayesian statisticians are of course well aware of this fact. However, in practical applications of Bayesian statistics, the importance of the prior distribution is often ignored, especially when Bayes-Factors favor the null-hypothesis. Although this finding only means that the data support the null-hypothesis more than the alternative hypothesis, the alternative hypothesis is often described in vague terms as a hypothesis that predicted an effect. However, the alternative hypothesis does not just predict that there is an effect; it makes predictions about the strength of the effect, and it is always possible to specify an alternative that is still consistent with the data by choosing a small effect size. Thus, Bayesian statistics can only produce meaningful results if researchers specify a meaningful alternative hypothesis. It is therefore surprising how little attention Bayesian statisticians have devoted to the issue of specifying the prior distribution. The most useful advice comes from Dienes’s recommendation to specify the prior distribution as a normal distribution centered over 0, with the standard deviation set to the expected effect size. If researchers are uncertain about the effect size, they can try different values for small (d = .2), moderate (d = .5), or large (d = .8) effects. Researchers should be aware that the current default setting of .707 in Rouder’s online app implies an expectation of a strong effect and that this setting will make it harder to show evidence for small effects and will inflate the risk of obtaining false support for the null-hypothesis.
Why Psychologists Should not Change the Way They Analyze Their Data
Wagenmakers et al. (2011) did not simply use Bayes-Factors to re-examine Bem’s claims about ESP. Like several other authors, they considered Bem’s (2011) article an example of major flaws in psychological science. Thus, they titled their article with the rather strong admonition that “Psychologists Must Change The Way They Analyze Their Data.” They blame the use of p-values and significance tests as the root cause of all problems in psychological science. “We conclude that Bem’s p values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data” (p. 426). The crusade against p-values starts with the claim that it is easy to obtain data that reject the null-hypothesis even when the null-hypothesis is true. “These experiments highlight the relative ease with which an inventive researcher can produce significant results even when the null hypothesis is true” (p. 427). However, this statement is incorrect. The probability of obtaining a significant result is clearly specified by the type-I error rate. When the null-hypothesis is true, a significant result will emerge only 5% of the time, that is, in 1 out of 20 studies. The probability of making type-I errors repeatedly decreases exponentially. For two studies, the probability of obtaining two type-I errors is only p = .0025, or 1 out of 400 (20 * 20 studies). If some non-significant results are obtained, the binomial distribution gives the probability of obtaining the observed frequency of significant results if the null-hypothesis were true. Bem obtained 9 out of 10 significant results. With alpha = .05, the binomial probability is about 1.9e-11. Thus, there is strong evidence that Bem’s results are not type-I errors. He did not just go into his lab, run 10 studies, and obtain 9 significant results by chance alone.
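The binomial probability of 9 or more significant results in 10 independent tests with alpha = .05 can be checked directly (scipy used here for illustration):

```python
from scipy.stats import binom

# P(X >= 9) for X ~ Binomial(n = 10, p = .05): the chance of at least
# 9 significant results in 10 studies if the null-hypothesis is true in all
p_9_of_10 = binom.sf(8, 10, 0.05)
print(p_9_of_10)   # ≈ 1.87e-11, i.e. roughly 1 in 50 billion
```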
P-values correctly quantify how unlikely this event is in a single study and how this probability decreases as the number of studies increases. The table also shows that all Bayes-Factors confirm this conclusion when the results of all studies are combined in a meta-analysis. It is hard to see how p-values can be misleading when they lead to the same conclusion as Bayes-Factors. The combined evidence presented by Bem cannot be explained by random sampling error. The data are inconsistent with the null-hypothesis. The only misleading statistic is provided by a Bayes-Factor with an unreasonable prior distribution of effect sizes in small samples. All other statistics agree that the data show an effect.
Wagenmakers et al.’s (2011) next argument is that p-values only consider the conditional probability when the null-hypothesis is true, but that it is also important to consider the conditional probability if the alternative hypothesis is true. They fail to mention, however, that this alternative conditional probability is the familiar concept of statistical power. A p-value of less than .05 means that a significant result would be obtained only 5% of the time when the null-hypothesis is true. The probability of a significant result when an effect is present depends on the size of the effect and sampling error and can be computed using standard tools for power analysis. Importantly, Bem (2011) actually carried out an a priori power analysis and planned his studies to have 80% power. In a one-sample t-test, the standard error is 1/sqrt(N). Thus, with 100 participants, the standard error is .1. With an effect size of d = .2, the signal-to-noise ratio is .2/.1 = 2. Using a one-tailed significance test, the criterion value for significance is 1.66. The implied power is 63%. Bem used an effect size of d = .25 to suggest that he had 80% power. Even with a conservative estimate of 50% power, the likelihood ratio of obtaining a significant result is .50/.05 = 10. This likelihood ratio can be interpreted like a Bayes-Factor: in a study with 50% power, a significant result is 10 times more likely when an effect is present than when the null-hypothesis is true. Thus, even in studies with modest power, a significant result favors the alternative hypothesis much more than the null-hypothesis. To argue that p-values provide weak evidence for an effect implies that a study had very low power to show an effect. For example, if a study has only 10% power, the likelihood ratio is only 2 in favor of an effect being present. Importantly, low power cannot explain Bem’s results because low power would imply that most studies produced non-significant results.
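The power calculation in this paragraph can be reproduced with a noncentral-t computation; this is an illustrative check, not Bem’s original analysis:

```python
from scipy import stats

# one-sample, one-tailed t-test: alpha = .05, d = .2, n = 100
n, d, alpha = 100, 0.2, 0.05
df = n - 1
ncp = d * n ** 0.5                      # signal-to-noise ratio: .2 / .1 = 2.0
t_crit = stats.t.ppf(1 - alpha, df)     # one-tailed criterion, ≈ 1.66
power = stats.nct.sf(t_crit, df, ncp)   # P(t > criterion | d = .2)
print(round(power, 2))                  # roughly .63-.64
print(power / alpha)                    # likelihood ratio of a significant result
```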
However, he obtained 9 significant results in 10 studies. This success rate is itself an estimate of power and suggests that Bem had 90% power in his studies. With 90% power, the likelihood ratio is .90/.05 = 18. The Bayesian argument against p-values is only valid for the interpretation of a p-value from a single study in the absence of any information about power. Not surprisingly, Bayesians often focus on Fisher’s use of p-values. However, Neyman and Pearson emphasized the need to also consider type-II error rates, and Cohen emphasized the need to conduct power analyses to ensure that small effects can be detected. In recent years, there has been an encouraging trend toward increasing the power of studies. One important consequence of high-powered studies is that significant results gain evidential value, because a significant result is much more likely to emerge when an effect is present than when it is not. However, it is important to note that the most likely outcome in underpowered studies is a non-significant result. Thus, it is unlikely that a set of studies can produce false evidence for an effect, because a meta-analysis would reveal that most studies failed to show the effect. The main reason for the replication crisis in psychology is the practice of not reporting non-significant results. This is not a problem of p-values, but a problem of selective reporting. However, Bayes-Factors are not immune to reporting biases. As Table 1 shows, it would have been possible to provide strong evidence for ESP using Bayes-Factors as well.
To demonstrate the virtues of Bayesian statistics, Wagenmakers et al. (2011) then presented their Bayesian analyses of Bem’s data. What is important here is how the authors explain the choice of their priors and how they interpret their results in the context of that choice. The authors state that they “computed a default Bayesian t test” (p. 430). The important word is default. This word makes it possible to present a Bayesian analysis without a justification of the prior distribution. The prior distribution is the default distribution, a one-size-fits-all prior that does not need any further elaboration. The authors do note that “more specific assumptions about the effect size of psi would result in a different test” (p. 430). They do not mention that these different tests would also lead to different conclusions, because the conclusion is always relative to the specified alternative hypothesis. Even less convincing is their claim that “we decided to first apply the default test because we did not feel qualified to make these more specific assumptions, especially not in an area as contentious as psi” (p. 430). It is true that the authors are not experts on psi, but such expertise is hardly necessary when Bem (2011) presented a meta-analysis and made an a priori prediction about effect size. Moreover, they could at least have used a half-Cauchy, given that Bem used one-tailed tests.
The results of the default t-test are then used to suggest that “a default Bayesian test confirms the intuition that, for large sample sizes, one-sided p values higher than .01 are not compelling” (p. 430). This statement ignores their own critique of p-values, namely that how compelling a p-value is depends on the power of a study. A p-value of .01 in a study with 10% power is not compelling because it is a very unlikely outcome whether an effect is present or not. However, in a study with 50% power, a p-value of .01 is very compelling because the likelihood ratio is 50. That is, it is 50 times more likely to get a significant result at p = .01 in a study with 50% power when an effect is present than when an effect is not present.
The authors then emphasize that they “did not select priors to obtain a desired result” (p. 430). This statement can be confusing to non-Bayesian readers. What this statement means is that Bayes-Factors do not entail statements about the probability that ESP exists or does not exist. However, Bayes-Factors do require specification of a prior distribution. Thus, the authors did select a prior distribution, namely the default distribution, and Table 1 shows that their choice of the prior distribution influenced the results.
The authors do directly address the choice of the prior distribution and state “we also examined other options, however, and found that our conclusions were robust. For a wide range of different non-default prior distributions on effect sizes, the evidence for precognition is either non-existent or negligible” (p. 430). These results are reported in a supplementary document. In these materials, the authors show how the scaling factor clearly influences results: small scaling factors suggest an effect is present, whereas larger scaling factors favor the null-hypothesis. However, Bayes-Factors in favor of an effect are not very strong. The reason is that the prior distribution is centered over 0 and a two-tailed test is being used. This makes it very difficult to distinguish the null-hypothesis from the alternative hypothesis. As shown in Table 1, priors that contrast the null-hypothesis with an effect provide much stronger evidence for the presence of an effect. In their conclusion, the authors state, “In sum, we conclude that our results are robust to different specifications of the scale parameter for the effect size prior under H1.” This statement is more accurate than the statement in the article, where they claim to have considered a wide range of non-default prior distributions. They did not consider a wide range of different distributions; they considered a wide range of scaling parameters for a single distribution, a Cauchy distribution centered over 0. If they had considered a wide range of prior distributions, as I did in Table 1, they would have found that Bayes-Factors for some prior distributions suggest that an effect is present.
The authors then deal with the concern that Bayes-Factors depend on sample size and that larger samples might lead to different conclusions, especially when smaller samples favor the null-hypothesis. “At this point, one may wonder whether it is feasible to use the Bayesian t test and eventually obtain enough evidence against the null hypothesis to overcome the prior skepticism outlined in the previous section.” The authors claimed that they are biased against the presence of an effect by a factor of 10e-24. Thus, it would require a Bayes-Factor greater than 10e24 to convince them that ESP exists. They then point out that the default Bayesian t-test, a Cauchy(0,1) prior distribution, would produce this Bayes-Factor in a sample of 2,000 participants, and they propose that a sample size of N = 2,000 is excessive. This is not a principled robustness analysis. A much easier way to examine what would happen in a larger sample is to conduct a meta-analysis of the 10 studies, which already included 1,196 participants. As shown in Table 1, the meta-analysis would have revealed that even the default t-test favors the presence of an effect over the null-hypothesis by a factor of 6.55e10. This is still not sufficient to overcome prejudice against an effect of a magnitude of 10e-24, but it would have made readers wonder about the claim that Bayes-Factors are superior to p-values. There is also no need to use Bayesian statistics to be more skeptical. Skeptical researchers can adjust the criterion value of a p-value if they want to lower the risk of a type-I error. Editors could have asked Bem to demonstrate ESP with p < .001 rather than .05 in each study, but they considered 9 out of 10 significant results at p < .05 (one-tailed) sufficient. As Bayesians provide no clear criterion values for when Bayes-Factors are sufficient, Bayesian statistics does not help editors decide how strong the evidence has to be.
Does This Mean ESP Exists?
As I have demonstrated, even Bayes-Factors using the most unfavorable prior distribution favors the presence of an effect in a meta-analysis of Bem’s 10 studies. Thus, Bayes-Factors and p-values strongly suggest that Bem’s data are not the result of random sampling error. It is simply too improbable that 9 out of 10 studies produce significant results when the null-hypothesis is true. However, this does not mean that Bem’s data provide evidence for a real effect because there are two explanations for systematic deviations from a random pattern (Schimmack, 2012). One explanation is that a true effect is present and that a study had good statistical power to produce a signal-to-noise ratio that produces a significant outcome. The other explanation is that no true effect is present, but that the reported results were obtained with the help of questionable research practices that inflate the type-I error rate. In a multiple study article, publication bias cannot explain the result because all studies were carried out by the same researcher. Publication bias can only occur when a researcher conducts a single study and reports a significant result that was obtained by chance alone. However, if a researcher conducts multiple studies, type-I errors will not occur again and again and questionable research practices (or fraud) are the only explanation for significant results when the null-hypothesis is actually true.
There have been numerous analyses of Bem’s (2011) data that show signs of questionable research practices (Francis, 2012; Schimmack, 2012; Schimmack, 2015). Moreover, other researchers have failed to replicate Bem’s results. Thus, there is no reason to believe in ESP based on Bem’s data, even though Bayes-Factors and p-values strongly reject the hypothesis that sample means are just random deviations from 0. However, the problem is not that the data were analyzed with the wrong statistical method; the problem is that the data are not credible. It would be problematic to replace the standard t-test with the default Bayesian t-test merely because the default Bayesian t-test happens to give the right answer with questionable data. The reason is that it would give the wrong answer with credible data: it would suggest that no effect is present when a researcher conducts 10 studies with 50% power and honestly reports 5 non-significant results. Rather than correctly inferring from this pattern of results that an effect is present, the default Bayesian t-test, when applied to each study individually, would suggest that the evidence is inconclusive.
Conclusion
There are many ways to analyze data, and there are also many ways to conduct a Bayesian analysis. The stronger the empirical evidence is, the less important the statistical approach will be. When different statistical approaches produce different results, it is important to carefully examine the assumptions that lead to the different conclusions based on the same data. There is no universally superior statistical method. Never trust a statistician who tells you that you are using the wrong statistical method. Always ask for an explanation of why one statistical method produces one result and why another statistical method produces a different result. If one method makes more reasonable assumptions than another (data are not normally distributed, unequal variances, unreasonable assumptions about effect size), use the more reasonable statistical method. I have repeatedly asked Dr. Wagenmakers to justify his choice of the Cauchy(0,1) prior, but he has not provided any theoretical or statistical arguments for this extremely wide range of effect sizes.
So, I do not think that psychologists need to change the way they analyze their data. In studies with reasonable power (50% or more), significant results are much more likely to occur when an effect is present than when it is not, and likelihood ratios will show similar results as Bayes-Factors with reasonable priors. Moreover, the probability of a type-I error in a single study is less important for researchers and for science than the long-term rate of type-II errors. Researchers need to conduct many studies to build up a CV, get jobs and grants, and take care of their graduate students. Low-powered studies will produce many non-significant, inconclusive results, so researchers need to conduct powerful studies to be successful. In the past, researchers often used questionable research practices to increase the rate of significant results without declaring the increased risk of a type-I error. However, in part due to Bem’s (2011) infamous article, questionable research practices are becoming less acceptable, and direct replication attempts more quickly reveal questionable evidence. In this new culture of open science, only researchers who carefully plan studies will be able to provide consistent empirical support for a theory, because the theory actually makes correct predictions. Once researchers report all of the relevant data, it is less important how these data are analyzed. In this new world of psychological science, it will be problematic to ignore power and to use the default Bayesian t-test, because it will typically show no effect. Unless researchers are planning to build a career on confirming the absence of effects, they should conduct studies with high power and control type-I error rates by replicating and extending their own work.
microclockparts.wordpress.com
Clock movements and motors are the core of any timepiece. They may be mechanical or electronic, analog or digital, but they all determine the angle subtended by each of the hands at any given moment. Having existed for centuries in one form or another, these interesting devices possess a rich history.
Clock motors and movements are in fact interchangeable terms for the same thing, though "movement" is the trade term while laymen tend to favor "motor." Originally they were strictly mechanical, using the force of a hanging weight or coiled spring to rotate a flywheel. Pendulums and escapements converted the rotation into an oscillating motion with a precisely derived frequency.
Modern clock movements are electronic rather than mechanical. Timing pulses come from a quartz crystal that vibrates at a precise frequency determined by the crystal's geometry. Digital registers subdivide the frequency into standard timekeeping values.
The basic timing pulse is translated into motion of the hands. Mechanically this is achieved through a network of gears such that the shaft holding the second hand rotates 6 degrees of arc every second, the minute-hand shaft turns 60 times slower, and the hour-hand shaft 12 times slower still. Electronically, this motion can be achieved by converting numerical values in digital accumulators directly into shaft position.
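The gearing described above amounts to modular arithmetic on a seconds counter; here is a quick sketch (not from the article) of how a pulse count maps to hand angles:

```python
def hand_angles(total_seconds):
    """Angles of the hour, minute, and second hands in degrees,
    measured clockwise from 12 o'clock.

    Second hand: 6 degrees per second; the minute hand is geared 60x
    slower, and the hour hand 12x slower still (one turn per 12 hours)."""
    sec = (total_seconds % 60) * 6.0
    minute = (total_seconds % 3600) * 360.0 / 3600
    hour = (total_seconds % 43200) * 360.0 / 43200
    return hour, minute, sec

# 3:30:00 -> hour hand halfway between 3 and 4, minute hand at the 6, second at 12
print(hand_angles(3 * 3600 + 30 * 60))  # (105.0, 180.0, 0.0)
```

The 12-hour rollover the next paragraph mentions is simply the `% 43200` on the hour-hand accumulator.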
These are the basics of timekeeping. In addition, everything eventually cycles back to the starting state, a rolling over to zero, as it were. Most commonly this global reset takes place every 12 hours, though a 24-hour cycle is also frequently used.
However, these cycles are a matter of convention and can be extended as far as desired. It's just a matter of configuring the clock movement accordingly. Motors are readily available that run weekly and monthly cycles, with a corresponding positioning of an additional shaft to point a fourth hand at the day of the week or the date.
Some clock motors use a fourth-hand shaft for displaying tide level (a more complex calculation, but still well within their capability). Still others are designed to report weather information exclusively (i.e., without incorporating it into a timepiece).
But temperature, humidity, and barometric pressure are values obtained from sensors rather than derived from cyclical behavior. This means that weather movements require additional inputs and must establish limits on the values. Dials have to be calibrated, and hands rotate only a portion of a full circle.
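Driving a weather hand therefore means clamping the sensor reading to its calibrated range and mapping it onto a partial sweep of the dial. A minimal sketch — the sweep, start angle, and thermometer range below are invented for illustration:

```python
def dial_angle(reading, lo, hi, sweep_deg=270.0, start_deg=-135.0):
    """Map a sensor reading onto a limited dial sweep.

    The reading is clamped to [lo, hi] and mapped linearly onto a
    sweep_deg arc that begins at start_deg (degrees clockwise from 12)."""
    clamped = min(max(reading, lo), hi)
    frac = (clamped - lo) / (hi - lo)
    return start_deg + frac * sweep_deg

# A thermometer dial from -20 to 120 degrees using three quarters of a circle:
print(dial_angle(50, -20, 120))   # mid-scale reading points straight up: 0.0
```

Out-of-range readings simply pin the hand at the end stop, which is exactly the "limits to the values" behavior described above.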
It should now be clear that clock movements and motors provide a broad range of features. A few of these appear in off-the-shelf timepieces, but many do not. However, it is not difficult to find the movement you want from an online supplier and build the desired clocks yourself.
Most timepieces are 12 to 14 inches across at most, and the majority of motors are designed to accommodate hands up to the corresponding maximum length (i.e., minute hands no longer than 6 or 7 inches). However, some clocks are larger and occupy an entire wall! In such a case, one must use a high-torque movement so that the heavier hands can be rotated with sufficient power.
Speaking of power, almost all modern movements run on batteries ranging in size from AA to C (and sometimes special alternatives). Because the batteries need replacing only annually or less often, people generally find this acceptable. However, you will also find motors that plug directly into a wall socket.
Novelties are often incorporated into or attached to the mechanism. Examples are pendulums (which oscillate for show only) and chimes (with various sound patterns available). This concludes our article on clock movements and motors.
https://aptitude.gateoverflow.in/4830/made-easy-cbt-1-2019
A train, after travelling 50 km, meets with an accident and then proceeds at 3/4 of its former speed, arriving at the destination 35 minutes late. Had the accident occurred 24 km farther on, it would have reached the destination only 25 minutes late. What is the speed of the train in km/h?
Let $s$ and $d$ be the original speed of the train (in km/h) and the total distance (in km). Then:

$\left ( 50/s \right ) + \left ( d-50 \right )/\left ( 3s/4 \right ) = \left ( d/s \right )+35/60$ ---(1)

$\left ( 74/s \right ) + \left ( d-74 \right )/\left ( 3s/4 \right ) = \left ( d/s \right )+25/60$ ---(2)

Subtracting (1) from (2) eliminates $d$: $24/s - 32/s = -10/60$, i.e. $-8/s = -1/6$, so $s = 48$ km/h.
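The elimination can also be checked mechanically. Multiplying either equation through by $s$ and simplifying gives $(d - d_1)/3 = s \cdot \text{late}/60$, so subtracting the two scenarios removes $d$. A sketch with exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction as F

def solve_train(d1=50, d2=74, late1=35, late2=25):
    """Each scenario satisfies d1/s + (d - d1)/(3s/4) = d/s + late/60.
    Multiplying through by s and simplifying: (d - d1)/3 = s * late / 60.
    Subtracting the two scenarios eliminates the total distance d:
        (d2 - d1)/3 = s * (late1 - late2) / 60
    """
    s = F(d2 - d1, 3) * F(60, late1 - late2)   # speed in km/h
    d = d1 + 3 * s * F(late1, 60)              # total distance in km
    return s, d

speed, dist = solve_train()
print(speed, dist)  # 48 134 -> 48 km/h over a total distance of 134 km
```

Back-substituting $s = 48$, $d = 134$ into both original equations confirms the answer.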
http://runxinzhi.com/jbelial-p-2119617.html
Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 65536/32768 K (Java/Others)
Total Submission(s): 3562 Accepted Submission(s): 2428
Problem Description
People in Silverland use square coins. Not only do they have square shapes, but their values are also square numbers. Coins with values of all square numbers up to 289 (= 17^2), i.e., 1-credit coins, 4-credit coins, 9-credit coins, ..., and 289-credit coins, are available in Silverland.
There are four combinations of coins to pay ten credits:
ten 1-credit coins,
one 4-credit coin and six 1-credit coins,
two 4-credit coins and two 1-credit coins, and
one 9-credit coin and one 1-credit coin.
Your mission is to count the number of ways to pay a given amount using coins of Silverland.
Input
The input consists of lines each containing an integer meaning an amount to be paid, followed by a line containing a zero. You may assume that all the amounts are positive and less than 300.
Output
For each of the given amounts, output one line containing a single integer representing the number of combinations of coins. No other characters should appear in the output.
Sample Input
2
10
30
0
Sample Output
1
4
27
Source
Recommend
Ignatius.L
The change here is that the exponents of the generating function step by i*i, hence `k += i * i;`.
```c
#include <stdio.h>
#include <string.h>

int val[330], val_[330];

int main(void)
{
    int n;
    while (scanf("%d", &n) == 1 && n != 0) {
        /* Using only 1-credit coins, there is exactly one way to pay any amount. */
        for (int i = 0; i <= n; ++i) {
            val[i] = 1;
            val_[i] = 0;
        }
        /* Multiply in the generating function of each square coin value i*i. */
        for (int i = 2; i * i <= n; ++i) {
            for (int j = 0; j <= n; ++j)
                for (int k = 0; k + j <= n; k += i * i)
                    val_[k + j] += val[j];
            for (int j = 0; j <= n; ++j) {
                val[j] = val_[j];
                val_[j] = 0;
            }
        }
        printf("%d\n", val[n]);
    }
    return 0;
}
```
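The same count also falls out of the standard one-dimensional coin-change recurrence, which avoids the second array; a Python sketch:

```python
def square_coin_ways(n, max_root=17):
    """Ways to pay n credits with unlimited square coins 1, 4, 9, ..., max_root**2."""
    ways = [1] + [0] * n              # ways[0] = 1: the empty payment
    for r in range(1, max_root + 1):
        coin = r * r
        for amount in range(coin, n + 1):
            ways[amount] += ways[amount - coin]
    return ways[n]

print(square_coin_ways(2), square_coin_ways(10), square_coin_ways(30))  # 1 4 27
```

Processing one coin value at a time before moving to the next is what makes this count combinations rather than ordered sequences, matching the sample output above.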
Original post: https://www.cnblogs.com/jbelial/p/2119617.html
https://betterlesson.com/lesson/resource/3182927/all-about-ratios-ppt
All About Ratios.ppt
All About Ratios.ppt
# Using Ratios to Solve Problems
Unit 2: Ratios & Proportions
Lesson 6 of 20
## Big Idea: So, how are these two groups related? Understanding the multiplicative relationship between groups.
Print Lesson
2 teachers like this lesson
Standards:
Subject(s):
Math, proportions, problem solving, ratio of whole numbers, comparison statements, Ratios and Proportions
70 minutes
### Michelle Braggs
##### Similar Lessons
###### Part to Part Ratios Using Tape Diagrams and Tables
6th Grade Math » Rates and Ratios
Big Idea: There are multiple ways to represent proportional relationships and reason about solutions to problems.
New Haven, CT
Environment: Urban
###### Review 3: Who's faster? Comparing Ratios & Rates
6th Grade Math » Review Unit
Big Idea: Who's faster? What is the better deal? How do you know? Students apply their knowledge of ratios and rates in order to determine how they relate to each other.
Somerville, MA
Environment: Urban
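The "who's faster, what's the better deal" comparison in the lesson above boils down to computing and comparing unit rates. A minimal sketch (the example numbers are invented):

```python
from fractions import Fraction

def unit_rate(quantity, per):
    """Quantity per single unit, kept exact: e.g. miles per 1 hour."""
    return Fraction(quantity, per)

# Who's faster: 150 miles in 3 hours, or 210 miles in 4 hours?
print(unit_rate(150, 3))   # 50 (miles per hour)
print(unit_rate(210, 4))   # 105/2, i.e. 52.5 mph -> the second traveler is faster
```

Because the rates are exact fractions, two deals or speeds can be compared directly without rounding error.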
###### Finding Equivalent Ratios
6th Grade Math » Equivalent Ratios
Big Idea: Students use tape diagrams to find equivalent ratios.
Brooklyn, NY
Environment: Urban