content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Quick question regardin bit shifting
11-03-2005 #1
Registered User
Join Date
Jun 2005
Quick question regardin bit shifting
im trying to shift two bytes of "binary" to OR them together and allow a carry from byte1 to be placed into byte2.
at the moment, im trying to shift byte1 by 7 places to the RIGHT and byte2 by 1 place to the LEFT.
i need to use the logical shift and not the arithmetic one so could someone look at the couple of lines of code below to tell me why, when i or together two bytes i get the wrong answer.
firstbyte[0] = (squashdum[1] >>= 7);
secondbyte[0] = squashdum[2] <<= 1;
result[1] = (firstbyte[0] | secondbyte[0]);
imagine first byte is 01100101
and second byte is 10000111
the result should be "00001110". = e0
but actually comes through as "00001111"? = f0
So what value does 'squashed' have in it first? At any rate:
01100101 >> 1 = 00110010
00110010 >> 1 = 00011001
00011001 >> 1 = 00001100
00001100 >> 1 = 00000110
00000110 >> 1 = 00000011
00000011 >> 1 = 00000001
Seven shifts.
10000111 << 1 = 00001110
One shift.
00001110 | 00000001 = 00001111
Hope is the first step on the road to disappointment.
Seven shifts.
Looks like 6.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
Hm. Could be.
So yeah, anyway, your:
a) Input is wrong, or...
b) Your output is wrong, as to what you're telling is you're getting.
Because if you did shift that seven places, and not 6, your result would be zero for the first portion.
Last edited by quzah; 11-03-2005 at 08:41 AM.
Hope is the first step on the road to disappointment.
im inputting a string of hexcharacters "1234567890" into a function which "squashes" them so that
squashdum[0] = "00010010" = "12"
squashdum[1] = "11000010" = "34"
and pairs of numbers/hex thereafter.
the MSB in squashdum[0] must be Or'd with squashdum[1]. at the moment, im doing only ONE shift.
im starting to think that its maybe to do with the numbering of the indexes of the array and the arrangement of the binary stack within it.
i.e. im thinking that the string in contains the values as...
squashdum[2] squashdum[1] squashdum[0]
by my logic would equate to a "binary" of within the array
01100101 ||| 01000011 ||| 00100001
the aim is to insert a zero or maybe 5 zero's at the start of the string "squashdum[0]" and watch the bits move along. if they fall off the edge then they just move to the next byte.
is this correct or have i confused myself?
Last edited by shoobsie; 11-03-2005 at 09:19 AM.
I'm having trouble with:
squashdum[1] = "11000010" = "34"
Wouldn't that binary number represent the two 4-bit integers 12 (0xC) and 2?
1 and 2 in squashdum[0] looks right, but that one looks weird. 3 and 4 should be 00110100.
If you understand what you're doing, you're not learning anything.
i just spoke to a friend and he told me the order of bytes and power of bits is:
within which is
[0]MSB.........LSB[0] [1]MSB.........LSB[1]
hope this is right now!
You can express binary numbers either way, but you have to express them consistently or you're going to get confused. squashdum[0] shows MSB -> LSB, but squashdum[1] shows LSB -> MSB. Typically
I've seen MSB -> LSB so you have:
1 - 0001
2 - 0010
3 - 0011
If you understand what you're doing, you're not learning anything.
You can express binary numbers either way, but you have to express them consistently or you're going to get confused. squashdum[0] shows MSB -> LSB, but squashdum[1] shows LSB -> MSB. Typically
I've seen MSB -> LSB so you have:
1 - 0001
2 - 0010
3 - 0011
its just that in my earlier post, i had them the wrong way round.
i was actually asking if i was correct!....which i wasnt
thanks alot though.
i've found that the problem is that when i shift the following string left by 1 position or multiply it by two:
i get the answer F0 when it should be E0. i think the program is adding and an extra 1 instead of a 0 at the least significant bit, after the shift.
should be
after 1 shift left.
but i get
how do i set the LSB to zero? was thinking of an AND mask with 0xFE but would that work?
I get the right answer. I'm not sure what you're doing wrong...
itsme@itsme:~/C$ ./shift
10000111 - 87
00001110 - 0E
itsme@itsme:~/C$ cat shift.c
#include <stdio.h>
void show_it(int num)
int i;
for(i = 7;i >= 0;--i)
printf("%c", (num >> i) & 1 ? '1' : '0');
printf(" - %02X\n", num);
int main(void)
unsigned char num = 0x87;
num <<= 1;
return 0;
itsme@itsme:~/C$ ./shift
10000111 - 87
00001110 - 0E
how do i set the LSB to zero? was thinking of an AND mask with 0xFE but would that work?
Yes, but you shouldn't need it. I don't see how that 1 is slipping in.
Last edited by itsme86; 11-04-2005 at 09:48 AM.
If you understand what you're doing, you're not learning anything.
11-03-2005 #2
11-03-2005 #3
11-03-2005 #4
11-03-2005 #5
Registered User
Join Date
Jun 2005
11-03-2005 #6
Gawking at stupidity
Join Date
Jul 2004
Oregon, USA
11-03-2005 #7
Registered User
Join Date
Jun 2005
11-03-2005 #8
Gawking at stupidity
Join Date
Jul 2004
Oregon, USA
11-03-2005 #9
Registered User
Join Date
Jun 2005
11-04-2005 #10
Registered User
Join Date
Jun 2005
11-04-2005 #11
Gawking at stupidity
Join Date
Jul 2004
Oregon, USA
|
{"url":"http://cboard.cprogramming.com/c-programming/71740-quick-question-regardin-bit-shifting.html","timestamp":"2014-04-17T22:16:59Z","content_type":null,"content_length":"83232","record_id":"<urn:uuid:aa0cf917-339a-4ca5-9e2c-d5d39d757f2b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
You are looking at historical revision 21817 of this page. It may differ significantly from its current revision.
Some simple randomness tests for a sequence of numbers.
(require-extension random-test)
The random-test library provides a procedure that applies various statistical tests to a sequence of random numerical values, and a procedure to reports the results of those tests in convenient form.
The library is useful for evaluating pseudorandom number generators for statistical sampling applications, compression algorithms, and other applications where the properties of a random sequence are
of interest. The code in the library is based on the ent program by John Walker.
[procedure] make-random-test:: [CAR CDR NULL?] -> (SEQ -> RANDOM-STATS)
This procedure creates a procedure that reads in a sequence of numerical values, and performs statistical tests to tests the randomness of the elements of the sequence.
By default, the sequence is expected to be a list; however, if a different sequential data structure is used (e.g. a stream), the optional arguments CAR, CDR, NULL? may be used to specify procedures
that perform the corresponding operations on the input sequence.
The returned procedure is of the form SEQ -> RANDOM-STATS, where SEQ is the sequence and the returned value is an alist with the following fields:
the result of the Chi-Square test
the calculated probability of the Chi-Square test
the mean of the values in the input sequence</td></tr>
the minimum of the values in the input sequence</td></tr>
the maximum of the values in the input sequence</td></tr>
Monte Carlo value of pi
the serial correlation coefficient
See the following section for explanation of the different fields.
[procedure] format-random-stats:: OUT * RANDOM-STATS -> UNDEFINED
Given an output port, and the value returned by the random test procedure, this procedure outputs a human readable interpretation of the test results.
Chi-Square Test
In general, the Chi-Square distribution for an experiment with k possible outcomes, performed n times, in which Y1, Y2,... Yk are the number of experiments which resulted in each possible outcome,
and probabilities of each outcome p1, p2,... pk, is given as:
\chi^{2} = \sum_{1 <= i <= k}{\frac{(Y_{i} - m p_{i})^{2}}{np_{i}}}
\Chi^2 will grow larger as the measured results diverge from those expected by pure chance. The probability Q that a Chi-Square value calculated for an experiment with d degrees of freedom (where d=
k-1, one less the number of possible outcomes) is due to chance is:
Q(\Chi^2,d) = [2^{d/2} * \Gamma(d/2)]^{-1} * \int_{\Chi^2}^{\infty}(t)^{d/2-1} * exp(-t/2) * dt
Where Gamma is the generalization of the factorial function to real and complex arguments:
\Gamma(x) = \int_{0}^{\infty} t^{x-1} * exp(-t) * dt
There is no closed form solution for Q, so it must be evaluated numerically. Note that the probability calculated from the \Chi^2 is an approximation which is valid only for large values of n, and is
therefore only meaningful when calculated from a large number of independent experiments.
In this implementation, the Chi-Square distribution is calculated for the list of values given as argument to the random-test procedure and expressed as an absolute number and a percentage which
indicates how frequently a truly random sequence would exceed the value calculated.
The percentage can be interpreted as the degree to which the sequence tested is suspected of being non-random. If the percentage is greater than 99% or less than 1%, the sequence is almost certainly
not random. If the percentage is between 99% and 95% or between 1% and 5%, the sequence is suspect. Percentages between 90% and 95% and 5% and 10% indicate the sequence is almost suspect.
Arithmetic Mean
This is simply the result of summing the all the values in the sequence and dividing by the sequence length. If the data are close to random, the mean should be about (2^b - 1)/2 where b is the
number of bits used to represent a value. If the mean departs from this value, the values are consistently high or low.
Monte Carlo Value for Pi
Each pair of two values in the input sequence is used as X and Y coordinates within a square with side N (the length of the input sequence). If the distance of the randomly-generated point is less
than the radius of a circle inscribed within the square, the pair of values is considered a hit. The percentage of hits can be used to calculate the value of pi:
# points within circle 1/4 * pi * r^2
---------------------- = -------------- = 1/4 * pi
# points within square r^2
# points within circle
pi = 4 * ----------------------
# points within square
For very long sequences (this approximation converges very slowly), the value will approach the correct value of Pi if the sequence is close to random.
Serial Correlation Coefficient
This quantity measures the extent to which each value in the sequence depends upon the previous value. For random sequences, this metric (which can be positive or negative) will be close to zero. A
non-random sequence such as a text file will yield a serial correlation coefficient of about 0.5. Predictable data will exhibit serial correlation coefficients approaching 1.
(use random-test srfi-1)
(randomize (current-milliseconds))
(define random-test (make-random-test))
(define lst (list-tabulate 1000000 (lambda (x) (random 1000000))))
(random-test lst 1000000)
(define stats (random-test lst 1000000))
(format-random-stats stats)
About this egg
Version history
Documentation converted to wiki format
Ported to Chicken 4
Removed testeez dependency
Build script updated for better cross-platform compatibility
eggdoc documentation fix
License upgrade to GPL v3.
Simplified the interface of random-test procedure
Bug fix in the bin update code
Initial release
Copyright 2007-2010 Ivan Raikov.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
A full copy of the GPL license can be found at
|
{"url":"http://wiki.call-cc.org/eggref/4/random-test?rev=21817","timestamp":"2014-04-21T09:36:25Z","content_type":null,"content_length":"10964","record_id":"<urn:uuid:254b7706-0b4c-4210-b149-6cae891bf55f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
moduli problem for flag varieties?
up vote 1 down vote favorite
Suppose $G$ is a reductive group over an algebraiclly closed field $k$ (suppose $k$ of char zero if you want at first). Let $X$ be its flag variety.
Question: What is the moduli problem that $X$ represents?
EDIT (to clarify): What is the functor of points of $X$?
reductive-groups ag.algebraic-geometry rt.representation-theory flag-varieties
1 Can you clarify the meaning of "the" moduli problem here? In any case, $X$ can be identified with the set of all Borel subgroups of $G$. – Jim Humphreys Apr 22 '12 at 17:23
add comment
1 Answer
active oldest votes
Let $X$ be a space and $H$ be a group such that $X\rightarrow X/H$ is a principal bundle. Then $Hom(Y,X/H)$ is in bijection with $H$ torsors over $Y$ equipped with an equivariant map
from their total space to $X$. So maps from $Y$ into the flag variety $G/P$ are in bijection with $P$ torsors on $Y$ equipped with a $P$-equivariant map to $G$. Or equally $P$
"subtorsors" of the trivial torsor $G\times Y$.
For example if we take $G=GL_{n}(\mathbb C)$ the data of a $P$ "subtorsor" of $G$ is equivalent to giving a flag (whose type is determined by $P$) of sub-bundles inside the trivial
n-dimensional bundle on $Y$.
up vote 4 down If for example $P$ consist of all matrices whose first column is zero everywhere except in the upper left corner, we have $G/P=\mathbb P^{n-1}$. Our description says maps into $\
vote accepted mathbb P^{n-1}$ are the same thing as linebundles inside of $Y\times \mathbb C^n$.
Taking the dual of such a linebundle and restricting the coordinate functions of $\mathbb C^n$ to it gives the usual universal property of $\mathbb P^{n-1}$.
All this should work over any field.
I think $Hom(Y,X)$ should be $Hom(Y,X/H)$ in your second sentence. – S. Carnahan♦ Apr 23 '12 at 2:03
Thanks, I fixed it. – Jan Weidner Apr 23 '12 at 19:04
add comment
Not the answer you're looking for? Browse other questions tagged reductive-groups ag.algebraic-geometry rt.representation-theory flag-varieties or ask your own question.
|
{"url":"http://mathoverflow.net/questions/94857/moduli-problem-for-flag-varieties","timestamp":"2014-04-20T18:30:23Z","content_type":null,"content_length":"54394","record_id":"<urn:uuid:2b67b725-ac47-49ac-9eee-1133c8a17a03>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Lance on Tuesday, September 18, 2012 at 6:01pm.
What is the greatest common factors of 9 and 27
• mth - Ms. Sue, Tuesday, September 18, 2012 at 6:01pm
• mth - Lance, Tuesday, September 18, 2012 at 6:07pm
1.What is he greates common factor of 36 and 24
Erin needs to find the greatest commo factor of 24 and 18 to help her find it answer each part below.
A. list the factor pairs of 24
B.Factor pairs for 18
C.Common factors of 24 and 18
The greatest common factor of 24 and 18
Am I missing and factor pairs for B and C?
• mth - Ms. Sue, Tuesday, September 18, 2012 at 6:12pm
All are correct, except you missed 3 as a common factor of 24 and 18.
Related Questions
Greatest Common factors - I dont know these answers. Can you help me? Greatest ...
Algebra - I need a step by step explanation of finding the GCF of (6ysquare -3y...
coomon factors - I don't understand common factors. can someone show me how to ...
MATH 117 - What is the greatest common factor (HINT: It may help for you to show...
Math - If a is the greatest common factor of 72 and 48, and if b is the greatest...
math 156 - how are concepts of the greatest common factors divisibility, and ...
Math. - 1. Tyler has 45 baseball cards and 54 basketball cards. He organized ...
Greatest Common Factors - In the lunch room, 36 fith-graders and 27 fourth-...
math 156 - Post your response to the following: How are the concepts of the ...
math - 1. What is the greatest common factor of: 2x2x5x7x7 2x5x5x7x7 a. 2x5 b. ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1348005674","timestamp":"2014-04-20T02:37:23Z","content_type":null,"content_length":"9043","record_id":"<urn:uuid:cb1a9009-8d69-4288-b053-e02cb2a75627>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Building Permits Survey
1. What is an economic time series?
An economic time series is a sequence of successive measurements of an economic activity (that is, variable) obtained at regular time intervals (such as every month or every quarter). The data must
be comparable over time, so they must be consistent in the concept being measured and the way that concept is measured.
2. What is seasonal adjustment?
Seasonal adjustment is the process of estimating and removing seasonal effects from a time series in order to better reveal certain non-seasonal features. Examples of seasonal effects include a July
drop in automobile production as factories retool for new models and increases in heating oil production during September in anticipation of the winter heating season.(Seasonal effects are defined
more precisely in 5. and 6. below.) Sometimes we also estimate and remove trading day effects and moving holiday effects (see 7. below) during the seasonal adjustment process.
3. Why do you seasonally adjust data?
Seasonal movements are often large enough that they mask other characteristics of the data that are of interest to analysts of current economic trends. For example, if each month has a different
seasonal tendency toward high or low values it can be difficult to detect the general direction of a time series' recent monthly movement (increase, decrease, turning point, no change, consistency
with another economic indicator, etc.). Seasonal adjustment produces data in which the values of neighboring months are usually easier to compare. Many data users prefer seasonally adjusted data
because they want to see those characteristics that seasonal movements tend to mask, especially changes in the direction of the series.
4. In the original (unadjusted) series, this year's April value is larger than the March value. But the seasonally adjusted series shows a decrease from March to April this year. What does this
discrepancy mean?
This difference in direction can happen only when the seasonal factor for April is larger than the seasonal factor for March, indicating that when the underlying level of the series isn't changing,
the April value will typically be larger than the March value. This year, the original series' April increase over the March value must be smaller than usual, either because the underlying level of
the series is decreasing or because some special event or events abnormally increased the March value somewhat, or decreased the April value somewhat. (When trading day or moving holiday effects are
present and are being adjusted out, other explanations are possible.)
5. What kinds of seasonal effects are removed during seasonal adjustment?
Seasonal adjustment procedures for monthly time series estimate effects that occur in the same calendar month with similar magnitude and direction from year to year. In series whose seasonal effects
come primarily from weather (rather than from, say, Christmas sales or economic activity tied to the school year or the travel season), the seasonal factors are estimates of average weather effects
for each month, for example, the average January decrease in new home construction in the Northeastern region of the U.S. due to cold and storms. Seasonal adjustment does not account for abnormal
weather conditions or for year-to-year changes in weather. It is important to note that seasonal factors are estimates based on present and past experience and that future data may show a different
pattern of seasonal factors.
6. What is the seasonal adjustment process?
The mechanics of seasonal adjustment involve breaking down a series into trend-cycle, seasonal, and irregular components.
Trend-Cycle: Level estimate for each month (quarter) derived from the surrounding year-or-two of observations.
Seasonal Effects: Effects that are reasonably stable in terms of annual timing, direction, and magnitude. Possible causes include natural factors (the weather), administrative measures (starting and
ending dates of the school year), and social/cultural/religious traditions (fixed holidays such as Christmas). Effects associated with the dates of moving holidays like Easter are not seasonal in
this sense, because they occur in different calendar months depending on the date of the holiday.
Irregular Component: Anything not included in the trend-cycle or the seasonal effects (or in estimated trading day or holiday effects). Its values are unpredictable as regards timing, impact, and
duration. It can arise from sampling error, non-sampling error, unseasonable weather, natural disasters, strikes, etc.
7. What are trading day effects and trading day adjustments?
Monthly (or quarterly) time series that are totals of daily activities can be influenced by each calendar month's weekday composition. This influence is revealed when monthly values consistently
depend on which days of the week occur five times in the month. For example, building permit offices are usually closed on Saturday and Sunday. Thus, the number of building permits issued in a given
month is likely to be higher if the month contains a surplus of weekdays and lower if the month contains a surplus of weekend days. Recurring effects associated with individual days of the week are
called trading-day effects.
Trading-day effects can make it difficult to compare series values or to compare movements in one series with movements in another. For this reason, when estimates of trading-day effects are
statistically significant, we adjust them out of the series. The removal of such estimates is called trading day adjustment.
8. How is the seasonal adjustment derived?
We use a computer program called X-13ARIMA-SEATS to derive our seasonal adjustment and produce seasonal factors.
It is difficult to estimate seasonal effects when the underlying level of the series changes over time. For this reason, the program starts by detrending the series with a crude estimate of the
trend-cycle. It then derives crude seasonal factors from the detrended series. It uses these to obtain a better trend-cycle and detrended series from which a more refined seasonal component is
obtained. This iterative procedure, involving successive improvements, is used because seasonal effects make it difficult to determine the underlying level of the series required for the first step.
Crude and more refined irregular components are used to identify and compensate for data that are so extreme that they can distort the estimates of trend-cycle and seasonal factors.
The seasonal factors are divided into the original series to get the seasonally adjusted series. For example, suppose for a particular January, a series has a value of 100,000 and a seasonal factor
of 0.80. The seasonally adjusted value for this January is 100,000/0.80=125,000.
If trading day or moving holiday effects are detected, their estimated factors are divided out of the series before seasonal factor estimation begins. The resulting seasonally adjusted series is
therefore the result of dividing by the product of the trading day, holiday, and seasonal factors. The product factors are usually called the combined factors, although some tables refer to them as
the seasonal factors for simplicity.
9. What is X-13ARIMA-SEATS?
X-13ARIMA-SEATS is a seasonal adjustment program developed at the U.S. Census Bureau. The program is based on the Bureau's earlier X-12-ARIMA program.
The X-13ARIMA-SEATS software improves upon the X-12-ARIMA seasonal adjustment software by providing enhanced diagnostics as well as incorporating an enhanced version of the Bank of Spain's SEATS
(Signal Extraction in ARIMA Time Series) software, which uses an ARIMA model-based procedure instead of the X-11 filter-based approach to estimate seasonal factors. The SEATS routines are included
due to collaboration with the developers of the software (Agustin Maravall, Chief Economist of the Bank of Spain, and Gianluca Caporello).
Improvements in X-13ARIMA-SEATS as compared to X-12ARIMA include:
13. What is an annual rate? Why are seasonally adjusted data often shown as annual rates?
Very generally, what we call the seasonally adjusted annual rate for an individual month (quarter) is an estimate of what the annual total would be if non-seasonal conditions were the same all year.
This "rate" is not a rate in a technical sense but is a level estimate.
The seasonally adjusted annual rate is the seasonally adjusted monthly value multiplied by 12 (4 for quarterly series). For example,
Seasonally Adjusted Annual Rate=(Unadjusted Monthly Survey Estimate)/(Seasonal Factor)*12
The benefit of the annual rate is that we can compare one month's data or one quarter's data to an annual total, and we can compare a month to a quarter.
The Bureau of Economic Analysis (BEA) publishes quarterly estimates of the United States gross domestic product (GDP) at an annual rate, and many of the Census Bureau data series are inputs to GDP.
Annual rates for input series help users see the data at the same level as GDP estimates.
14. Why can't I get the annual total by summing the seasonally adjusted monthly values (or by summing the annual rates for each month (quarter) of the year and dividing by 12 (4))?
When seasonal adjustment is done by dividing the time series by seasonal factors (or combined seasonal-trading day-holiday factors) it is arithmetically impossible for the adjusted series to have the
same annual totals as the unadjusted series (except in the uninteresting case in which the time series values repeat perfectly from year to year). "Benchmarking" procedures can be used to modify the
adjusted series so as to force the adjusted series to have the same totals as the unadjusted series, but these procedures do not account for evolving seasonal effects or for trading day differences
due to the differing weekday compositions of different years.
15. For the Construction series, how do I get seasonally adjusted quarterly values when you publish monthly seasonal adjustments (or rates)?
For monthly Construction series (Permits, Starts, Completions, Houses Sold, Construction Spending, and Manufactured Homes Shipments and Placements), which are flow series, averaging values for each
month in a quarter will produce a corresponding seasonally adjusted quarterly rate.
For the monthly Construction series which are stock (inventory) series (Houses For Sale, Under Construction, and Dealers' Inventory of Manufactured Homes), the monthly seasonally adjusted value for
the last month of the quarter is the seasonally adjusted quarterly value.
Note that these methods will not produce the same result as directly seasonally adjusting the quarterly series.
16. What is an indirect adjustment? Why is it used?
If an aggregate time series is a sum (or other composite) of component series that are seasonally adjusted, then the sum of the adjusted component series provides a seasonal adjustment of the
aggregate series that is called the indirect adjustment. This adjustment is usually different from the direct adjustment that is obtained by applying the seasonal adjustment program to the aggregate
(or composite) series. When the component series have quite distinct seasonal patterns and have adjustments of good quality, indirect seasonal adjustment is usually of better quality. Indirect
seasonal adjustments are preferred by many data users because they are consistent with the adjustments of the component series. For example,
United States = Northeast Region + Midwest Region + South Region + West Region
Because seasonal patterns are different in the different regions of the country, we can estimate the seasonality better by adjusting at the regional level and summing the results to obtain the
seasonal adjustment for the U.S. total.
17. For the Construction series, are the February estimates adjusted for leap-year effects prior to seasonal adjustment?
For monthly Construction series (Permits, Starts, Completions, Houses Sold, Construction Spending, and Manufactured Homes Shipments and Placements), which are flow series, we handle multiplicative
leap-year effects as part of the trading-day adjustment in X-13ARIMA-SEATS, which is performed prior to seasonal adjustment. A given February estimate is re-scaled, prior to applying a log
transformation, by multiplying the estimate by the ratio of the average length of February (28.25 days) to the length of the given February (either 28 or 29 days).
For the monthly Construction series which are stock (inventory) series (Houses For Sale, Under Construction, and Dealers' Inventory of Manufactured Homes), no adjustment for leap-year effects is made
in X-13ARIMA-SEATS prior to seasonal adjustment. The U.S. Months' Supply series is derived from the ratio of the seasonally adjusted U.S. Total Houses For Sale estimate (directly adjusted) and the
seasonally adjusted U.S. Total Houses Sold estimate (indirectly adjusted by summing up the four regions). Therefore, only part of the U.S. Months' Supply series (i.e. the Houses Sold estimate) is
adjusted for leap-year effects prior to seasonal adjustment.
|
{"url":"http://www.census.gov/construction/bps/faqs/faqs_seas.html","timestamp":"2014-04-18T10:57:51Z","content_type":null,"content_length":"148467","record_id":"<urn:uuid:d9eb8289-4391-465a-920b-822358b3e4f7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
|
User Andrew D. King
bio website andrewdouglasking.com
location Vancouver
age 33
visits member for 4 years, 1 month
seen Apr 10 at 21:04
stats profile views 1,140
I am interested in graph theory and combinatorial optimization.
24 answered A k-1 edge connected k regular graph is matching covered
Oct Has anyone seen this graph?
18 comment In particular this graph is the smallest simple cubic graph with no perfect matching.
Oct What is this subclass of $k$-colorable graphs called?
17 comment I would call such a graph edge-maximal $k$-colourable. This property can be useful in induction on the number of non-edges in a graph.
31 awarded Necromancer
Jul Can you prove that hypergraphs with n-1 edges are partially 2 colorable?
27 comment Good! I was a little worried that Hall's theorem was the theorem you wanted to avoid.
27 answered Can you prove that hypergraphs with n-1 edges are partially 2 colorable?
7 awarded Critic
Jun Fast removal of weighted edges in a graph in a way such that all shortest paths are preserved
15 comment So is this equivalent to computing all-pairs shortest path, then deleting all edges not contained in some shortest path? And I don't really understand the question... use the
Floyd-Warshall algorithm if you want it to be simple, and use this Sudakov result you mention if you want it to be fast. I highly doubt that you would easily be able to construct the
edge set faster than that, but I may be wrong.
13 answered Combinatorial Proof of Weak Perfect Graph Theorem.
Apr Probability of having a bounded ratio of two types of balls in each of 'S' bins after random partitioning of a fixed number of balls
22 comment I agree with Peter. For certain values of $S$, $L$, and $A$, the Chernoff bound seems like it would be more than sufficient.
13 awarded Yearling
Feb Maximal clique intersection graphs
12 comment Thanks for this link... it may be useful for me, as I am also interested in maximal clique graphs (for different reasons).
9 answered Reasonable “Random” matrices to test numerical algorithms
Feb Is there evidence whether undergraduate math courses improve problem-solving?
8 comment Kevin, the section on the LSAT that math types tend to do particularly well on is "analytical reasoning". I can tell you from experience that if you have a fair amount of experience
working through mathematical proofs, you should find this section incredibly easy.
Feb What is the shortest Ph.D. thesis?
8 comment I can think of at least one preeminent mathematician who does not have a Ph.D. at all. I don't think that really falls into the same set of trivia, though.
Jan 12 and 13-bit balanced Gray codes
21 comment You mean binary Gray code? There is a construction in the Wikipedia article for Gray codes.
Jan definition of “exact neighborhood” [optimization]
19 comment I'm not familiar with the terminology but it's not really my area of expertise. It certainly seems like a strange choice of words, given how analogous it is to convexity.
24 answered When your paper makes a borderline case for a top journal
Dec 4-coloring maps of pentagons
17 comment I also think this case seems likely to be known, but sometimes these things can be surprising -- it is not known, for example, whether or not there is a 5-chromatic triangle-free graph
of maximum degree 5.
Partitioning a matrix with bounded row sums
Dec comment Yes, that's what I mean. Here is the link for the original Alon-Tarsi paper springerlink.com/content/u627qn50r7013363 , but you might get more out of it by looking at the papers which
15 cite it, for example onlinelibrary.wiley.com/doi/10.1002/jgt.20500/abstract . The proof of their result, which relates to list colourings, uses combinatorial nullstellensatz, which is
useful but intimidating. Better to look at what you can do using their theorem as a black box, first.
|
{"url":"http://mathoverflow.net/users/4580/andrew-d-king?tab=activity&sort=all&page=5","timestamp":"2014-04-18T08:19:05Z","content_type":null,"content_length":"47677","record_id":"<urn:uuid:9b480e83-61d4-4cdd-8172-05abe589ecba>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Copyright © University of Cambridge. All rights reserved.
'Cubist Cuts' printed from http://nrich.maths.org/
A $3 \times 3 \times 3$ cube may be reduced to unit cubes ($1 \times1 \times1$ cubes) in six saw cuts if you go straight at it.
If after every cut you can rearrange the pieces before cutting straight through, can you do it in fewer? Answer the same question with a $4 \times 4 \times 4$ cube:
What about a cube of any size (an $n \times n \times n$ cube)?
This problem is taken from "Sums for Smart Kids" by Laurie Buxton, published by BEAM Education. To obtain a copy call the BEAM orderline on 020 7684 3330 quoting product code SMAR. (Price: £13.50
plus handling and delivery.)
|
{"url":"http://nrich.maths.org/1158/index?nomenu=1","timestamp":"2014-04-17T15:49:21Z","content_type":null,"content_length":"3725","record_id":"<urn:uuid:4b075a28-f904-4cd9-b629-66c593ce48f3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
problems about differentiation
April 5th 2009, 01:39 AM
problems about differentiation
i seriously can't differentiate between pre-calculus and calculus, so if i post in the wrong place, i'm sorry.
1. A right circular cone with its vertex facing down has a height of 120cm and a radius of 60cm at the top. Water is poured into the cone at a rate of $x$$cm^3/s$. Find, in terms of x,
i) the rate at which the water level rises when the height is 60cm,
ii) the rate at which the wet inner surface of the cone is changing at this instant.
(Give your answer correct to 3 sig.fig.)
I can do 1i...the answer is 0.000354x.....1ii is the one that i don't know how to do....
2. A rectangle ABCD enclosed within the curve $y=8x-x^2$ and the x-axis. C and D are points on the curve A and B are points on the x-axis. If AB =2a, find the area of ABCD in terms of a. Find the
value of a for which the area of the rectangle ABCD is a maximum and find this area.
I don't even know how to start.Can someone teach me how to start??
3. If $y= a cos 2x + b sin 2x +2 cos x$, show that $\frac{d^2y}{dx^2} + 4y$ is independent of a and b. If a and b are such that y=3 and $\frac{dy}{dx} =0$ when x=0, find the values of x between
$0^\circ$ and $360^\circ$ for which $\frac{dy}{dx} = 0$.
i think i know how to do but my answer is not the same as the book's answer.
$\frac{dy}{dx} = -2a sin 2x + 2b cos 2x - 2 sin x$
$\frac{d^2y}{dx^2} = - 4a cos 2x - 4b sin 2x -2 cos x$
4y= 4a cos 2x + 4b sin 2x + 8 cos x
$\frac{d^2y}{dx^2} + 4y$
= -4a cos 2x - 4b sin 2x - 2 cos x + 4a cos 2x + 4b sin 2x + 8 cos x
=6 cos x
[shown:it's independent of a and b.]
by substituting y=3,a=2.
by substituting $\frac{dy}{dx}=0$ , b=0.
-2(2)sin 2x + 2(0) cos 2x - 2 sin x = 0
-4 sin 2x - 2 sin x =0
-2(2 sin 2x + sin x)= 0
2 sin 2x + sin x =0
4 sin x cos x + sin x = 0
sin x ( 4 cos x +1) =0
sin x =0
x= 0, 180, 360
$cos x = (-\frac{1}{4})$
Basic angle of x = 75.5
x=104.5, 255.5
but the book's answer is 120 and 240....those in red are wrong.....where did i went wrong???
April 5th 2009, 03:48 AM
i seriously can't differentiate between pre-calculus and calculus, so if i post in the wrong place, i'm sorry.
1. A right circular cone with its vertex facing down has a height of 120cm and a radius of 60cm at the top. Water is poured into the cone at a rate of $x$$cm^3/s$. Find, in terms of x,
i) the rate at which the water level rises when the height is 60cm,
ii) the rate at which the wet inner surface of the cone is changing at this instant.
(Give your answer correct to 3 sig.fig.)
I can do 1i...the answer is 0.000354x.....1ii is the one that i don't know how to do....
lateral surface area of a cone ... $S = \pi r \sqrt{r^2+h^2}$... the question is asking for $\frac{dS}{dt}$
2. A rectangle ABCD enclosed within the curve $y=8x-x^2$ and the x-axis. C and D are points on the curve A and B are points on the x-axis. If AB =2a, find the area of ABCD in terms of a. Find the
value of a for which the area of the rectangle ABCD is a maximum and find this area.
I don't even know how to start.Can someone teach me how to start??
sketch a picture of the parabola ... let point A be left of point B and note that the value $a > 0$.
point A is at position $(4-a , 0)$
point B is at position $(4+a , 0)$
rectangle height is h = $8(4-a) - (4-a)^2$
$A = 2a[8(4-a) - (4-a)^2]$
3. If $y= a cos 2x + b sin 2x +2 cos x$, show that $\frac{d^2y}{dx^2} + 4y$ is independent of a and b. If a and b are such that y=3 and $\frac{dy}{dx} =0$ when x=0, find the values of x between
$0^\circ$ and $360^\circ$ for which $\frac{dy}{dx} = 0$.
i think i know how to do but my answer is not the same as the book's answer.
$\frac{dy}{dx} = -2a sin 2x + 2b cos 2x - 2 sin x$
$\frac{d^2y}{dx^2} = - 4a cos 2x - 4b sin 2x -2 cos x$
4y= 4a cos 2x + 4b sin 2x + 8 cos x
$\frac{d^2y}{dx^2} + 4y$
= -4a cos 2x - 4b sin 2x - 2 cos x + 4a cos 2x + 4b sin 2x + 8 cos x
=6 cos x
[shown:it's independent of a and b.]
by substituting y=3,a=2. ... I think a = 1, check that.
by substituting $\frac{dy}{dx}=0$ , b=0.
-2(2)sin 2x + 2(0) cos 2x - 2 sin x = 0
-4 sin 2x - 2 sin x =0
-2(2 sin 2x + sin x)= 0
2 sin 2x + sin x =0
4 sin x cos x + sin x = 0
sin x ( 4 cos x +1) =0
sin x =0
x= 0, 180, 360
$cos x = (-\frac{1}{4})$
Basic angle of x = 75.5
x=104.5, 255.5
but the book's answer is 120 and 240....those in red are wrong.....where did i went wrong???
|
{"url":"http://mathhelpforum.com/calculus/82309-problems-about-differentiation-print.html","timestamp":"2014-04-20T19:04:59Z","content_type":null,"content_length":"15610","record_id":"<urn:uuid:e046333a-c3ab-4af3-bfff-7d25c3f1a914>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
|
first derivatives test and trig functions
June 16th 2013, 04:54 PM #1
Mar 2013
first derivatives test and trig functions
I am supposed to use the first derivative test to find any local min/max of $f(x)=sin^2(x)+cos)(x)$ on the interval $\frac{\pi}{6},\frac{3\pi}{2}$
Take the derivative and get $f'(x)=2cos(x)-sin(x)$
Set the derivative = 0 and I get $2cos(x)=sin(x)$
I graphed both 2cos(x) and sin(x) and there are 2 intersections. How do I solve for x?
Re: first derivatives test and trig functions
Re: first derivatives test and trig functions
Hey baldysm.
Hint: sin(x)/cos(x) = tan(x).
Edit: Also check HallsOfIvy's post above.
June 16th 2013, 05:46 PM #2
June 16th 2013, 05:46 PM #3
MHF Contributor
Sep 2012
|
{"url":"http://mathhelpforum.com/calculus/219896-first-derivatives-test-trig-functions.html","timestamp":"2014-04-17T22:19:32Z","content_type":null,"content_length":"37686","record_id":"<urn:uuid:60df83d2-4dbb-4992-9739-37baf404526b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Real-World Probability Books: Popular Science
Silver, Nate. The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't. Penguin Press, 2012.
See my amazon.com review.
Senn, Stephen. Dicing With Death. Chance, risk and health. Cambridge University Press, 2003.
Excellent! The focus is on statistics in medicine, but the book zigzags through recent issues (ethics and politics of clinical trials, lawyer's abuse of statistical evidence, vaccine scares),
sometimes sophisticated analysis of particular data, combined with explanation and history of basic concepts, with half-page biographies of historical and modern statisticians going far beyond the
usual suspects. Has the lively style of The Economist, addressing a mentally alert adult reader rather than a casual reader or bored student.
Bernstein, Peter L. Against the Gods: The Remarkable Story of Risk. Wiley, 1996.
Surprising but well-deserved best-seller. 19 shortish chapters, different themes in historical order. Lively writing, almost no mathematics but gives the sense that real data is behind the prose. The
later chapters on investing and psychology are the most interesting to me: Chapter 16 (how information is presented affects people's decisions) and Chapter 17 (different perceptions of gains and
Rosenthal, Jeffrey S. Struck by Lightning: the curious world of probabilities. Joseph Henry Press, 2006.
The only author amongst whole list who does research in mathematical probability. Half the book is a "Textbook Lite" exposition of the more interesting parts of a college course in probability and
statistics: birthday problem and coincidences, law of large numbers, basic odds and strategy at roulette, poker, craps, utility functions, p-values in randomized controlled experiments, opinion polls
and the normal curve, genetics, Monty Hall. The other half samples "Popular Science" topics (Monte Carlo experiments, epidemics, spam filters, chaos) without the usual historical tales. Provides a
nice overview, in modern reader-friendly style, of how probabilists view the world. Unfortunately (to my taste) the logical points are mostly illustrated by hypothetical or fictional stories: to
argue that probability is relevant to the real world, surely one should appeal to fact not fiction?
Holland, Bart K. What are the Chances? Voodoo deaths, office gossip and other adventures in probability. Johns Hopkins, 2002.
Ignore the misleading subtitle. This is a professor who knows his stuff and can write clearly. Emphasis on medicine-related topics. Tells a lot of interesting stories (forensic psychiatrist's
predictions of recidivism; testing astrologers' predictions of personality; psychology of waiting in Disneyland lines or for airport luggage). But (to my taste) too little hard data to back up the
Everitt, Brian S. Chance Rules: An informal guide to probability, risk and statistics. Springer; 2nd edition, 2008.
See my amazon.com review.
Kaplan, Michael and Kaplan, Ellen. Chances Are: Adventures in Probability. Viking, 2006.
Eleven themed chapters cover the usual historical figures (Pascal, de Moivre, Laplace, Bernoulli, Bayes, Galton, Fisher) seeking to relate their innovations to the existing world-view. Includes some
eclectic history (fire insurance in seventeenth century London is related to Laplace's principles) and a little math (normal curve, Bayes formula). The "fighting" chapter has interesting historical
content beyond the usual game theory setting, though it's not clear this extra material has much to do with probability. Comparatively flamboyant rhetoric is sometimes overwrought ([the weak law of
large numbers] is a devourer of data: it must be fed to produce its certainties. Think how many poor scriveners, inspectors, census-takers, and graduate students have given the marrow of their lives
to preparing consistent series of facts to serve this tyrannical theorem ...) and sometimes overreaching ( ... history's most dangerous men are those who believe they knew how the game ends, whether
in earthly victory or paradise.) But in all, a good eclectic overview in a format between those of Bernstein and Peterson.
Peterson, Ivars. The Jungles of Randomness. Wiley, 1998.
Consists of 2-3 page sections on topics (e.g. Chutes and Ladders as a Markov chain; Ramsey theory; coupled oscillators; error-correcting codes; Brownian motion and Levy flights) in probability and
related areas of mathematics. The individual sections are clearly and interestingly explained by science journalist author who understands the mathematics. But the book has an overall choppy feel,
jumping from topic to topic without sustained logical thread.
Mlodinow, Leonard. The Drunkard's Walk: How Randomness Rules Our Lives. Pantheon, 2008.
Promising prologue "... when chance is involved, people's thought processes are often seriously flawed .... [this book] is about the principles that govern chance, the development of those ideas, and
the way they play out in business, medicine, economics, sports, ..." but a disappointing book. The book consists of a range of topics already well covered in a dozen previous popular science style
books: history of probability (Cardano, Pascal, Bernoulli, Laplace, de Moivre) and of demographic and economic data; statistical logic (Bayes rule and false positives/negatives; Galton and the
regression fallacy, normal curve and measurement error, mistaking random variation as being caused); overstating predictability in business affairs (past success doesn't ensure future success) and
perennials such as Monty Hall, the gambler's fallacy, and hot hands. These topics are presented in a way that's easy to read -- historical stories, anecdotes and experiments, with almost no
mathematics. So it's a perfectly acceptable read if you haven't seen any of this material before before, but it doesn't bring any novel content or viewpoint to the table.
Ekeland, Ivar. The Broken Dice, and other mathematical tales of chance. University of Chicago Press, 1993.
As an aficionado of Norse sagas, I was intrigued to find that a mathematician wrote a book on probability framed by Saint Olaf's saga. Six essays on popular science topics, with clear explanations
and interestingly non-standard historical and literary detours. But the choice of math topics (random number generators vs true randomness vs Kolmogorov complexity; random strategies in game theory;
chaos, attractors, fractals and ergodicity; risk aversion and underestimation of rare serious events) seems in 2006 very unimaginative, and despite its colorful background the book brings no new
insight or individualistic perspective to the science.
Tsonis, Anastasios A. Randomnicity: Rules and randomness in the realm of the infinite. Imperial College Press, 2008.
See my amazon.com review.
Bennett, Deborah J. Randomness. Harvard University Press, 1999.
Short book, mostly covering several of the usual topics but with some less common stories and a little math (e.g. the simplest random number generator).
Aczel, Amir D. Chance. A guide to gambling, love, the stock market, and just about everything else. Thunder's Mouth Press, 2004.
Yet another short book on the usual topics (gambler's ruin, coincidences, birthday problem, secretary problem). The writing style is clear but the content is completely derivative.
|
{"url":"http://www.stat.berkeley.edu/~aldous/157/Books/popular.html","timestamp":"2014-04-18T20:43:35Z","content_type":null,"content_length":"8362","record_id":"<urn:uuid:689cccaf-aeff-42b5-998c-3d5c8745bce3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Metric Units of Mass and Capacity
4.7: Metric Units of Mass and Capacity
Created by: CK-12
The Metric Park
Mrs. Andersen’s class is having a great time at the science museum. Sam and Olivia are very excited when the group comes upon the metric playground. This playground has been built inside the museum
and combines playground toys with metrics.
The first one that they try is the metric seesaw. Sam sits on one side of the seesaw and Olivia sits on the other side. Since they weigh about the same, it is easy to keep the seesaw balanced. Under
Sam, there is a digital scale. Under Olivia there is the same scale with a key pad. Sam’s weight shows up under the scale.
Sam weighs 37 kg.
“Next, we have to convert kilograms to grams and punch it in so both of our scales will have the same reading,” Sam tells Olivia.
Olivia pauses, she can’t remember how to do this.
“Let’s move on to something else, I can’t remember,” She tells Sam.
The two move on to a birdbath. Together, they need to fill one 4.5 liter birdbath with water using a scoop. Once they have it filled, the sign above the birdbath will light up and tell them how many
milliliters are in 4.5 liters.
“I think I can figure this out without filling the birdbath,” Olivia says.
Can you figure it out? How many milliliters can be found in that 4.5 liter birdbath?
This lesson is all about metrics, but by the end, you will be able to master the tasks at the metric park.
What You Will Learn
In this lesson you will learn the following skills:
• Identify equivalence of metric units of mass.
• Identify equivalence of metric units of capacity.
• Choose appropriate metric units of mass or capacity for given measurement situations.
• Solve real-world problems involving metric measures of mass or capacity.
Teaching Time
I. Identify Equivalence of Metric Units of Mass
In the United States, the most common system of measurement is the Customary system of measurement. The Customary system of measurement for mass or weight is measured in pounds and tons. Outside of
the United States and when people work with topics in science, people use a system called the Metric system. The metric system measures mass or weight differently from the customary system.
How do we measure mass in the Metric system?
In the metric system we use different standard units to measure mass or weight.
This text box lists the units of measuring mass from the largest unit, the kilogram, to the smallest unit, the milligram. If you think back to when you learned about measuring length, the prefix
“milli” indicated a very small unit. That is the same here as we measure mass.
How can we find equivalent metric units of mass?
The word equivalent means equal. We can compare different units of measuring mass with kilograms, grams and milligrams. To do this, we need to know how many grams equal one kilogram, how many
milligrams equal one gram, etc. Here is a chart to help us understand equivalent units.
Here you can see that when we convert kilograms to grams you multiply by 1000.
When you convert grams to milligrams, you multiply by 1000.
To convert from a large unit to a small unit, we multiply.
To convert from a small unit to a large unit, we divide.
5 kg = _____ g
When we go from kilograms to grams, we multiply by 1000.
5 kg = 5000 g
These two values are equivalent.
2000 mg = _____ g
When we go from milligrams to grams, we divide.
2000 mg = 2 g
These two values are equivalent.
Now it is your turn to practice. Convert each metric unit of mass to its equivalent.
1. 6 kg = _____ g
2. 3000 g = _____ kg
3. 4 g = _____ mg
Take a few minutes to check your work with a peer.
II. Identify Equivalence of Metric Units of Capacity
When we think about capacity, often referred to as volume, we think about measuring liquids. In the Customary system of measurement, we measure liquids using cups, pints, ounces, gallons etc. In the
Metric System of measurement, we measure capacity using two different measures, liters and milliliters.
Since there are only two common metric units for measuring capacity, this text box shows them and their equivalent measures.
Liters are larger than milliliters. Notice that prefix “milli” again.
When converting from large units to small units, you multiply.
When converting from small units to large units, you divide.
Let’s apply this in an example.
4 liters = _____ milliliters
Liters are larger than milliliters, so we multiply by 1000.
4 liters = 4000 milliliters
Use what you have learned to write each equivalent unit of capacity.
1. 5 liters = _____ milliliters
2. 2000 milliliters = _____ liters
3. 4500 milliliters = _____ liters
Take a minute to check your work with a neighbor. Did you divide or multiply when needed?
III. Choose Appropriate Metric Units of Mass or Capacity for Given Measurement Situations
When you think about the metric units for measuring mass, how do you know when to measure things in grams, milligrams or kilograms? To really understand when to use each unit of measurement we have
to understand a little more about the size of each unit. If you know measurements in the customary or standard system of measurement, such as ounces and pounds, you can compare them to measurements
in the metric system of measurement, such as milligrams, grams, and kilograms. Grams compare with ounces, which measure really small things like a raisin. Kilograms compare with pounds, which we use
pounds to measure lots of things, like a textbook. What about milligrams?
Milligrams are very, very tiny. Think about how small a raisin is and recognize we would use grams to measure that. Scientists are one group of people who would measure the mass of very tiny items.
These things would be measured in milligrams.
If you think about things that would be seen under a microscope, you would measure the mass of those items in milligrams.
A milligram is $\frac{1}{1000}$
Use what you have learned to select the correct metric unit for measuring the mass of each item.
1. The weight of five pennies
2. The weight of a person
3. The weight of a car
Now take a minute to check your answers with your neighbor.
What about capacity? How do we choose the correct unit to measure capacity?
There are two metric units for measuring capacity, milliliters and liters.
This comparison may seem a little more obvious that the units for mass. A milliliter would be used to measure very small amounts of liquid. Milliliters are much smaller even than ounces. A liter
would be used to measure much larger volumes of liquid.
A milliliter is $\frac{1}{1000}$
Would you measure a bottle of soda in liters or milliliters?
You would measure it in liters. A 2 liter bottle of soda is a standard size for soda bottles. Think about milliliters as the amount of liquid in an eyedropper.
Real Life Example Completed
The Metric Park
Remember back to the metric park? Well, now you are ready to help Sam and Olivia with those conversions.
Let’s take another look at the problem.
Mrs. Andersen’s class is having a great time at the science museum. Sam and Olivia are very excited when the group comes upon the metric playground. This playground has been built inside the museum
and combines playground toys with metrics.
The first one that they try is the metric seesaw. Sam sits on one side of the seesaw and Olivia sits on the other side. Since they weigh about the same, it is easy to keep the seesaw balanced. Under
Sam, there is a digital scale. Under Olivia there is the same scale with a key pad. Sam’s weight shows up under the scale.
Sam weighs 37 kg.
“Next, we have to convert kilograms to grams and punch it in so both of our scales will have the same reading,” Sam tells Olivia.
Olivia pauses, she can’t remember how to do this.
“Let’s move on to something else, I can’t remember,” She tells Sam.
The two move on to a birdbath. Together, they need to fill one 4.5 liter birdbath with water using a scoop. Once they have it filled, the sign above the birdbath will light up and tell them how many
milliliters are in 6 liters.
“I think I can figure this out without filling the birdbath,” Olivia says.
Can you figure it out? How many milliliters can be found in that 4.5 liter birdbath?
First, let’s underline all of the important information.
Next, Sam and Olivia need to convert 37 kg into grams. There are 1000 grams in 1 kilogram, so there are 3700 grams in 37 kilograms.
You can see why it makes so much more sense to measure someone’s weight in kilograms versus grams.
The birdbath holds 4.5 liters of water. Now that you know that there are 1000 milliliters in one liter, you can figure out how many milliliters will fill the birdbath by multiplying 4.5 $\times$
Our answer is 4500 milliliters.
Wow! You can see why it makes much more sense to measure the amount of water in the birdbath in liters versus milliliters.
Here are the vocabulary words that are found in this lesson.
Customary System
The system of measurement common in the United States, uses feet, inches, pounds, cups, gallons, etc.
the weight of an object
the amount of liquid an object or item can hold
Technology Integration
Khan Academy Conversion Between Metric Units
James Sousa Metric Unit Conversions
Other Videos:
http://www.linkslearning.org/Kids/1_Math/2_Illustrated_Lessons/6_Weight_and_Capacity/index.html – Great animated video on weight and capacity using metric units and customary units
Time to Practice
Directions: Convert to an equivalent unit for each given unit of mass.
1. 5 kg = ______ g
2. 2000 g = ______ kg
3. 2500 g = ______ kg
4. 10 kg = ______ g
5. 2000 mg = ______ g
6. 30 g = ______ mg
7. 4500 mg = ______ g
8. 6.7 g = ______ mg
9. 9 kg = ______ g
10. 1500 g = ______ kg
Directions: Convert to an equivalent unit for each given unit of capacity.
11. 4500 mL = ______ L
12. 6900 mL = ______ L
13. 4400 mL = ______ L
14. 5200 mL = ______ L
15. 1200 mL = ______ L
16. 3 L = ______ mL
17. 5.5 L = ______ mL
18. 8 L = ______ mL
19. 9.3 L = ______ mL
20. 34.5 L = ______ mL
Directions: Choose the best unit of either mass or capacity to measure each item.
21. A dictionary
22. A flea under a microscope
23. A jug of apple cider
24. An almond
25. Drops of water from an eyedropper
Files can only be attached to the latest version of None
|
{"url":"http://www.ck12.org/book/CK-12-Middle-School-Math-Grade-6/r4/section/4.7/","timestamp":"2014-04-17T05:27:59Z","content_type":null,"content_length":"122448","record_id":"<urn:uuid:43166c29-ab48-4acd-bcd2-98e8f07b0fa1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Course calendar.
SES TOPICS KEY DATES
Part 1: Demand theory
L1 Utility theory, properties of preferences, choice as primitive, revealed preference, and Afriat's theorem Problem set 1 out
L2 Classical demand theory, Kuhn-Tucker necessary conditions, implications of Walras's law, indirect utility functions, theorem of the maximum (Berge's Problem set 2 out
theorem), expenditure minimization problem, Hicksian demands, compensated law of demand, and Slutsky substitution
L3 Price changes and welfare, compensating variation, and welfare from new goods Problem set 1 due
L4 Price indexes, bias in the U.S. consumer price index, integrability, demand aggregation, aggregate demand and welfare, Frisch demands, and demand Problem set 3 out
Part 2: Producer theory
L5 Producer theory, robust comparative statics, increasing differences, producer theory applications, the LeChatelier principle, Topkis' theorem, and Problem set 2 due
Milgrom-Shannon monotonicity theorem
L6 Monopoly pricing, monopoly and product quality, nonlinear pricing, and price discrimination
Part 3: Partial equilibrium competitive markets
L7 Externalities, simple models of externalities, government intervention, Coase theorem, Myerson-Sattherthwaite proposition, missing markets, price vs. Problem set 3 due and problem set 4 out
quantity regulations, Weitzman's analysis, uncertainty, common property externalities, optimization, and equilibrium number of boats
Part 4: General equilibrium
L8 General equilibrium in context, existence, welfare theorems, uniqueness and determinacy, price-taking assumption, Edgeworth box, welfare properties, Problem set 5 out
Pareto efficiency, and Walrasian equilibrium with transfers
L9 Arrow-Debreu economy, welfare theorems, separating hyperplanes, and Minkowski's theorem Problem set 4 due
L10 Existence of Walrasian equilibrium, Kakutani's fixed point theorem, Debreu-Gale-Kuhn-Nikaido lemma, and additional properties of general equilibrium Problem set 6 out
L11 Microfoundations, core, and core convergence Problem set 5 due and problem set 7 out
L12 General equilibrium with time and uncertainty, Jensen's inequality, and security market economy, arbitrage pricing theory, and risk-neutral probabilities
L13 Housing markets, competitive equilibrium, one-sided matching house allocation problem, serial dictatorship, two-sided matching, marriage markets, Problem set 6 due in Ses #L13 and final
existence of stable matchings, optimization, incentives, and housing markets core mechanism exam taken 2 days after Ses #L13
|
{"url":"http://ocw.mit.edu/courses/economics/14-121-microeconomic-theory-i-fall-2009/syllabus/","timestamp":"2014-04-20T11:40:34Z","content_type":null,"content_length":"36237","record_id":"<urn:uuid:44f7aabd-4959-4882-a433-69ca8f1cb677>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/di_kaz/asked","timestamp":"2014-04-16T07:55:25Z","content_type":null,"content_length":"113592","record_id":"<urn:uuid:fb10e7e6-a662-4022-81cc-6f5a4cf0be2f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jumping Jack Math
In this lesson, students prepare jumping jack data to send to officials on the planet Jumpalot. Students record how many jumping jacks they can do in ten seconds and use their knowledge of time
conversions to figure out how many jumping jacks they could complete in a minute all the way to a year if they never tired. Students then organize class data and explore mean, median, and mode and
the effects extreme values have on these measures. Students then brainstorm the advantages and disadvantages each measure offers.
Pass out the Jumping Jack Math and Jumpalot Data activity sheets. Read the introductory paragraph to students and explain that students will be developing a jumping jack data set which they will use
to discuss mean, median, and mode. Students should be familiar with or have at least been introduced to mean, median, and mode before beginning this activity to get the most out of this lesson.
Jumping Jack Math Activity Sheet
Jumpalot Data Activity Sheet
Read the introduction of the Jumping Jack Math activity sheet and have students complete the time conversion chart in Question 1. Next, students should complete Question 2. Explain that students need
to count how many jumping jacks they can complete in 10 seconds, and then write that number in the first row of their chart. An easy way to organize this is to have every other student stand up in
the room and spread out. If your room is particularly small you could have every third or fourth student stand up at a time. Have the students sitting help count for the students jumping so everyone
is engaged. Tell students that you will be the official timer, and then using a timer or a clock with a second hand, tell students, "Go!" and then "Stop!" when the time is up. Continue until all
students have collected their data.
Next, have students pair up; you may want to encourage students with similar numbers to work together Students who have completed the same number of jumping jacks will have identical charts for
Question 2. Have students complete Question 2 together. You may either allow students to use calculators the entire time, or have students complete the chart using paper and pencil and then allow
them to check their answers with a calculator. The chart goes up to one year is to give students experience with very large numbers and to help develop number sense.
During this time, circulate and have students explain the math behind the time conversions when you come to them. Some students may want to multiply by ten to get from ten seconds to a minute,
instead of multiplying by six, which would make the rest of their data chart incorrect.
Groups who finish Question 2 should move on to Questions 3 and 4 after checking with the teacher. Encourage students who find they have a vastly different number from the other students at the hour
mark to go back and check their work. Many students will correct themselves when they are collecting data in Question 3 from classmates, but you may have to point it out for some students.
After students have completed Questions 3 and 4, bring the group together and have students share their answers to Question 4. Ask students:
Why do you all seem to have different values for the mean, median, and mode?
[Because everyone used data sets with information from ten different people, rather than the whole class]
Next, explain Questions 5–8 and have students go back to their partner or group to work on completing Questions 5 through 8.
After students have completed Questions 7 and 8 bring the students together to discuss their answers. Students should say that Jumpalot School District should admit Speedy because his jumping jack
value increases the value of the mean which would mean more energy production for the school. For Question 8, students should find that the extreme values affect the mean with an extremely low value
making the mean lower and an extremely high value making the mean higher, but have little effect on the median or mode.
Summary Activity
Have students brainstorm the best times to use mean, median, or mode. To complete this, you have a few options:
• Students can complete it with the partner or group they are working with then you can have a class discussion
• You could break the class into groups and give each group a different measure to focus on. Then you could have each group write their results on a poster to share with the class. You could have a
class discussion about them in which you wrote student responses on the board, interactive whiteboard, or overhead.
1. Have students develop their own survey question that has a numerical response. For example, "How many minutes of homework do you do a night?" or "How many minutes does it take you to get to
school in the morning?" Have students go around the room and collect data from at least eight students. Have students organize the data from least to greatest, and then find the mean, median, and
mode. Have students create a small poster that contains their question, their organized data and their calculations for mean, median, and mode. Have students present their findings to the class.
2. Have students time another activity, such as Every Beat of Your Heart or Every Breath You Take, to use for data for a comparison of mean, median, and mode.
3. Give students a set of data and have them calculate the mean, median, and mode. Allow students to check their answers for mean and median using the Mean and Median tool. To use this tool numbers
must be between 0 and 100.
1. Have students complete the To Jumpalot and Beyond activity sheet. Students predict results and develop a plan to test out other activities that officials in Jumpalot could use to gain power.
To Jumpalot and Beyond Activity Sheet
2. Have students research how mean, median, and mode are used in the real world.
Questions for Students
1. Why would it be useful to know both the median and the mean for a set of data?
[It gives you more information about the data set and helps you know if there are any extreme values in the data set.]
2. Why is it useful to know the mode of a set of data?
[To see which number is most popular or most frequent; to see what the majority of responses are]
Teacher Reflection
• Were the concepts of central tendency presented too abstractly? How could you change them?
• Did you find it necessary to make adjustments while teaching the lesson? If so, what adjustments and were they effective?
• Was your lesson developmentally appropriate? If so, how could you tell? If not, what was inappropriate? What would you do to change it?
Learning Objectives
Students will:
• Use student-created data to calculate mean, median, and mode
• Practice time conversion (seconds, minutes, hours, days, weeks, year)
• Develop number sense
• Discover the effects of extreme values on the mean
• Analyze the advantages and disadvantages of using mean, median, and mode
Common Core State Standards – Practice
• CCSS.Math.Practice.MP1
Make sense of problems and persevere in solving them.
• CCSS.Math.Practice.MP4
Model with mathematics.
• CCSS.Math.Practice.MP5
Use appropriate tools strategically.
• CCSS.Math.Practice.MP7
Look for and make use of structure.
|
{"url":"http://illuminations.nctm.org/Lesson.aspx?id=3242","timestamp":"2014-04-16T18:57:04Z","content_type":null,"content_length":"75993","record_id":"<urn:uuid:c27c141f-5dd9-4bf7-9dd3-45a4887a648a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
evaluating integrals - checking my work
September 20th 2009, 07:56 AM #1
Feb 2009
evaluating integrals - checking my work
Hi I am feeling semiconfident about this set of questions but if someone could give it a once over I would appreciate it. There are 3 parts
Evaluate the following integrals
definite integral sign (I will use "S") tan u^2 dx
my answer: = tan u - u + C
S b=2;a=1 ((y+ 5y^7)/(y^3)) dy = S b=2;a=1 y^-2 + 5y^4
= ((-1/y)+ (y^5)) b=2; a=1
= (-(1/2) + 32)) - (-1 +1)
= 31.5
S b= 3pi/2; a=0 lcos xl dx = S b= 3pi/2; a= 0 lsin xl
= lsin (3pi/2)l - lsin 0l
= l sin 3pi/2 l
Hopefully I am right in my logic - and if I am not you can point me in the right direction.
Follow Math Help Forum on Facebook and Google+
|
{"url":"http://mathhelpforum.com/calculus/103249-evaluating-integrals-checking-my-work.html","timestamp":"2014-04-19T20:55:21Z","content_type":null,"content_length":"29465","record_id":"<urn:uuid:7a6baf36-35d6-4372-b999-71c2d873ab9a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum computing with recycled particles
A research team from the University of Bristol's Centre for Quantum Photonics (CQP) have brought the reality of a quantum computer one step closer by experimentally demonstrating a technique for
significantly reducing the physical resources required for quantum factoring.
The team have shown how it is possible to recycle the particles inside a quantum computer, so that quantum factoring can be achieved with only one third of the particles originally required. The
research is published in the latest issue of Nature Photonics.
Using photons as the particles, the Bristol team constructed a quantum optical circuit that recycled one of the photons to set a new record for factoring 21 with a quantum algorithm - all previous
demonstrations have factored 15.
Dr Anthony Laing, who led the project, said: "Quantum computers promise to harness the counterintuitive laws of quantum mechanics to perform calculations that are forever out of reach of conventional
classical computers. Realising such a device is one of the great technological challenges of the century."
While scientists and mathematicians are still trying to understand the full range of capabilities of quantum computers, the current driving application is the hard problem of factoring large numbers.
The best classical computers can run for the lifetime of the universe, searching for the factors of a large number, yet still be unsuccessful.
In fact, Internet cryptographic protocols are based on this exponential overhead in computational time: if a third party wants to spy on your emails, they will need to solve a hard factoring problem
first. A quantum computer, on the other hand, is capable of efficiently factoring large numbers, but the physical resources required mean that constructing such a device is highly challenging.
CQP PhD student Enrique Martín-López, who performed the experiment, said: "While it will clearly be some time before emails can be hacked with a quantum computer, this proof of principle experiment
paves the way for larger implementations of quantum algorithms by using particle recycling."
More information: Nature Photonics, 21 October 2012. doi:10.1038/nphoton.2012.259
|
{"url":"http://phys.org/news/2012-10-quantum-recycled-particles.html","timestamp":"2014-04-21T00:28:10Z","content_type":null,"content_length":"69382","record_id":"<urn:uuid:f8468c7f-11e2-451f-97fc-d0aeebcede94>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Blocks world axioms: Style 2
The sort of a variable is indicated by its first letter:
Objects (blocks and table) (O,X,Y,Z) ; situations (S); fluents (F); actions (A).
Non-logical primitives
• block(X). Predicate: X is a block.
• table. Constant: The table.
• on(X,Y). Function. The fluent of object X being on object Y.
• clear(X). Function. The fluent of object X being clear.
• puton(X,Y). Function. The action of putting X onto Y.
Situation calculus
• holds(S,F). Predicate: Fluent F holds in situation S.
• result(S,A). Function: The situation that results if action A is performed in situation S.
• poss(S,A). Predicate: Action A is possible in situation S.
Atemporal axiom
• 1. forall[O] block(O) xor O=table.
State coherence axioms (Domain constraints)
Note: The axioms below are not complete; indeed there does not exist a complete first-order axiomatization using the above primitives. However, these axioms are sufficient to do prediction.
• 2. forall[S,X] block(X) => exists^1[Y] holds(S,on(X,Y)).
(A block is always on a unique other object.)
• 3. forall[S,X,Y,Z] block(X) ^ holds(S,on(Y,X)) ^ holds(S,on(Z,X)) => Z=Y
(At most one object can be on block X in situation S.)
• 4. forall[S,X,Y] holds(S,on(X,Y)) => block(X) ^ X != Y.
(Only a block can be on another object. A block cannot be on itself.)
• 5. forall[S,X] holds(S,clear(X)) < = > block(X) ^ ~exists[Y] holds(S,on(Y,Z)).
(Definition of clear: A block is clear if there's nothing on it.)
Causal axiom
• 6. forall[S,X,Y] poss(S,puton(X,Y)) => holds(result(S,puton(X,Y)),on(X,Y))
Frame axiom
• 7. forall[S,X,Y,W,Z]
[ (W=X ^ Z=Y) V
(W=X ^ holds(S,on(W,Z)) V
holds(result(S,puton(X,Y)),on(W,Z)) < = > holds(S,on(W,Z))
The fluent "on(W,Z)" does not change as a result of the action "puton(X,Y) unless either W=X and Z=Y or W=X and Z is the object that X was on initially.
Feasibility axiom
• 8. poss(S,puton(X,Y)) < = > X !=Y ^ holds(S,clear(X)) ^ [holds(S,clear(Y)) V Y=table].
It is possible to put X onto Y iff X is clear and either Y is clear or Y is the table.
Unique names
Sample inference
Scenario: a,b,c are blocks. In the initial situation s0, a is on c; b and c are on the table. You then move a to b.
• P1: holds(s0,on(a,c)).
• P2: holds(s0,on(b,table)).
• P3: holds(s0,on(c,table)).
• P4: holds(s0,clear(a)).
• P5: holds(s0,clear(b)).
• P6: holds(s0,block(a)).
• P7: holds(s0,block(b)).
• P8: holds(s0,block(c)).
• P9: s1 = result(s0,puton(a,b)).
• P10: a != b !=c !=a.
(Note that nothing in these rules out the possibility of other blocks being elsewhere on the table.)
We wish to derive a complete characterization of the state of the world after putting a onto b. Specifically, we need to prove. holds(s1,on(a,b)).
(Asterisk indicate parts of the result to be proven.)
S1: poss(s0,puton(a,b)). (From 8, P4, P4, P10).
S2*: holds(s1,on(a,b)). (From 6, S1).
S3: forall[X] ~holds(s0,on(X,a)). (From P4, 5).
S4: forall[X] ~holds(s1,on(X,a)). (From S3, P10, 7)
S5*: holds(s1,clear(a)). (From S4, 5).
S6: ~holds(s1,on(a,c)). (From S2, P10, 2).
S7: forall[X] X !=a => ~holds(s0,on(X,c)). (From P1,2).
S8: forall[X] X !=a => ~holds(s1,on(X,c)). (From S7,7).
S9: forall[X] ~holds(s1,on(X,c)). (From S6, S8)
S10*: holds(s1,clear(c)). (From S4, 5).
S11: table !=b. (From P7, 1)
S12*: holds(s1,on(b,table)). (From P2,S11,7)
S13*: holds(s1,on(c,table)). (From P2,S11,7)
|
{"url":"http://cs.nyu.edu/courses/spring02/G22.2560-001/bw2-axioms.html","timestamp":"2014-04-17T03:53:10Z","content_type":null,"content_length":"4452","record_id":"<urn:uuid:189c5c12-eefd-414f-ad76-087d34ddc508>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Press Release
Media Contact Beth Sorensen
Office of Communications
Lectures by three internationally notable mathematicians--Bart de Smit, John Sullivan, and Yvan Saint-Aubin--will be held at Reed College on July 17, 18, and 22 in connection with a summer school at
Reed sponsored by the Mathematical Sciences Research Institute in Berkeley, California (lecture details follow). The lectures are free and open to the public; for more information visit http://
web.reed.edu/publicevents or call 503/777-7755.
THURSDAY, JULY 17
7:30 p.m., psychology auditorium
Bart de Smit, University of Leiden: "Escher and the Droste Effect"
Bart de Smit received his Ph.D. in mathematics in Berkeley in 1993. Since 1997 he has been a mathematician at the Universiteit Leiden in the Netherlands, primarily working in number theory. De Smit
will discuss Print Gallery, one of M.C. Escher's most intriguing works, which depicts a man standing in a gallery who looks at a print of a city that contains the building that he is standing in
himself. This picture contains a mysterious white hole in the middle. De Smit and mathematician Hendrik Lenstra have discerned that what Escher was trying to achieve in this work has a unique
mathematical solution concerning elliptic curves. With help from artists and computer scientists a completion of the picture was constructed at the Universiteit Leiden. The white hole turns out to
contain the entire image on a smaller scale (which in Dutch is known as the Droste effect, after the Dutch chocolate maker). De Smit will explain the mathematics behind Escher's print and the process
of making the completion, which will also be visualized with computer animations. Many more aspects of this topic can be found at http://escherdroste.math.leidenuniv.nl/ .
FRIDAY, JULY 18
7:30 p.m., psychology auditorium
John Sullivan, University of Illinois, University of Berlin: "Optimal Geometry as Art"
John Sullivan is a professor of mathematics at the Technical University of Berlin and at the University of Illinois. He received his Ph.D. from Princeton in 1990, after earlier degrees from Harvard
and Cambridge. Sullivan's research in geometry deals with finding optimal shapes for curves and surfaces in space. Examples include clusters of soap bubbles, which minimize their surface area, or
knots tied tight in rope, which minimize their length. Sullivan will show two computer-generated videos, illustrating optimal shapes for knots and a mathematical way to turn a sphere inside out
(controlled by surface bending energy). He will discuss the artistic choices that went into the making these films and will show other examples of mathematical art arising from optimal geometry,
including computer-generated sculpture.
TUESDAY, JULY 22
7:30 p.m., psychology auditorium
Yvan Saint-Aubin, University of Montreal: "Mathematics and Technology"
"Mathematics pervades technology. Or so claim mathematicians," says Yvan Saint-Aubin. He is a professor and chair at the University of Montreal department of mathematics and statistics, specializing
in theoretical and mathematical physics. Saint-Aubin is co-editor, with Luc Vinet, of Theoretical Physics at the End of the XXth Century (Springer Verlag, 2002) and Algebraic Methods in Physics: A
Symposium for the 60th Birthday of Jiri Patera and Pavel Winternitz (Springer Verlag, 2001); he has also written numerous articles in journals of mathematics and physics. In this lecture Saint-Aubin
will present two ways in which mathematics were indeed essential to the establishment of the standard of the compact disc.
Reed College, in Portland, Oregon, is an undergraduate institution of the liberal arts and sciences dedicated to sustaining the highest intellectual standards in the country. With an enrollment of
about 1,360 students, Reed ranks third in the undergraduate origins of Ph.D.s in the United States and second in the number of Rhodes scholars from a liberal arts college (31 since 1915).
This press release may also be found at http://administration.reed.edu/news/news.taf.
# # # #
|
{"url":"http://www.reed.edu/news_center/press_releases/2002-2003/500.html","timestamp":"2014-04-20T13:21:29Z","content_type":null,"content_length":"13013","record_id":"<urn:uuid:b6cc71d1-a366-49f5-8381-a14e84d5e106>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Date Intervals
This page describes a few methods for working with intervals of dates. Specifically, it address the questions of whether a date falls within an interval, the number of days that two intervals
overlap, and the how many days are in one interval, excluding those days in another interval. These formulas can be quite useful for scheduling applications, such as employee vacation schedules.
Is A Date In An Interval?
Suppose we have 3 dates -- a start date, and end date, and a test date. We can test whether the test date falls within the interval between start date and end date. In this formula, we will use three
named cells:
TDate1 for the start date, TDate2 for the end date, and TDate for the test date. This formula will return either TRUE or FALSE, indicating whether TDate falls in the interval.
For example if TDate1 is 1-Jan and TDate2 is 31-Jan , and TDate is 15-Jan , the formula will return TRUE, indicating that TDate falls in the interval.
In this formula, it does not matter whether TDate1 is earlier or later than TDate2.
Number Of Days In One Interval And Not In Another
We can also work with two date intervals, and determine the number of days that fall in one interval, and not in another. This can become complicated because of how the intervals may overlap. For
example, the main interval may complete contain the exclusion interval. Or, the exclusion interval may completely contain the main interval. Moreover, only part of the main interval may be contained
within the exclusion interval, either at the starting or the ending end of the interval. Finally, the two intervals may not overlap at all.
Below is a summary of the various interval types. The Dates values are the days we wish to count. The VDates values are the days we wish to exclude from the Dates interval. The complexity of the
formula is due to the fact that it must handle all of the interval types.
For this formula, we will have 4 named cells, as shown below:
│Name │Description │
│Date1 │The starting date of the main interval. The main interval is the dates we want to work count. │
│Date2 │The ending date of the main interval. │
│VDate1 │The starting date of the exclusion interval. The exclusion interval is the dates that we want to exclude from the count of the main interval.│
│VDate2 │The ending date of the exclusion interval. │
│NWRange│A list of holiday dates. Used in the second version of the formula, which uses the NETWORKDAYS function. │
For this formula, we require that Date1 is less than (earlier than) or equal to Date2, and that VDate1 is less than (earlier than) or equal to VDate2.
Here are some examples.
│Date1 │Date2 │VDate1│VDate2│Result│Description │
│1-Jan │31-Jan│10-Jan│20-Jan│ 20 │There are 20 days between 1-Jan and 9-Jan (9 days) and 21-Jan and 31-Jan (11 days). The 11 days between 10-Jan and 20-Jan are subtracted from the 31 days between │
│ │ │ │ │ │1-Jan and 31-Jan. In this example, the entire exclusion interval (the VDates) is included within the main interval (the Dates). │
│10-Jan│20-Jan│1-Jan │31-Jan│ 0 │Here, the entire main interval is included within the exclusion interval. There are no days between 10-Jan and 20-Jan that do fall outside the 1-Jan and 31-Jan. │
│1-Jan │15-Jan│10-Jan│20-Jan│ 9 │In this case, the ending segment of the main interval (1-Jan to 15-Jan) overlaps with the beginning segment of the exclusion interval (10-Jan to 20-Jan). There │
│ │ │ │ │ │are 9 days (1-Jan to 9-Jan) in the main interval that do not overlap with the exclusion interval. │
│10-Jan│20-Jan│1-Jan │15-Jan│ 5 │Ending segment of the main interval overlaps the exclusion interval. There are 5 days (16-Jan to 20-Jan) in the main interval that are not included in the │
│ │ │ │ │ │exclusion segement. │
Note that the dates here are inclusive. There are 10 days between 1-Jan and 10-Jan. This is one day different that what you would get from simply subtracting the dates.
The formula above does not treat weekend days differently from working days. In other words, Saturdays and Sundays are included in the calculations. If you want to count only weekdays, excluding
weekends and holidays, use the modified version below, which calls the NETWORKDAYS function to compute the number of working days in the intervals. This function adds another name ranged to the mix.
This name, NWRange, refers to a range containing a list of holidays. If you do not use holidays, you can either point this name to an empty cell, or eliminate it from the formula entirely.
The NETWORKDAYS function is part of the Analysis Tool Pack Add-In, so you must have this module installed in order to use this formula. For more information about using formulas to create the dates
of holidays, see the Holidays page.
Tangent: The reason the named cells are VDate1 and VDate2 is that I originally wrote this formula for a Vacation timekeeping application, and the V refers to "Vacation". Of course, you can name your
cells anything that works with your application, or you can simply use cell references.
Number Of Days Common To Two Intervals
The previous section worked with a logical NOT condition -- dates in one interval and NOT in another. This section describes a formula for working with the inverse of that -- the number of days that
are in BOTH of two intervals.
For this formula, we will have 4 named cells, as shown below:
│Name │Description │
│IDate1 │The starting date of the first interval. │
│IDate2 │The ending date of the first interval. │
│RDate1 │The starting date of the second interval. │
│RDate2 │The ending date of the second interval. │
│NWRange│A list of holiday dates. Used in the second version of the formula, which uses the NETWORKDAYS function. │
For this formula, we require that IDate1 is less than (earlier than) or equal to IDate2, and that RDate1 is less than (earlier than) or equal to RDate2. The formula below will return the number of
days that are in both intervals.
Here are some examples.
│IDate1│IDate2│RDate1│RDate2│Result│Description │
│1-Jan │31-Jan│10-Jan│20-Jan│ 11 │There are 11 days common to the intervals. Since the RDates are contained within the IDates, the result is the number of days between 10-Jan and 20-Jan, or 11 │
│ │ │ │ │ │days. │
│10-Jan│20-Jan│1-Jan │31-Jan│ 11 │Since this is an AND condition format, we can reverse the dates between IDates and RDates, and get the same result as above, 11 days. │
│1-Jan │15-Jan│10-Jan│20-Jan│ 6 │Here, there are 6 days common to the two intervals -- the dates 10-Jan to 15-Jan fall in both intervals. │
│1-Jan │10-Jan│15-Jan│20-Jan│ 0 │The result here is 0, because there are no dates in the IDate interval (1-Jan to 10-Jan) than fall in the RDate interval (15-Jan to 20-Jan). │
Note that the dates here are inclusive. There are 10 days between 1-Jan and 10-Jan. This is one day different that what you would get from simply subtracting the dates.
The formula above does not treat weekend days differently from working days. In other words, Saturdays and Sundays are included in the calculations. If you want to count only weekdays, excluding
weekends and holidays, use the modified version below, which calls the NETWORKDAYS function to compute the number of working days in the intervals. This function adds another name ranged to the mix.
This name, NWRange, refers to a range containing a list of holidays. If you do not use holidays, you can either point this name to an empty cell, or eliminate it from the formula entirely.
The NETWORKDAYS function is part of the Analysis Tool Pack Add-In, so you must have this module installed in order to use this formula. For more information about using formulas to create the dates
of holidays, see the Holidays page.
|
{"url":"http://www.cpearson.com/excel/DateIntervals.htm","timestamp":"2014-04-18T22:18:01Z","content_type":null,"content_length":"25231","record_id":"<urn:uuid:e624ce08-ec45-4070-8a25-26df68725560>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New Features in MATLAB 7 for Handling Large Data Sets
MATLAB 7 introduces a number of enhancements to support large data set handling. These include improvements to file access, data storage efficiency, and data processing speed, as well as support for
new 64-bit platforms.
The material and examples in this article use features of products in MathWorks Release 14 with Service Pack 1.
Large Data Set Handling Issues
Solving technical computing problems that require processing and analyzing large amounts of data puts a high demand on your computer system. Large data sets take up significant memory during
processing and can require many operations to compute a solution. It can also take a long time to access information from large data files.
Computer systems, however, have limited memory and finite CPU speed. Available resources vary by processor and operating system, the latter of which also consumes resources. For example:
• 32-bit processors and operating systems can address up to 2^32 = 4,294,967,296 = 4 GB of memory (also known as virtual address space).
• Windows XP and Windows 2000 allocate only 2 GB of this virtual memory to each process (such as MATLAB). On UNIX, the virtual memory allocated to a process is system-configurable and is typically
around 3 GB.
• The application carrying out the calculation, such as MATLAB, can require storage in addition to the user task.
The main problem when handling large amounts of data is that the memory requirements of the program can exceed that available on the platform. For example, MATLAB generates an “out of memory” error
when data requirements exceed approximately 1.7 GB on Windows XP.
The following sections describe a number of enhancements in MATLAB 7 that help address large data set handling, including increased available memory, improved file access, more efficient data storage
, and increased processing performance.
Maximizing Available Memory
New 64-bit Platforms
A 64-bit version of MATLAB is now available for Linux platforms based on AMD64 and Intel EM64T processors. A 64-bit processor provides a very large amount of available memory, up to 2^64 bytes =
18,446,744,073,709,552, 000 bytes (16 exabytes), enabling you to store a very large amount of information. For example, the Google search engine currently uses 2 petabytes of disc space. With 16
exabytes, you could fit 9,000 Googles into memory.
Platforms with 64-bit architecture solve the problem of memory limitation for handling today's large data sets, but do not address other issues such as execution and file I/O speed.
Note: In MATLAB on 64-bit platforms, the size of a single matrix is currently limited to 2^32 elements such as a square matrix of 65,000x 65,000, consuming 16 GB.
Memory Enhancements for Windows XP
MATLAB 7 increases the largest contiguous block of memory under Windows XP to approximately 1.5 GB, equivalent to 180 million double precision values.
Also, on Windows XP, MATLAB now supports the 3GB switch boot option, allocating an additional 1 GB of addressable memory to each process. This increases the total amount of data you can store in the
MATLAB workspace to approximately 2.7 GB. This is equivalent to 330 million double precision values. This additional block of memory is not contiguous with the rest of the memory MATLAB uses so you
cannot create a single array to fill this space.
Viewing Available Memory
To see what memory is available in MATLAB 7 on a Windows system, use the following command.
The example below shows the results for a 1.2-GB RAM Windows XP system with the 3-GB switch set. You can see two large memory blocks of more than 1 GB each with a total of 2.7 GB available.
Physical Memory (RAM):
In Use: 340 MB (1549f000)
Free: 938 MB (3aa4d000)
Total: 1278 MB (4feec000)
Page File (Swap space):
In Use: 236 MB (0ec78000)
Free: 986 MB (3dad9000)
Total: 1223 MB (4c751000)
Virtual Memory (Address Space):
In Use: 296 MB (1283d000)
Free: 2775 MB (ad7a3000)
Total: 3071 MB (bffe0000)
Largest Contiguous Free Blocks:
1. [at 10007000] 1546 MB (60a69000)
2. [at 7ffe1000] 1023 MB (3ffbf000)
3. [at 7c41b000] 28 MB (01c75000)
4. [at 74764000] 28 MB (01c2c000)
======= ==========
2734 MB (aae1a000)
You must install sufficient physical memory (RAM) on the computer to cover your data storage needs. Doing so minimizes paging the data to disk, which can substantially degrade performance.
For more information on maximizing the available memory in MATLAB see the Memory Management Guide.
Data Access
Text File Reading
The new textscan function enables you to access very large text files that have arbitrary format. This function is similar to textread but adds the ability to specify a file identifier so that a file
pointer can be tracked and traversed through the file. The file can therefore be read a block at a time, changing the format on each occasion.
For example, suppose we have a text file, test12_80211b.txt, which contains multiple different-sized blocks of data, each with the following format:
• Two headerlines of description
• A parameter m
• A p x m table of data
Here is how test12_80211b.txt looks:
* Mobile1
* SNR Vs test No
Num tests=19
* Mobile2
* SNR Vs test No
Num tests=20
You could use the following MATLAB commands to read it in: fid = fopen('test12_80211b.txt', 'r'); % Open text file
InputText = textscan(fid, '%s', 2, 'delimiter', '\n'); % Read header lines
HeaderLines = InputText{1} HeaderLines =
'* Mobile1'
'* SNR Vs test No'
InputText = textscan(fid, 'Num tests=%f'); % Read parameter value
NumCols=InputText{1} NumCols =
InputText=textscan(fid, '%f', 'delimiter', ','); % Read data block
format short g
Section=Data(1:5,1:5) Section =
NaN -5 -4 -3 -2
1 6.19e-007 8.63e-007 6.43e-007 1.84e-007
2 2.88e-007 4.71e-007 6.92e-007 1.43e-007
3 2.52e-007 8.11e-007 4.74e-007 8.48e-007
4 1.97e-007 1.64e-007 1.38e-007 6.17e-007
For improved data access speed, in this release of MATLAB the reading of comma-separated-value (CSV) files is an order of magnitude faster.
MAT File Compression
The save command in MATLAB 7 now compresses the data before writing the MAT file to disk. This results in smaller files for compressible (non-random) data sets and faster reading for very large data
files over a network.
Data Storage Efficiency
MATLAB 7 now provides integer and single-precision math. This new capability enables processing of integer and single-precision data in its native type, resulting in more efficient memory usage and
the ability to process larger, nondouble data sets.
For example, you can process up to 8 times as many 8-bit integer values when stored natively then if cast and stored as doubles. So on Windows XP (without the 3-GB switch), you could read in a file
of 8-bit integer values of up to 1.5 GB in size as compared to the previous limit of 180 MB when you were required to store the data as doubles. (This is a theoretical maximum and there would be no
space available to save the answers to any operations.) See the July 2004 MATLAB Digest article “Integer and Single-Precision Math in MATLAB 7” for more information.
Data Processing Performance
Improved Execution Speed
MATLAB 7 introduced a number of processing speed enhancements for faster execution of large dataset problems. These include optimized Basic Linear Algebra Subprograms (BLAS) libraries provided by the
vendors of processors used in most of the platforms MATLAB supports, including the Intel^® Math Kernel Library (MKL), the AMD Core Math Library (ACML), and the BLAS library available through the
Accelerate framework on the Macintosh. Also, the latest version of the FFTW (3.0) routines is used for maximum speed execution of FFT tasks.
The JIT Accelerator now covers all numeric data types, such as complex variables, and function calls (when called from a function), increasing the speed of more of the MATLAB language. It also
generates MMX instructions for optimized execution of integer operations. In the case of 8-bit integers, this results in execution that is up to 8 times faster than doubles.
New Large Data Set Handling Features
Other new features that support the processing of large data sets include:
• The ability to view larger numerical arrays in the array editor (up to 500,000 elements) during interactive data analysis
• Nested functions, allowing the inner function to see the workspace of the parent. This feature lets you share large data sets between functions, such as in a GUI, without having to use global
variables or pass the data by value as function parameters. In the example below, the nested function process can see the variables, such as street1, in the workspace of the parent function
functiony = percentNonzero(filename, scalevalue, thresholdvalue)
%PERCENTNONZERO Calculate the percentage of non-zero elements
% P = PERCENTNONZERO('FILENAME', SCALEVALUE, THRESHOLDVALUE)
% returns the percentage of non-zero elements in an image read
% from the file FILENAME, scaled by the value SCALEVALUE and
% thresholded at a value of THRESHOLDVALUE.
% Example:
% p=percentNonzero('street1.jpg',1.5,140);
street1 = imread(filename); % Read image from file
process(scalevalue, thresholdvalue); % Scale and threshold image
% Find percentage of non-zero elements
y = 100 * sum(street1(:))/numel(street1);
function process(scaleval, threshval)
% Scale image
street1 = street1 * scaleval;
% Threshold image to create logical array
street1 = street1 > threshval;
• The new M-Lint Code Checker reports unused variables that you can remove to minimize your code's memory usage. For more about M-Lint, see the article “Clean Up Your Code!” in the December 2004
issue of MATLAB News and Notes.
A collection of new tools and capabilities in MATLAB 7 enables you to handle larger data sets, letting you take on larger and more complex engineering and science problems and solve them in less
|
{"url":"http://www.mathworks.co.uk/company/newsletters/articles/new-features-in-matlab-7-for-handling-large-data-sets.html?s_tid=gn_loc_drop&nocookie=true","timestamp":"2014-04-24T04:51:32Z","content_type":null,"content_length":"43319","record_id":"<urn:uuid:7c4abc26-e3c2-450b-9dd6-760f5185aa64>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Plotting Lines - Boundless Open Textbook
A line graph is a type of chart which displays information as a series of data points connected by straight line segments. It is a basic type of chart common in many fields. It is similar to a
scatter plot except that the measurement points are ordered (typically by their x-axis value) and joined with straight line segments. A line chart is often used to visualize a trend in data over
intervals of time – a time series – thus the line is often drawn chronologically.
A line chart is typically drawn bordered by two perpendicular lines, called axes. The horizontal axis is called the x-axis and the vertical axis is called the y-axis. To aid visual measurement, there
may be additional lines drawn parallel either axis. If lines are drawn parallel to both axes, the resulting lattice is called a grid.
Each axis represents one of the data quantities to be plotted. Typically the y-axis represents the dependent variable and the x-axis (sometimes called the abscissa) represents the independent
variable. The chart can then be referred to as a graph of quantity one versus quantity two, plotting quantity one up the y-axis and quantity two along the x-axis.
In the experimental sciences, such as statistics, data collected from experiments are often visualized by a graph. For example, if one were to collect data on the speed of a body at certain points in
time, one could visualize the data to look like the graph in :
Data Table
A data table showing elapsed time and measured speed.
The table "visualization" is a great way of displaying exact values, but can be a poor way to understand the underlying patterns that those values represent. Understanding the process described by
the data in the table is aided by producing a graph or line chart of Speed versus Time:
Line chart
A graph of speed versus time
In statistics, charts often include an overlaid mathematical function depicting the best-fit trend of the scattered data. This layer is referred to as a best-fit layer and the graph containing this
layer is often referred to as a line graph.
It is simple to construct a "best-fit" layer consisting of a set of line segments connecting adjacent data points; however, such a "best-fit" is usually not an ideal representation of the trend of
the underlying scatter data for the following reasons:
1. It is highly improbable that the discontinuities in the slope of the best-fit would correspond exactly with the positions of the measurement values.
2. It is highly unlikely that the experimental error in the data is negligible, yet the curve falls exactly through each of the data points.
In either case, the best-fit layer can reveal trends in the data. Further, measurements such as the gradient or the area under the curve can be made visually, leading to more conclusions or results
from the data.
A true best-fit layer should depict a continuous mathematical function whose parameters are determined by using a suitable error-minimization scheme, which appropriately weights the error in the data
values. Such curve fitting functionality is often found in graphing software or spreadsheets. Best-fit curves may vary from simple linear equations to more complex quadratic, polynomial, exponential,
and periodic curves. The so-called "bell curve", or normal distribution often used in statistics, is a Gaussian function.
|
{"url":"https://www.boundless.com/statistics/visualizing-data/graphing-data/plotting-lines/","timestamp":"2014-04-21T09:44:52Z","content_type":null,"content_length":"73035","record_id":"<urn:uuid:c20a3416-06a4-4449-888c-0000762219ac>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
|
As discussed in Chapter 1, the committee is charged with three specific tasks: determining whether a statistical association exists between exposure to the herbicides used in Vietnam and health
outcomes, determining the increased risk of effects among Vietnam veterans, and determining whether a plausible biologic mechanism or other causal evidence of a given health outcome exists. This
section discusses the committee's approach to each of those tasks.
Determining Whether a Statistical Association Exists
In trying to determine whether a statistical association exists between any of the herbicides used in Vietnam or the contaminant 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and a health outcome, the
committee found that the most helpful evidence came from epidemiologic studies—investigations in which large groups of people are studied to determine the association between the occurrence of
particular diseases and exposure to the substances at issue. Epidemiologists estimate associations between an exposure and a disease in a defined population or group using measures such as relative
risk, standardized mortality ratio, or odds ratio. Those terms describe the magnitude by which the risk or rate of disease is changed in a given population. For example, if the risk in an exposed
population increases two-fold relative to an unexposed population, it can be said that the relative risk, or risk ratio, is 2.0. Similarly, if the odds of disease in one population are 1:20 and in
another are 1:100, then the odds ratio is 5.0. Sometimes the use of terms such as relative risk, odds ratio, and estimate of relative risk is inconsistent, for instance when authors refer to an odds
ratio as a relative risk. In this report relative risk refers to the results of cohort studies and odds ratio (an estimate of relative risk) refers to the results of case–control studies. An
estimated relative risk greater than 1 could indicate a positive or direct association (that is, a harmful association), whereas values between zero and 1 could indicate a negative or inverse
association (that is, a protective association). A “statistically significant” difference is one that, under the assumptions made in the study and the laws of probability, would be unlikely to occur
if there were no true difference and no biases.
Determining whether an observed association between an exposure and a health outcome is “real” requires additional scrutiny because there may be alternative explanations for the observed association.
Those explanations include error in the design, conduct, or analysis of the investigation; bias, a systematic tendency to distort the measure of association so that it may not represent the true
relation between exposure and outcome; confounding, distortion of the measure of association because of failure to recognize or account for another factor related to both exposure and outcome; and
chance, the effect of random variation, which produces spurious associations that can, with a known probability, sometimes depart widely from the true relation. In deciding whether an association
|
{"url":"http://books.nap.edu/openbook.php?record_id=10603&page=22","timestamp":"2014-04-18T21:29:40Z","content_type":null,"content_length":"35083","record_id":"<urn:uuid:086344ed-9be8-4707-965d-427702c32343>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Question about Ring Homomorphism
April 17th 2011, 08:28 AM #1
Question about Ring Homomorphism
Hello I just had a quick question about ring homomorphisms.
Is it possible to have a ring isomorphism from $R$ to F, where F is a field but $R$ is not a field?
edit: deleted post.
hi slevvio!!
f: R----> F
f(x)=0 for all x in R.
this is trivial homomorphism i guess. Is this acceptable to you??
Thanks but that is not a ring isomorphism!
if φ is onto, R has to be a field, since then R = φ^-1(F).
if φ is not onto, and φ(R) is just a subring of F, then sure, consider the identity homomorphism which includes Z in Q.
Thanks guys. I was just confused because in my notes a ring homomorphism was defined from a ring to a field, and then after this was proved to be a ring isomorphism, he showed that the initial
ring was actually a field. But this should arise from the fact we have a ring isomorphism. I would type out the specific example but Latex is borked
April 17th 2011, 08:35 AM #2
April 17th 2011, 08:50 AM #3
April 17th 2011, 08:57 AM #4
MHF Contributor
Mar 2011
April 17th 2011, 09:08 AM #5
April 17th 2011, 09:14 AM #6
MHF Contributor
Mar 2011
April 17th 2011, 07:22 PM #7
April 19th 2011, 03:19 AM #8
|
{"url":"http://mathhelpforum.com/advanced-algebra/177911-question-about-ring-homomorphism.html","timestamp":"2014-04-18T08:50:58Z","content_type":null,"content_length":"51120","record_id":"<urn:uuid:af334b13-6b87-4512-af75-b59759685f19>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"Implicaciones de la manipulaciónde la correspondencia de Lindahl: un ejemplo", joint with J.V.LLinares, V.Romero andT.Rubio, Revista Española de Economía, 7,2,(1990)
In this paper we examine the consequencesof the manipulation of Lindahl equilibrium by means of an example.It will be proven that it is likely that some agent has incentiveto be sincere. Moreover,
this equilibrium favors the agent withless initial endowments, which implies that manipulation has aredistributing effect.
"On the Generic Impossibility ofTruthful Behavior: a Simple Approach",joint with L. Corchón, Economic Theory, 6,2, 365-371(1995).
We provide an elementary proofshowing how in economies with an arbitrary number of public goodsan utility functions quasi-linear in money, any efficient andindividually rational mechanism is not
strategy-proof for anyeconomy satisfying a mild regularity requirement.
"Identical Preferences Lower BoundSolution and Consistency in Economies with Indivisible Goods", Social Choice and Welfare, 13,1,113-126 (1996).
We consider the problem of allocatinga finite set of indivisible good among a group of agents, andwe study a solution, called the Identical Preferences Lower Boundsolution, in the presence of
consistency properties. This solutionis not consistent. we prove that its maximal consistent subsolutionis the No-envy solution. Our main result is that the minimal consistentextension of the
intersection of the Identical Preferences Lowerbound solution with the Pareto solution is the Pareto solution.This result remains true in the restricted domain when all theindivisible goods are
identical, but not when there is a uniqueindivisible good.
"Population Monotonicity in a GeneralModel with Indivisible Goods",Economic Letters, 50, 91-97 (1996)
We show that, in a model withindivisible goods where each agent can receive more than one indivisiblegood, there are no population monotonic selections from the Paretosolution. However, if
substitutability is imposed on preferences,the Shapley solution satisfies population monotonicity.
"Population Monotonicity in Economieswith one Indivisible Good",Mathematical Social Sciences, 32,2, 125-138 (1996)
In this paper we present a Pareto-efficientsubsolution of the Identical Preferences Lower Bound solutionsatisfying population monotonicity in economies with one indivisiblegood and where preferences
are not necessarily quasi-linear. Sucha solution is a generalization of the Shapley solution, whichsatisfies the above properties in quasi-linear economies.
"Fair Allocation in a General Modelwith Indivisible Goods",Review of Economic Design, 3, 195-213 (1998).
In this paper we study the problemof fair allocation in economies with indivisible goods, droppingthe usual restriction that one agent receives at most one indivisiblegood. We show that most of the
results obtained in the literaturedo not hold when the aforementioned restriction is dropped.
"Buying Several IndivisibleGoods", joint withM. Quinzii and J.A. Silva, Mathematical Social Sciences,37, 1-23 (1999).
This paper studies economies inwhich agents exchange indivisible good and money. The indivisiblegoods are differentiated and agents have potential use for allof them. we assume that agents have
quasi-linear utilities inmoney, have sufficient money endowments to afford any group ofobjects priced below their reservation values, have reservationvalues which are submodular and satisfy the
Cardinality Condition.This Cardinality Condition requires that for each agent the marginalutility of an object depends only on the number of objects towhich it is added, not on their characteristics.
Under these assumptions,we show that the set of competitive equilibrium prices is a nonempty lattice and that, in any equilibrium, the price of an objectis between the social value of the object and
its value in itssecond best use.
"Manipulation Games in Economieswith Indivisible Goods",mimeo (1997).
In the first part of the paperwe study the strategic aspects of the Non-Envy solution for theproblem of allocating a finite set of indivisible goods amonga group of agents when monetary compensations
are possible and,each agent, receives, at most, one indivisible good. In this contextwe prove that the set of equilibrium allocations of any directrevelation game associated with a subsolution of the
No-Envy solutioncoincides with the set of envy-free allocations. That is, undermanipulation all the subsolutions of the No-Envy solution areequivalent. In the second part of the paper, the same
problemis addressed, but now, we allow each agent to receive more thanone indivisible good. In this situation the result is sightlydifferent from the above. We prove that any equal income
walrasianallocation can be supported by an equilibrium of any direct revelationgame associated with subsolutions of the No-envy solution.
"Taxation, Altruism and Subsidiesfro Higher Education",joint with Iñigo Iturbe-Ormaetxe, mimeo (1998).
The financing of higher educationthrough public spending imposes a transfer of resources from taxpayers(being or not users of the education services) to students andtheir parents. Moreover, most of
the students come from the middleand upper income groups and then they are the chief recipientsof that transfer of purchasing power. We provide a simple explanationof this phenomenon. We know that
those individuals who attendhigher education will earn a higher level of income in the futureand thus they will pay more taxes. Then people whose childrendo not attend higher education will agree to
help pay the costof education provided taxes are high enough to imply that in thefuture there will be enough redistribution in favor of their ownchildren.
|
{"url":"http://pareto.uab.es/cbevia/abstracts.html","timestamp":"2014-04-21T04:52:07Z","content_type":null,"content_length":"8896","record_id":"<urn:uuid:fd74e897-61f1-45f4-b550-8b8bbfaada75>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
|
locus in co-ordinate geometry
Re: locus in co-ordinate geometry
I assumed he meant all the line segments from (6,-8) to the x axis. A few sample points are shown with the red point being the midpoint.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=19695","timestamp":"2014-04-20T11:38:53Z","content_type":null,"content_length":"15111","record_id":"<urn:uuid:3d7d5dec-3c35-480c-9e4c-9b60f68f1eb0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probability with a "known"
October 23rd 2013, 05:22 PM #1
MHF Contributor
Dec 2007
Ottawa, Canada
Probability with a "known"
A box contains 4 red apples, 4 green apples and 8 peaches.
3 are picked at random.
It is known that at least 1 of the 3 is an apple.
What is probability that at least 2 are red apples?
Re: Probability with a "known"
Hey Wilmer.
This is a conditional probability. (Hint: P(A|B) = P(A and B)/P(B)).
Re: Probability with a "known"
Thanks Chiro.
I'm trying to re-learn (completely forgot!) conditional probability.
16^3 = 4096
At least 1 (of the 3 picked) is an apple: 3584
At least 2 (of the 3 picked) are red apples: 640
P(A and B) = 3584/4096 + 640/4096 ? That's sure not correct...
Where am I goofing?
Re: Probability with a "known"
Couple of things:
1. You are assuming that the 3 items are picked with replacement, meaning that after you pick a fruit you put it back in the bin and then select another. This makes each selection independent of
the others. But I intepret the problem to mean that the fruit is selected without replacement, so the bin ends up with 13 fruits in it. In this case there are 16*15*14 = 3360 possible
combinations of selections.
2. In chiro's post event A is "selection of at least 2 red apples" and event B is "selection of at least 1 red apple" So P(A and B) is the probability of selecting at least 2 red apples and at
least 1 red apple, which is the same as simply the probability of selecting at least 2 red apples.
P(at least 2) = P(2) + P(3)
P(2) = (4/16) x (3/15) x (12/14) x C(3,1), where C(3,1) is the combination of one item out of 3.
P(3) = (4/16) x (3/15) x (2/14)
P(at least one red apple) = P(1)+P(2)+P(3). You already have values for P(2) and P(3), so now you just need to determine P(1). Then you can apply P(A|B) = P(A and B)/P(B). Can you finish it from
3. If the fruits are replaced after each selection then the number of ways to get 3 reds is 4^3 = 64. not 640, and the number of ways to get 2 reds is 4^2x12x3 = 576. So the probability of
selecting at least two reds is (64+576)/4096.
Last edited by ebaines; October 24th 2013 at 08:24 AM.
Re: Probability with a "known"
Thanks for replying, Mr. Baines.
> 1. You are assuming that the 3 items are picked with replacement,
YUK...my face just turned red! Without replacement, of course...
> 2. In chiro's post event A is "selection of at least 2 red apples" and event B is "selection of at least 1 red apple"
event A ok but not event B: should be "selection of at least 1 apple" (red or green)
I'd appreciate a full solution (which you just about gave me anyhow)!
Easier for me to "learn" that way.
Btw, this is not homework (I'm 72!), but a re-learning process.
Re: Probability with a "known"
In ebaines post, he writes P(1), P(2), P(3), but doesn't explain what those are. By P(1), he means the probability of picking exactly one red apple. P(2) would be the probability of picking
exactly two red apples. P(3) is the probability of picking exactly three red apples. And he did not mention it, but we should add P(0) is the probability of picking exactly 0 red apples.
So, why are we discussing P(0), P(1), P(2), and P(3)? It is because these are probabilities of "disjoint" events. That means that if you pick exactly zero apples, it is not possible that you also
picked exactly one apple. The events do not overlap. No matter what three fruit you pick, the outcome will be in exactly one of the four events. These are also called "mutually exclusive" events.
So, their probabilities are additive. If you want the probability of picking zero or two apples, it would be P(0) + P(2). If you want the probability of picking at least 2 apples, like ebaines
said, that would be P(2) + P(3). Lastly, if you pick three fruits, there is 100% chance that you picked three fruits. So, the sum of the four probabilities must equal 1. In other words, P(0)+P(1)
+P(2)+P(3) = 1.
Re: Probability with a "known"
Well, the probability of at least 1 red apple would be P(at least 1) = P(1) + P(2) + P(3), as ebaines said.
Since P(0)+P(1)+P(2)+P(3) = 1, we have P(0) + P(at least 1) = 1, so P(at least 1) = 1-P(0).
Let's calculate P(0). That means that you do not pick any red apples. So, for your first pick, you have 12/16 fruits to choose. The second pick, you have 11/15, and the third, you have 10/14. So,
$P(0) = \dfrac{12\cdot 11\cdot 10}{16\cdot 15\cdot 14}$ and $P\left(\text{at least }1\right) = 1-\dfrac{12\cdot 11\cdot 10}{16\cdot 15\cdot 14} = \dfrac{17}{28}$
Next, another way to calculate the probability of at least 2 red apples is to take the probability of at least 1 red apple and subtract the probability of exactly 1 red apple. Let's look at the
outcomes that yield exactly 1 red apple:
We can choose a red apple, then two fruit that are not red apples.
We can choose a fruit that is not a red apple, then a red apple, then a fruit that is not a red apple.
We can choose two fruit that are not red apples, then a red apple.
These are disjoint outcomes. It is not possible that your first choice is simultaneously a red apple and not a red apple. But, if you wind up with one red apple, it had to be one of your three
choices. So, the sum of the three probabilities must equal the probability of picking exactly one red apple (again, we can only add probabilities if they are disjoint, and we only know they add
up to the whole probability if they cover every possible outcome).
So, the probability for picking red, non-red, non-red:
The probability for picking non-red, red, non-red:
The probability for picking non-red, non-red, red:
Then, the probability for picking exactly 1 red apple:
\begin{align*}\dfrac{4}{16} \dfrac{12}{15} \dfrac{11}{14}+\dfrac{12}{16} \dfrac{4}{15} \dfrac{11}{14}+\dfrac{12}{16} \dfrac{11}{15} \dfrac{4}{14} & = \dfrac{4\cdot 12\cdot 11 + 12\cdot 4\cdot 11
+ 12\cdot 11\cdot 4}{16\cdot 15\cdot 14}\\ & = 3\dfrac{4\cdot 12\cdot 11}{16\cdot 15\cdot 14} \\ & = \dfrac{33}{70}\end{align*}
The 3 you see before the fraction on the middle row is the same as C(3,1) that ebaines used. It means we want one red apple, but we don't care which specific pick of the three got it. There are
three picks to choose from where we wind up with the red apple, so there are three times the number of ways of getting the red apple on the first pick, then no more red apples.
This means, the probability of at least 2 red apples would be
$P\left(\text{at least }2\right) = P(2)+P(3) = P\left(\text{at least }1\right) - P(1) = \dfrac{17}{28}-\dfrac{33}{70} = \dfrac{19}{140}$
So, the conditional probability would be:
$\dfrac{P\left(\text{at least }2\right)}{P\left(\text{at least }1\right)} = \dfrac{\left(\dfrac{19}{140}\right)}{\left(\dfrac{ 17}{28}\right)} = \dfrac{19}{85}$
Re: Probability with a "known"
Thanks, Slip....
Once more: it's not the probability of "at least 1 red apple",
but the probability of "at least 1 apple" (red or green); ********************
here's my initial post:
A box contains 4 red apples, 4 green apples and 8 peaches.
3 are picked at random.
It is known that at least 1 of the 3 is an apple. ********************
What is probability that at least 2 are red apples?
Last edited by Wilmer; October 24th 2013 at 09:51 AM.
Re: Probability with a "known"
Thanks, Slip....
Once more: it's not the probability of "at least 1 red apple",
but the probability of "at least 1 apple" (red or green); ********************
here's my initial post:
A box contains 4 red apples, 4 green apples and 8 peaches.
3 are picked at random.
It is known that at least 1 of the 3 is an apple. ********************
What is probability that at least 2 are red apples?
I see. Sorry about that.
Ok, so again, if at least two of the chosen fruit are red apples, then you are guaranteed that at least one of the chosen fruit is an apple (red or green), so $P(A\text{ and }B)$ is still $P\left
(\text{at least }2\text{ are red apples}\right) = \dfrac{19}{140}$ as we calculated before.
Now, we just need the probability that at least one of the chosen fruit is an apple. What are the possible outcomes? We can pick so that we wind up with exactly 0, 1, 2, or 3 apples (red or
green). It is not possible that we wind up with exactly 0 apples and exactly 1 apple simultaneously, so these are disjoint events. Also, if we pick three fruits, the only possible outcomes are
exactly 0, 1, 2, or 3 apples, so the sum of the probabilities of each disjoint event must equal 1. The probability of picking at least one apple is:
$1 - P\left(\text{only picked peaches}\right) = 1 - \dfrac{8}{16}\cdot \dfrac{7}{15}\cdot \dfrac{6}{14} = \dfrac{9}{10}$.
So, again, to find the conditional probability by dividing the two:
$P(A\text{ if we know }B) = \dfrac{P(A\text{ and }B)}{P(B)} = \dfrac{\left(\dfrac{19}{140}\right)}{\left(\dfrac{ 9}{10}\right)} = \dfrac{19}{126}$
Re: Probability with a "known"
Thanks loads, Slip...
Was confused, as I couldn't get close to that from simulation program I wrote;
then I realised that if there is no apples (red or green) picked, then this "attempt"
is skipped and a new attempt made; see line 130:
100 1-4 = red apples, 5-8 = green apples, 9-16 = peaches
110 Do 1000000 times
120 Pick at random (from 1 to 16), no replacement, a b and c
130 If a>8 and b>8 and c>8 then goto 120
140 If at least 2 of a,b,c are < 5 then t = t + 1
150 End loop
160 Print t
Ran it 5 times; t = 150870, 151364, 150930, 150,747 and 150562
And 19/126 = ~150794
So all's fine! Thanks again.
Re: Probability with a "known"
Slip, what do you get if we change problem slightly to:
A box contains 8 red apples, 8 green apples and 16 peaches.
5 fruits are picked at random.
It is known that at least 1 of the 5 is an apple (red or green).
What is probability that at least 2 are red apples?
Trying to simulate...thanks.
Re: Probability with a "known"
You can use the same technique as before:
P(At least 2 red | at least 1 red or green) = P( at least 2 red AND at least 1 red or green)/P(at least one red or green).
As before, the numerator can be simplified to P(at least 2 red), since if there are 2 red then you know that there is at least 1 red or green.
P(at least 2 red) = P(2 red) + P(3 red)+P(4 red) + P(5 red). Note that it also equals 1 - (P(0 red)+P(1 red)). We'll use the latter form as it's less work:
P(0 red) = Permut(24,5)/Permut(32,5), where "Permut(A,B)" means the permutation of A things B at a time:
$P(0\ red)= \frac {24 \times 23 \times 22 \times 21 \times 20}{32 \times 31 \times 30 \times 29 \times 28} = 0.211$
P(1 red) = Permut(24,5)xPermut(8,1)/Permut(32,5) x C(5,1):
$P(1\ red)= \frac {24 \times 23 \times 22 \times 21 \times 8} {32 \times 31 \times 30 \times 29 \times 28} \times 5 = 0.422$
$P(at\ least\ 2\ red) = 1 - 0.211-0.422 = 0.367$
Now for the denominator: P(at least 1 red or green) = 1 - P( no red or green)
$1 - P(no\ red\ or\ green) = 1-\frac {16 \times 15 \times 14 \times 13 \times 12}{32 \times 31 \times 30 \times 29 \times 28} = 0.978$
So - the probability that there are at least 2 red apples given that there is at least 1 red or green apple is:
$\frac {0.367}{0.978} = 0.375.$
Last edited by ebaines; October 25th 2013 at 05:22 AM.
Re: Probability with a "known"
Another way to do this is with combinations (since you don't care about the order that you pick the fruit). You will get the same answer either way. With combinations, you can just divide the
number of outcomes. So, if we figure out the number of outcomes with (at least two red AND at least one red or green) divided by the number of outcomes with at least one apple (red or green),
that will give us the same probability.
Just as ebaines did, we want the number of outcomes with exactly 0 red and exactly 1 red, then take the compliment (total number of outcomes minus outcomes that give us only 0 or 1 red). For
exactly 0 red, that is $\binom{24}{5}$ outcomes.
For exactly 1 red, that is $\binom{24}{4}\binom{8}{1}$ outcomes (note: since we are doing combinations, we don't care which of the apples is red). So, there are $\binom{32}{5} - \binom{24}{5} - \
binom{24}{4}\binom{8}{1}$ outcomes with at least 2 red.
For at least one apple (red or green), we want the number of outcomes with 0 apples and take the compliment: $\binom{32}{5} - \binom{16}{5}$.
So, the probability that you get at least 2 red apples given that you pick at least one apple (red or green):
$\dfrac{\binom{32}{5} - \binom{24}{5} - \binom{24}{4}\binom{8}{1}}{\binom{32}{5} - \binom{16}{5}} = \dfrac{73,864}{197,008} = \dfrac{1319}{3518} \approx 0.375$
Last edited by SlipEternal; October 25th 2013 at 07:02 AM.
Re: Probability with a "known"
Thank you very much Slip and Mr.Baines; confirmed my simulation; 5 runs @ 1 million each:
1: 374,846
2: 375,431
3: 375,019
4: 374,897
5: 374,961
...and 1319/3518 = ~374,929
More important is the help you've both given me in "remembering" this probability style,
by showing the steps...thanks again.
I'll recommend you both for a raise
October 23rd 2013, 05:51 PM #2
MHF Contributor
Sep 2012
October 23rd 2013, 11:40 PM #3
MHF Contributor
Dec 2007
Ottawa, Canada
October 24th 2013, 08:16 AM #4
October 24th 2013, 08:45 AM #5
MHF Contributor
Dec 2007
Ottawa, Canada
October 24th 2013, 08:49 AM #6
MHF Contributor
Nov 2010
October 24th 2013, 09:14 AM #7
MHF Contributor
Nov 2010
October 24th 2013, 09:48 AM #8
MHF Contributor
Dec 2007
Ottawa, Canada
October 24th 2013, 10:23 AM #9
MHF Contributor
Nov 2010
October 24th 2013, 01:19 PM #10
MHF Contributor
Dec 2007
Ottawa, Canada
October 24th 2013, 01:55 PM #11
MHF Contributor
Dec 2007
Ottawa, Canada
October 25th 2013, 05:08 AM #12
October 25th 2013, 06:55 AM #13
MHF Contributor
Nov 2010
October 25th 2013, 09:44 AM #14
MHF Contributor
Dec 2007
Ottawa, Canada
|
{"url":"http://mathhelpforum.com/statistics/223410-probability-known.html","timestamp":"2014-04-19T17:41:45Z","content_type":null,"content_length":"85601","record_id":"<urn:uuid:8b8082bf-1229-4cca-a537-b1d98ce3057c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Non-parametric Residual Variance Estimation in Supervised Learning
Elia Liitiäinen, Amaury Lendasse and Francesco Corona
In: 9th InternationalWork-Conference on Artificial Neural Networks Lecture Notes in Computer Science , 4507/2007 . (2007) Springer-Verlag , Berlin Heidelberg , . ISBN 978-3-540-73006-4
The residual variance estimation problem is well-known in statistics and machine learning with many applications for example in the field of nonlinear modelling. In this paper, we show that the
problem can be formulated in a general supervised learning context. Emphasis is on two widely used non-parametric techniques known as the Delta test and the Gamma test. Under some regularity
assumptions, a novel proof of convergence of the two estimators is formulated and subsequently verified and compared on two meaningful study cases.
PDF - PASCAL Members only - Requires Adobe Acrobat Reader or other PDF viewer.
|
{"url":"http://eprints.pascal-network.org/archive/00003735/","timestamp":"2014-04-19T04:21:48Z","content_type":null,"content_length":"7792","record_id":"<urn:uuid:e38c5e38-52ff-4953-a058-8f141ac609a8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Material Results
Search Materials
Return to What's new in MERLOT
Get more information on the MERLOT Editors' Choice Award in a new window.
Get more information on the MERLOT Classics Award in a new window.
Get more information on the JOLT Award in a new window.
Go to Search Page
View material results for all categories
Click here to go to your profile
Click to expand login or register menu
Select to go to your workspace
Click here to go to your Dashboard Report
Click here to go to your Content Builder
Click here to log out
Search Terms
Enter username
Enter password
Please give at least one keyword of at least three characters for the search to work with. The more keywords you give, the better the search will work for you.
select OK to launch help window
cancel help
You are now going to MERLOT Help. It will open a new window.
By means of four spin buttons, the user specifies the directrix, the focus and the eccentricity for a conic section; an...
see more
Material Type:
Thomas Mitchell
Date Added:
Feb 24, 2005
Date Modified:
Nov 20, 2011
Illustrates Napoleon's theorem: If you take any triangle ABC and draw equilateral triangles on each side, then join up the...
see more
Material Type:
Saltire Software
Date Added:
Dec 19, 1997
Date Modified:
Jul 24, 2002
Illustrates the relationship between perpendicular bisectors of a right angled triangle.
Material Type:
Saltire Software
Date Added:
Dec 19, 1997
Date Modified:
Sep 13, 2002
A geometry question relating to analysys
Material Type:
Drill and Practice
Mohammad Mazaheri
Date Added:
May 09, 2010
Date Modified:
Jan 04, 2014
Explore the points of quadrangle with this applet!
Material Type:
Ben Cheng
Date Added:
Feb 25, 1998
Date Modified:
Nov 21, 2011
In this configuration, the quadrilateral EFGE is formed by the centers of the circles which are tangent to 2 sides and pass...
see more
Material Type:
Saltire Software
Date Added:
Dec 19, 1997
Date Modified:
Jul 24, 2002
Reuleaux Triangle is an example of a constant width shape other than the circle. The applet illustrates its ability to rotate...
see more
Material Type:
Alexander Bogomolny
Date Added:
Mar 16, 1998
Date Modified:
Mar 10, 2009
Illustrates geometrical relationships between the sides of related triangles.
Material Type:
Saltire Software
Date Added:
Dec 19, 1997
Date Modified:
Jan 03, 2003
Illustrates a theorem using the Pythagoras diagram but yielding a surprising triangle.
Material Type:
Saltire Software
Date Added:
Dec 19, 1997
Date Modified:
Jan 03, 2003
Lets you experiment with the Cheng-Pleijel point of a quadrangle.
Material Type:
Ben Cheng
Date Added:
May 28, 1997
Date Modified:
Jul 24, 2002
|
{"url":"http://www.merlot.org/merlot/materials.htm?nosearchlanguage=&pageSize=&page=5&category=2569&sort.property=overallRating","timestamp":"2014-04-16T06:50:51Z","content_type":null,"content_length":"166703","record_id":"<urn:uuid:55e69b62-4e0e-4c2e-9a2b-fbbb7a58d94e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-dev] Numerical Recipes (was tagging 0.7rc1 this weekend?)
Bruce Southey bsouthey@gmail....
Thu Dec 4 08:19:32 CST 2008
josef.pktd@gmail.com wrote:
> I looked at ttest_rel and ttest_ind, both are simple t-tests for
> equality of two samples, either in mean or of a twice ("related")
> observed random variable.
> step 1: calculate difference between samples (means)
> step 2: divide by appropriate standard deviation measure
> step 3: look up distribution to report p-value
> The doc strings don't explain well what the test is useful for, but it
> looks a straight forward implementation of the statistical formulas.
> The only messy part is to make it more general so that it also applies
> for higher dimensional arrays, and that looks all numpy to me.
> ttest_ind is identical to
> http://en.wikipedia.org/wiki/T-test#Independent_two-sample_t-test
> ttest_rel is described in http://en.wikipedia.org/wiki/T-test#Dependent_t-test
> So, if there is any resemblance left to NR (which I don't know) then
> it's purely because it calculates the same thing. I think the two
> Wikipedia references are a lot better than NR, since Wikipedia also
> explains a bit the background and usage.
> The only part that I wouldn't have immediately implemented, is
> handling of zerodivision problems when the variance (of both samples
> or of difference of samples) is zero, which is an unusual boundary
> case.
> Josef
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
Could you also please replace the use of 'betai' with the appropriate
call to the t-distribution to the the probabilities?
Numerical Recipes uses betai and also it is more understandable to use
the actual t-distribution.
More information about the Scipy-dev mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2008-December/010471.html","timestamp":"2014-04-16T07:43:34Z","content_type":null,"content_length":"5129","record_id":"<urn:uuid:a9ad3b08-bb58-4ec3-a1d6-470118260a7b>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Passage to Abstract Mathematics
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/passage-abstract-mathematics-1st-watkins/bk/9780321738639","timestamp":"2014-04-16T08:10:42Z","content_type":null,"content_length":"29206","record_id":"<urn:uuid:3886ff52-a552-40a2-9127-e32f909c03ee>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
|
For loop using the modulo
My brain doesn't seem to be working at all today.
Here's my problem. Imagine having an array of integers, but i want to add up these integers from a pair of indices.
So, let's imagine we have an integer array containing 10 items, and the two indices are given by 3 and 8. To do the sum, we would have:
2 int a[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
3 int ind1 = 3, ind2 = 8;
4 int total = 0;
6 for (int i = ind1; i <= ind2; i++)
7 total += a[i];
Of course, this is simple. However, what happens if i want to go from indices of 8 to 3?
That is i want to calculate 9+10+1+2+3+4........
I could write two for loops to handle this situation, but i'm sure there's a way to do this in one loop with the help of the modulo. I just can't remember.
Topic archived. No new replies allowed.
|
{"url":"http://www.cplusplus.com/forum/beginner/78182/","timestamp":"2014-04-20T18:45:04Z","content_type":null,"content_length":"13921","record_id":"<urn:uuid:fb9e608c-8691-472a-ad40-42352e135df6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matches for:
Return to List
Group Representations: Cohomology, Group Actions and Topology
Edited by:
Alejandro Adem
University of Wisconsin, Madison, WI
Jon Carlson
University of Georgia, Athens, GA
Stewart Priddy
Northwestern University, Evanston, IL
, and
Peter Webb
University of Minnesota, Minneapolis, MN
             
Proceedings of Symposia This volume combines contributions in topology and representation theory that reflect the increasingly vigorous interactions between these areas. Topics such as group
in Pure Mathematics theory, homotopy theory, cohomology of groups, and modular representations are covered. All papers have been carefully refereed and offer lasting value.
1998; 532 pp; hardcover Features:
Volume: 63 • state of the art contributions from this active, interdisciplinary branch of mathematical research
• excellent, high-level survey papers by experts in the field
ISBN-10: 0-8218-0658-0 • a unique combination of topics in algebra and topology
• a compilation of open problems
978-0-8218-0658-6 Graduate students, research mathematicians and physicists interested in group theory and generalizations.
List Price: US$120 • T. Akita -- On the cohomology of Coxeter groups and their finite parabolic subgroups. II
• R. Boltje -- Linear source modules and trivial source modules
Member Price: US$96 • S. Bouc -- Résolutions de foncteurs de Mackey
• C. Broto and A. Viruel -- Homotopy uniqueness of \(BPU(3)\)
Order Code: PSPUM/63 • J. Brundan -- Lowering operators for \(GL(n)\) and quantum \(GL(n)\)
• J. F. Carlson and W. W. Wheeler -- Homomorphisms in higher complexity quotient categories
• F. R. Cohen and R. Levi -- On the homotopy theory of \(p\)-completed classifying spaces
• M. D. Crossley -- \(H^*V\) is of bounded type over \(\mathcal{A}(p)\)
• J. Dietz and T. Ratliff -- Classifying spaces of central group extensions
• W. G. Dwyer -- Sharp homology decompositions for classifying spaces of finite groups
• P. Fong and R. J. Milgram -- On the geometry and cohomology of the simple groups \(G_2(q)\) and \(^3D_4(q)\)
• D. J. Green and P. A. Minh -- Transfer and Chern classes for extraspecial \(p\)-groups
• R. Grieder -- The infinite order of the symplectic classes in the cohomology of the stable mapping class group
• H.-W. Henn -- A variant of the proof of the Landweber Stong conjecture
• H.-W. Henn -- Unstable modules over the Steenrod algebra and cohomology of groups
• L. G. Lewis, Jr. -- The category of Mackey functors for a compact Lie group
• Z. Lin -- Comparison of extensions of modules for algebraic groups and quantum groups
• F. Luca -- The defect of the completion of a Green functor
• A. Mathas -- Simple modules of Ariki-Koike algebras
• R. J. Milgram -- On the geometry and cohomology of the simple groups \(G_2(q)\) and \(^3D_4(q)\): II
• R. J. Milgram -- On the relation between simple groups and homotopy theory
• D. K. Nakano -- Varieties for \(G_rT\)-modules
• F. Oda -- On defect groups of the Mackey algebras for finite groups
• S. Priddy -- Applications of stable classifying space theory to group cohomology
• Yu. B. Rudyak -- The spectra \(k\) and \(kO\) are not Thom spectra
• N. S. N. Sastry and P. Sin -- The code of a regular generalized quadrangle of even order
• M. Schaps, D. Shapira, and O. Shlomo -- Quivers of blocks with normal defect group
• T. Watanabe -- Cohomology of a homogeneous space \(E_6/T^1 \cdot SU(6)\)
• D. Benson -- Problem session, Seattle 1996
|
{"url":"http://ams.org/bookstore?fn=20&arg1=pspumseries&ikey=PSPUM-63","timestamp":"2014-04-18T17:05:34Z","content_type":null,"content_length":"17504","record_id":"<urn:uuid:66342b47-9af2-4c80-a420-7e1b55cd8fad>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MIT Scheme
MIT Scheme 7.7.90.+
From the MIT Scheme web page:
Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman. It was designed to have an
exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find
convenient expression in Scheme.
MIT/GNU Scheme is a complete programming environment that runs on many unix platforms, as well as Microsoft Windows and IBM OS/2. It features a rich runtime library, a powerful source-level
debugger, a native-code compiler, and an integrated Emacs-like editor.
To reproduce my results on Ubuntu Gutsy Gibbon, install MIT Scheme with
sudo apt-get install mit-scheme
and run the interpreter with
MIT Scheme makes the following choices:
|
{"url":"http://web.mit.edu/~axch/www/scheme/choices/mit-scheme.html","timestamp":"2014-04-19T19:57:22Z","content_type":null,"content_length":"4705","record_id":"<urn:uuid:56b20782-ab0a-4fdf-93ee-f67bf8991467>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Determinant of Block Matrix
September 17th 2011, 07:47 AM #1
Sep 2011
Determinant of Block Matrix
Let $M=\begin{pmatrix} A & B \\ O & C\end{pmatrix}$, where A and C are square matrix. Show that det(M)=det(A)det(C)
I have the following idea but i dont have a concrete proof. We can perform elementary row operations on the matrix M. Eventually A and C will be a upper triangular matrix. Hence det(A) and det(C)
will just be the product of their diagonal entries. On the other hand, M will also become an upper triangular matrix after the elementary row operations. Therefore, det(M)=product of its diagonal
entries=product of diagonal entries of A * product of diagonal entries of C=det(A)det(C).
How do i write a concrete proof using this idea or is there any other method of doing it (e.g. cofactor expansion, etc.)
Re: Determinant of Block Matrix
Let $M=\begin{pmatrix} A & B \\ O & C\end{pmatrix}$, where A and C are square matrix. Show that det(M)=det(A)det(C)
I have the following idea but i dont have a concrete proof. We can perform elementary row operations on the matrix M. Eventually A and C will be a upper triangular matrix. Hence det(A) and det(C)
will just be the product of their diagonal entries. On the other hand, M will also become an upper triangular matrix after the elementary row operations. Therefore, det(M)=product of its diagonal
entries=product of diagonal entries of A * product of diagonal entries of C=det(A)det(C).
You can perform elementary row operations on the matrix $M$but then you will take a matrix $M_1$ with
$det(M_1) eq det(M)$.
One method is by induction on $n$ - using cofactor expansion - where $n$ is the type of $A$ i.e. $A$ is an $n\times n$ matrix.
September 18th 2011, 04:01 AM #2
Junior Member
Mar 2011
|
{"url":"http://mathhelpforum.com/advanced-algebra/188181-determinant-block-matrix.html","timestamp":"2014-04-19T02:49:49Z","content_type":null,"content_length":"35153","record_id":"<urn:uuid:6e24486a-3369-4e51-b3ba-74243e31f278>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Expressions and Operators
Next: Assignment Statement Up: C Programming Previous: Arrays
One reason for the power of C is its wide range of useful operators. An operator is a function which is applied to values to give a result. You should be familiar with operators such as +, -, /.
Arithmetic operators are the most common. Other operators are used for comparison of values, combination of logical states, and manipulation of individual binary digits. The binary operators are
rather low level for so are not covered here.
Operators and values are combined to form expressions. The values produced by these expressions can be stored in variables, or used as a part of even larger expressions.
January 1995
|
{"url":"http://www2.its.strath.ac.uk/courses/c/section3_7.html","timestamp":"2014-04-16T15:59:31Z","content_type":null,"content_length":"2225","record_id":"<urn:uuid:0c132203-ee89-4999-9499-0fbbcf86f57c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Can someone tell me how to find the percent of these numbers? Im not asking for the answer just how to do this because I have forgotten:\ PLEASE!! 3150 1800 2250 1350 450
• one year ago
• one year ago
Best Response
You've already chosen the best response.
The question is a little vague. The numbers themselves are 'numbers' and are 100% of them self. What are you trying to do exactly?
Best Response
You've already chosen the best response.
It's for a project and these numbers are sales (in dollars) of different music genres and they give me a grph and I have to figure out what the percent of the graph each one is so that i can find
the measure of each central angle... i can post it after this so you canlook at it if you would like?
Best Response
You've already chosen the best response.
You will need to as the only way to determine the percentage is to see what "part" of the whole they are.
Best Response
You've already chosen the best response.
To say that if you had a million dollars in music sales and 3150 of that went to rock music, then (3150/1000000)*100 would tell you the percentage of the million that rock captured.
Best Response
You've already chosen the best response.
so will i need to add my 5 numbers and divide them by 5 to get a total number of sales? sorry for the trouble:\
Best Response
You've already chosen the best response.
Yes, if you add all the numbers up you will have a total, the "whole", and then you take the "part" that each genre has and divide it by that whole to get a decimal equivalent. Multiply that by
100 and you will have the percentage each genre brought in.
Best Response
You've already chosen the best response.
:) Good luck.
Best Response
You've already chosen the best response.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4fa41df5e4b029e9dc34a287","timestamp":"2014-04-18T16:37:55Z","content_type":null,"content_length":"45077","record_id":"<urn:uuid:194870bf-eb08-4fd7-8e34-10414010ba21>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Minus Mathematics
Recent Comments
11 Sep
Author Paul H. Nahin tells in Introduction to his new book how on several occasions the Nobel Prize winner Richard Feynman spoke condescendingly of mathematics. Nahin suggests that "Mathematics is
trivial, but I can't do my work without it" may have been a joke and should not be taken too seriously. He may be right as Feynman was a rather good mathematician. Among many other achievements he is
well known for the eponymous "Feynman integral" - the idea of which may shed light on Feynman's attitude towards mathematics. Feynman integral replaces in quantum mechanics the notion of a single
trajectory with an integral over all possible trajectories to define a quantum amplitude. The "integral" in Feynman's view is just a tool for a description of quantum theory.
Nahin tells a story (p. xxiv) of an episode in which Feynman "got at least as good as he gave."
The great probabilist Mark Kac (1914-1984) once gave a lecture at Caltech, with Feynman in the audience. When Kac finished, Feynman stood up and loudly proclaimed, "If all mathematics
disappeared, it would set physics back precisely one week." To that outrageous comment, Kac shot back with that yes, he knew of that week; it was "Precisely the week in which God created the
1. P. H. Nahin, Number-Crunching, Princeton University Press, 2011
|
{"url":"http://www.mathteacherctk.com/blog/2011/09/physics-minus-mathematics-the-week-of-creation/","timestamp":"2014-04-20T03:10:00Z","content_type":null,"content_length":"38508","record_id":"<urn:uuid:1c56dbf7-34c5-48dc-b105-f120b0efb261>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RNAshapes - Manual
Sequence Format:
An input sequence for RNAshapes may have up to 2k nucleotids. RNAshapes supports sequences in FASTA format via file upload or copy&paste in a textfield. A sequence in FASTA format begins with a
single-line description, followed by lines of sequence data.
• The description line starts with a greater than symbol (">").
• The word following the greater than symbol (">") immediately is the "ID" (name) of the sequence, the rest of the line is the description.
• The "ID" and the description are optional.
• all lines of text should be shorter than 80 (normally 60) characters.
• the sequence ends if there is another greater than symbol (">") symbol at the beginning of a line and another sequence begins.
The following example contains one sequence (sequence_1):
Shape type:
The shape type is the level of abstraction or dissimilarity which defines a different shape. In general, helical regions are depicted by a pair of opening and closing square brackets and unpaired
regions are represented as a single underscore. The differences of the shape types are due to whether a structural element (bulge loop, internal loop, multiloop, hairpin loop, stacking region and
external loop) contributes to the shape representation: Five types are implemented. Their differences are shown in the following example:
Type Description Result
1 Most accurate - all loops and all unpaired [_[_[]]_[_[]_]]_
2 Nesting pattern for all loop types and unpaired regions in external loop and multiloop [[_[]][_[]_]]
3 Nesting pattern for all loop types but no unpaired regions [[[]][[]]]
4 Helix nesting pattern in external loop and multiloop [[][[]]]
5 Most abstract - helix nesting pattern and no unpaired regions [[][]]
The following image also describes the differences between shape types:
Match shape:
Specify a shape for the corresponding mode of operation.
Calculate structure probabilities:
This calculates the probability of every computed structure. It can be combined with any sequence analysis mode.
Generate structure graphs:
This generates postscript structure graphs for each given sequence.
Allow lonely base pairs:
In default mode, RNAshapes only considers helices of length 2 or longer. With this option, lonely base pairs are also included.
Ignore unstable structures:
This option filters out closed structures with positive free energy.
Window size & Window increment:
Beginning with position 1 of the input sequence, the analysis is repeatedly processed on subsequences of the specified size. After each calculation, the results are printed out and the window is
moved by the window position increment, until the end of the input sequence is reached.
Set maximum loop length:
This option sets the maximum lengths of the considered internal and bulge loops. The default value here is 30. Note that this restriction can have a very slight influence on the calculated structure
and shape probabilities.
Normal probability mode:
This is the default shape probabilites mode.
Also calculate shreps:
Calculates the shape probabilities based on partition function. Additional to the standard probability mode, the corresponding shreps with their minimum free energies are calculated. Note that this
mode is slightly slower and can be used with sequences up to a length of 250 bases.
Shape probabilities for mfe-best shapes:
This mode first calculates the best shapes based on free energy minimization. In a second step, it calculates the probability for each of these best shapes. This mode can be used for longer sequences
(up to 500 bases).
Energy range:
This sets the energy range either as percentage value of the minimum free energy (% of mfe) or as the difference to the minimum free energy for the sequence (kcal/mol).
Probability cutoff filter:
This option sets a barrier for filtering out results with very low probabilities during calculation. The default value here is 0.000001, which gives a significant speedup compared to a disabled
filter. Note that this filter can have a slight influence on the overall results.
Probability output filter:
This option sets a filter for omitting low probability results during output. Unlike probability cutoff filter, this option does not have any influence on probabilities beyond this value.
Number of sampling iterations:
Number of iterations for Sampling mode.
Omit sampling output.:
Omit sampling output for Sampling mode.
RapidShapes calculates exact probabilities for RNA abstract shapes. Since it is a runtime heuristic it calculates these exact values much faster than the exhaustive version of RNAshapes for most of
the RNA input sequences. This speed-up is gained by first guessing a handful of promising shapes. In a second phase the exact shape probability is calculated in O(n^3) time for each promising shape.
The difference to the exhaustive version is, that RapidShapes analyses only the promising shapes instead of all exponential many existing shapes for an input sequence. Thus the speed-up is the ratio
between the promising shapes and all exponential many shapes for the input sequence. Fewer promising shapes means faster runtime.
RapidShapes uses the sampling method to guess promising shapes.
Number of sampling iterations:
RapidShapes uses a sampling method to gain promising shapes for which it calculates exact shape probabilities in a second phase.
Sampling is a stochastic process where one RNA structure is drawn out of the complete folding space of the input sequence. The chance to draw a special RNA structure depends on its minimal free
Repeating this process <Number of sampling iterations> times and translating the RNA structures into shapes, the shape probabilities can simple be estimated by counting their appearance.
More iterations raise the chances to observe more diverse shapes and thus increases the number of promising shapes for RapidShapes.
Minimal shape probability threshold:
The lower the shape probability the less likely is it to find an RNA sequence forming an according structure in a cell. Since form follows function one would expect a non functional RNA to have a
relatively high shape probability or at least a probability of <Minimal shape probability threshold> percent.
The problem definition of RapidShapes is as follows: Given an RNA sequence s of length n and a threshold 0<T≤1, compute all shapes p of s with Prob(p)≥T. This definition permits that some shapes with
sub-threshold probability will also be computed, but the goal is, of course, to minimize the efforts spent on those. (T is the variable <Minimal shape probability threshold>.)
When the accumulated probability of all analyzed shapes exceeds 1-T, no additional shape with Prob(p)≥T can hide in the remaining unexplored folding space. Thus RapidShapes can stop the calculation.
|
{"url":"http://bibiserv.techfak.uni-bielefeld.de/rnashapes/manual.html","timestamp":"2014-04-20T05:42:31Z","content_type":null,"content_length":"33020","record_id":"<urn:uuid:ac24d3bb-7a85-4366-881f-2cbf2c99d08e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The annotated practical guide for solving the Rubik's cube
Copyright (C) 2002 This document may be freely
Oren Ben-Kiki. redistributed unchanged.
There are more ways to solve the Rubik's cube and guides to describe them than I can count. This guide is different in that it focuses on a solution that is practical: easy to follow, remember and
• Most guides either describe operations in terms of algebra and "conjugate operators" (hard to follow) or as a long magical list of face moves (hard to remember).
This guide avoids both problems by introducing a middle level of "basic operations". Each employs three basic face moves to achieve an intuitive effect (easy to remember). The solution operations
consist of combining two or at most three such basic moves together in a way that makes sense (easy to understand).
• Most guides ignore the issue of convenience of performing the operations. As an extreme example, one guide I've seen included an operation that rotated the back face of the cube. This means you
either need to contort your hands around the cube or re-orient it in the middle of a complex operation. Operations on the front face are only marginally better in this respect.
This guide uses only four face moves and one slice move. All of these can be easily applied without re-orienting the cube in the middle of an operation.
• Most guides are long (several pages) and therefore hard to use as a reference without taking your hands off the cube.
This guide also comes in a compact version that can be printed in one page. This is a plain ASCII file (no tiny fonts) so it is readable by anyone using any hardware and any operating system,
anywhere and at any time. The compact version is self-contained and complete, and is useful as a reference guide and as an aid for memorizing the solution.
• Most guides start with a cube oriented such that one of its faces is facing the viewer. If you try this in practice, you'll note you can only get a good view of two of the faces (typically the
front and the upper one).
This guide starts with a cube oriented such that one of its edges is facing the viewer. This means you get a good view of three faces (typically the upper, left and right faces).
• Many guides describe solutions that try to minimize the number of moves. This requires memorizing many specific operations for many specific cube configurations. And then trying to figure out
which of them applies to your cube.
This guide describes how to solve the cube layer by layer, using a small number of simple, incremental steps. This approach is generally accepted as being the easiest and most natural, but you
won't win any speed competitions using it.
This annotated guide is written as a tutorial for understanding the compact guide. Once you go through the tutorial, you will find the compact guide to be easier to use as an aid for solving a cube
and for memorizing the solution or refreshing your memory about a specific step.
The cube
The guide contains an ASCII image of three faces of the cube, using it to name several of its faces, edges and corners. Keep in mind that the name of each part of the cube depends on the way you hold
it. In many cases, to perform some operation you'll first need to orient the whole cube first.
Also note that not all pieces of the cube are given names. Don't worry about it. In each step of the solution, you'll only have to worry about the part of the cube you can see at one glance, which
means three faces at most.
/ | \
UL | UR
/ | \
ULB LR URB
| | |
| L | R |
| DLR | Edges Corners
| / \ |
| DL DR |
| / \ |
DLB D DRB
\ /
BL BR
\ /
Instead of using abstract names, the guide uses visual notation for each operation (as much as that is possible in ASCII). Using this notation, the operations can be specified in a terse and
intuitive manner.
Single face operations
Each of these operations rotates one of the cube faces. The guide describes them in terms of corner to corner rotation. As one corner and some edges aren't named (are hidden from view in the cube
chart), they are specified using ?. The names (and mnemonics) of each operation are based on the way they effect the "front most" corner or edge.
Starting from a cube with its edge oriented towards you, you'll find that by rotating the whole cube a bit to the left, it becomes easy to rotate the left face with your left hand, and similarly for
the right one. Likewise, the upper face is easy to rotate with the left hand, and the lower face with the right hand. Since the same four face moves are used over and over again, you'll find that
"muscle memory" makes them very easy to perform, compared with solutions requiring you to rotate the "front" face (or the "back" face - shudder).
{ = ULB <- ULR <- URB <- ??? = U left
Mnemonic: The { points left, and is rounded (somewhat) like the letter U.
} = ULB -> ULR -> URB -> ??? = U right
Mnemonic: The } points right, and is rounded (somewhat) like the letter U.
< = DRB <- DLR <- DRB <- BLR = D left
Mnemonic: The < points left, and is angular like the Greek letter delta.
> = DLB -> DLR -> DRB -> BLD = D right
Mnemonic: The > points right, and is angular like the Greek letter delta.
v( = ULR -> DLR -> DRB -> URB = R down
Mnemonic: The ( describes the arc performed by the ULR corner.
The v indicates it is moving down.
^( = DLR -> ULR -> URB -> DRB = R up
Mnemonic: The ( describes the arc performed by the DLR corner.
The ^ indicates it is moving up.
)v = ULR -> DLR -> DLB -> ULB = L down
Mnemonic: The ) describes the arc performed by the ULR corner.
The v indicates it is moving down.
)^ = DLR -> ULR -> ULB -> DLB = L up
Mnemonic: The ) describes the arc performed by the DLR corner.
The ^ indicates it is moving up.
v = UL -> DL -> BR -> ?? = C down
Mnemonic: The v points down.
This is awkward to do in a single move. It is easier to use two:
^ = BR -> DL -> UL -> ?? = C up
Mnemonic: The ^ points up.
This is awkward to do in a single move. It is easier to use two:
ULR operations
There are four basic ways to manipulate the ULR corner. It turns out that all corner operations (and some of the edge operations) can be expressed in terms of these motions.
As these operations are so basic to the solution, they are assigned special shorthand notation of their own, instead of being specified each time as a set of face operations. This makes it much
easier to memorize (and understand/feel) the solution.
)> = )v > )^ = ULR -> DRB = ULR to right
Mnemonic: The ) describes the arc performed by the ULR corner.
The > points away from it, indicating ULR is being moved to the right.
)< = )v < )^ = ULR <- DRB = ULR from right
Mnemonic: The ) describes the arc performed by the ULR corner.
The < points to it, indicating ULR is being collected from the right.
<( = v( < ^( = DLB <- ULR = ULR to left
Mnemonic: The ( describes the arc performed by the ULR corner.
The < points away from it, indicating ULR is being moved to the left.
>( = v( > ^) = DLB -> ULR = ULR from left
Mnemonic: The ( describes the arc performed by the ULR corner.
The > points to it, indicating ULR is being collected from the left.
UL operations
There are three basic ways to manipulate the UL edge. It turns out that all remaining (edge) operations that aren't covered by ULR movements can be expressed in terms of these motions.
Again, as these operations are so basic to the solution, they are assigned special shorthand notation of their own, instead of being specified each time as a set of face operations. This makes it
much easier to memorize (and understand/feel) the solution.
|> = v > ^ = UL -> DR = UL to right
Mnemonic: The | describes the motion performed by the UL edge.
The > points away from it, indicating UL is being moved to the right.
|< = v < ^ = UL <- DR = UL from right
Mnemonic: The | describes the motion performed by the UL edge.
The < points to it, indicating UL is being collected from the right.
|>> = v > > ^ = UL <-> DR = UL exchange
Mnemonic: The | describes the motion performed by the UL edge.
The double > indicates the double turn.
Note that doing a double < has the same effect.
The solution
Once you have mastered the basic operations, solving the cube becomes reasonably easy to do (and remember). The solution consists of the following 9 steps:
1. Position 1st layer (U) corners
Orient the cube so that your chosen first layer is the upper face. You now need to bring the four corners of the proper color to their positions. There are two cases you need to be able to handle:
Corner in D : ULR from.
If the corner is in the down face, orient the cube so its proper place is at the ULR corner. Rotate the down face until the corner is at either DLB or DRB. Then employ the "ULR from left" or "ULR
from right" operations. The corner will now be in place. In two out of three cases, it is possible to also properly orient the corner in the process, by choosing the appropriate "ULR from" variant.
Corner in U : First, ULR to.
If the corner is in the upper face, but at the wrong position, turn the cube so it is at ULR. Then employ either "ULR to left" or "ULR to right". The corner will now be in the down face. Bring it up
to its proper place as described above.
2. Orient 1st layer (U) corners
You can "twist" each corner in one of two directions.
Twist ULR clockwise : >( )>
Mnemonic: Think of moving the DLB corner to DRB via ULR. Along its way it twists ULR in the direction it travels (clockwise).
Twist ULR widdershins : )< <(
Mnemonic: Think of taking the DRB corner to DLB via ULR. Along its way it twists ULR in the direction it travels (widdershins). For those of you who don't read Terry Pratchet - 'widdershins' means
'the other way', or in this context, counter-clockwise.
3. Position 1st layer (U) edges
Orient the cube so that your chosen first layer is the upper face. You now need to bring the four edges of the proper color to their positions. There are three cases you need to be able to handle:
Edge in D : UL from/exchange.
If the edge is in the down face, turn the cube so its proper place is at the UL edge. If the edge needs to be flipped, rotate the down face until it is at DL, then employ the "UL Exchange"
operations. Otherwise, rotate the down face until the edge is at DR, then employ the "UL from right" operation. The edge should now be in place (and properly oriented).
Edge in U : First, UL to.
If the corner is in the upper face, but at the wrong position, turn the cube so it is at UL. Then do "UL to right". The edge will now be in the down face. Bring it up to its rightful place as
described above.
Edge in middle : Adapt step 8.
Step 8 describes how to rotate three edges in a single face without moving anything else in the cube. To adapt it, rotate the upper face so that the edge you want to move will be adjacent to its
target position. Orient the cube so both are at the down face and apply step 8 to bring the edge to its target position. Then re-orient the cube and rotate the upper face back to its proper position.
4. Orient 1st layer (U) edges
It is usually possible to bring the edges to the first layer so they will already be oriented properly. However, sometimes they start at the middle layer, or at the right place in the upper layer,
and are initially flipped. Or maybe you used the "wrong" way of bringing them to their proper position. Either way, you now need to flip them in place. Orient the cube so that the first layer is the
upper one and the edge you want to flip is in the UL position, and do either of the following:
Flip UL : |>> > |<
Mnemonic: Flipping the edge down and then bringing it up without a flip will cause it to be flipped "in place".
Flip UL : |> < |>>
Mnemonic: You can also do the same thing in reverse, first bringing the edge down without a flip and then flipping it up. The end result is the same - flipping "in place".
Having a reverse way to flip an edge will prove vital in step 9.
5. 2nd layer edges (1st is U)
At this point your first layer is complete. The second layer contains only edges. You need to position each in its place. This time, take care that each edge is brought to place in the correct
orientation (if it isn't, use one of the following operations to bring another edge to its place and try again).
These are edge operations that are caused by corner motion, which is rather surprising. This makes the mnemonics a bit of a stretch.
DR -> LR : > )< < >(
Mnemonic: Think of moving DR "away" to the right, then bringing it "back" to place by doing the "ULR from right" operation. Now all that's left is restoring ULR from the other direction.
DL -> LR : < >( > )<
Mnemonic: Think of moving DL "away" to the left, then bringing it "back" to place by doing the "ULR from left" operation. Now all that's left is restoring ULR from the other direction.
6. Position 3rd layer (R) corners
At this point your second layer is complete. This leaves just the third layer to be solved. In the first layer you could "just do things". Trying to do the same for the third layer would make a mess
of the first and second layers. The solution is to make use of the fact that while doing an operation messes up these layers, doing its reverse fixes the damage. The trick is to apply the operation
to one part of the third layer and its reverse to another part. This achieves some effect in the third layer while leaving the first and seconds layers intact.
At any rate, the first step is to position the third layer corners. You can get away with using just two operations (that are also easier to remember) that rotate between three third layer corners.
However, I'm also giving here a way to exchange two adjacent corners. It can save a lot of moves, so it is worth memorizing even if it is somewhat magical.
Note that for the three-corner rotation, the cube must be oriented so the third layer is the right one. For the two-corner exchange, the cube must be oriented so the third layer is the down one.
ULR <-> URB <-> DLR : <( { >( }
Mnemonic: To rotate ULR to URB, take it to the left and then bring it back up to its new place.
URB <-> ULR <-> DLR : { <( } >(
Mnemonic: To rotate URB to ULR, make it ULR, take it to the left and then bring it back up to its new place.
(3rd is D) DLB <-> DLR : <( )> >( > >
Mnemonic (not a very good one, I'm afraid): Think of the second move, ULR to DRB, as the motion that does the trick, as long as ULR is put out of the way first and restored afterward. It isn't how
this really works, but it makes it a bit easier to remember.
7. Orient 3rd layer (U) corners
This is rather simple once you get the hang of using an operation and its reverse operation. To twist any pair of corners, orient the cube so the third layer is the upper face, and the first corner
to twist is at ULR. Now do the following steps:
- Twist ULR as in step 2.
You can twist it in either direction. This, of course, messes the first and second layers.
- Move other corner to ULR using {/}s.
That is, by rotating just the upper face.
- REVERSE twist ULR as in step 2.
You must rotate the second corner (now in ULR) in the opposite direction from the first one. This rearranges the first and second layers.
- Re-orient U layer using {/}s.
You need to return the original ULR to its place, again by rotating just the upper face.
8. Position 3rd layer (D) edges
This works in a similar way to rotating a corner. There, by moving a corner from one place to another, a different corner was rotated. Here, by moving one edge in a path around the cube, the
positions of three other edges are rotated (exchanged). Magically, nothing else is effected.
BL <-> DR <-> BR : |> > > |>
Mnemonic: The final step actually takes UL from the left, so the operation is to move UL right and take it from the left. Along its journey, UL somehow drags BL in the same direction, causing the
DR <-> BL <-> BR : |< < < |<
Mnemonic: The first step actually takes UL to the left, so the operation is to move UL left and take it from the right. Along its journey, UL somehow drags DR in the same direction, causing the
9. Orient 3rd layer (U) edges
All that is left now is to orient the third layer edges. Flipping any pair of edges is simple, because we know how to flip and reverse flip an edge from step 4. To do so, orient the cube so the third
layer is the upper face, and the first edge to flip is at UL. Now do the following steps:
- Flip UL as in step 4.
It doesn't matter which variant you use at this point. This messes the first layer.
- Move 2nd edge to UL using {/}s.
That is, by rotating just the upper face.
- REVERSE flip UL as in step 4.
You must flip the second edge (now in UL) using the reverse way from the first one. This rearranges the first layer.
- Re-orient U layer using {/}s.
You need to return the original UL to its place, again by rotating just the upper face.
That's all there is to it. When trying to memorize the solution, focus on ULR and UL operations. For example, it is much easier to remember "move UL to the left, then take it from the right" than
memorizing the resulting series of 8 face operations.
Good luck!
|
{"url":"http://www.ben-kiki.org/oren/rubik/rubik.html","timestamp":"2014-04-16T13:35:40Z","content_type":null,"content_length":"38375","record_id":"<urn:uuid:a5444363-176e-4903-af48-c9ae4ab8eafb>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hi! my question is: Is sin pi/2= sin 90= 1 the same as sin pi/2=sin 1.57= 0.027. Thank - Homework Help - eNotes.com
Hi! my question is: Is sin pi/2= sin 90= 1 the same as sin pi/2=sin 1.57= 0.027. Thank
`sin(pi/2)=sin(1.57 rad)!=sin(1.57^0)=0.027`
So you find difference. conversio i rad and degree.
sin pi/2= sin 90= 1 the same as sin pi/2=sin 1.57.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes
|
{"url":"http://www.enotes.com/homework-help/hi-my-question-is-sin-pi-2-sin-90-1-same-sin-pi-2-446083","timestamp":"2014-04-17T14:02:13Z","content_type":null,"content_length":"24539","record_id":"<urn:uuid:97c27c4a-ed2b-41c8-be2a-075c7ab29270>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
why is this true? \[\frac{2}{8i}=-\frac i 4\]
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50f784bae4b007c4a2eb5e21","timestamp":"2014-04-18T03:51:42Z","content_type":null,"content_length":"94657","record_id":"<urn:uuid:874543bc-48b0-4ae7-b103-67996edf47c8>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
search results
Expand all Collapse all Results 1 - 3 of 3
1. CJM 2012 (vol 65 pp. 222)
Distance Sets of Urysohn Metric Spaces
A metric space $\mathrm{M}=(M;\operatorname{d})$ is {\em homogeneous} if for every isometry $f$ of a finite subspace of $\mathrm{M}$ to a subspace of $\mathrm{M}$ there exists an isometry of $\
mathrm{M}$ onto $\mathrm{M}$ extending $f$. The space $\mathrm{M}$ is {\em universal} if it isometrically embeds every finite metric space $\mathrm{F}$ with $\operatorname{dist}(\mathrm{F})\
subseteq \operatorname{dist}(\mathrm{M})$. (With $\operatorname{dist}(\mathrm{M})$ being the set of distances between points in $\mathrm{M}$.) A metric space $\boldsymbol{U}$ is an {\em Urysohn}
metric space if it is homogeneous, universal, separable and complete. (It is not difficult to deduce that an Urysohn metric space $\boldsymbol{U}$ isometrically embeds every separable metric space
$\mathrm{M}$ with $\operatorname{dist}(\mathrm{M})\subseteq \operatorname{dist}(\boldsymbol{U})$.) The main results are: (1) A characterization of the sets $\operatorname{dist}(\boldsymbol{U})$ for
Urysohn metric spaces $\boldsymbol{U}$. (2) If $R$ is the distance set of an Urysohn metric space and $\mathrm{M}$ and $\mathrm{N}$ are two metric spaces, of any cardinality with distances in $R$,
then they amalgamate disjointly to a metric space with distances in $R$. (3) The completion of every homogeneous, universal, separable metric space $\mathrm{M}$ is homogeneous.
Keywords:partitions of metric spaces, Ramsey theory, metric geometry, Urysohn metric space, oscillation stability
Categories:03E02, 22F05, 05C55, 05D10, 22A05, 51F99
2. CJM 2012 (vol 65 pp. 241)
Lagrange's Theorem for Hopf Monoids in Species
Following Radford's proof of Lagrange's theorem for pointed Hopf algebras, we prove Lagrange's theorem for Hopf monoids in the category of connected species. As a corollary, we obtain necessary
conditions for a given subspecies $\mathbf k$ of a Hopf monoid $\mathbf h$ to be a Hopf submonoid: the quotient of any one of the generating series of $\mathbf h$ by the corresponding generating
series of $\mathbf k$ must have nonnegative coefficients. Other corollaries include a necessary condition for a sequence of nonnegative integers to be the dimension sequence of a Hopf monoid in the
form of certain polynomial inequalities, and of a set-theoretic Hopf monoid in the form of certain linear inequalities. The latter express that the binomial transform of the sequence must be
Keywords:Hopf monoids, species, graded Hopf algebras, Lagrange's theorem, generating series, Poincaré-Birkhoff-Witt theorem, Hopf kernel, Lie kernel, primitive element, partition, composition,
linear order, cyclic order, derangement
Categories:05A15, 05A20, 05E99, 16T05, 16T30, 18D10, 18D35
3. CJM 2011 (vol 63 pp. 1284)
Non-Existence of Ramanujan Congruences in Modular Forms of Level Four
Ramanujan famously found congruences like $p(5n+4)\equiv 0 \operatorname{mod} 5$ for the partition function. We provide a method to find all simple congruences of this type in the coefficients of
the inverse of a modular form on $\Gamma_{1}(4)$ that is non-vanishing on the upper half plane. This is applied to answer open questions about the (non)-existence of congruences in the generating
functions for overpartitions, crank differences, and 2-colored $F$-partitions.
Keywords:modular form, Ramanujan congruence, generalized Frobenius partition, overpartition, crank
Categories:11F33, 11P83
|
{"url":"http://cms.math.ca/cjm/kw/partition","timestamp":"2014-04-19T22:33:01Z","content_type":null,"content_length":"30798","record_id":"<urn:uuid:fcad888d-fd2b-4ab6-929b-d5060b4b6fa2>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Euless Prealgebra Tutor
Find an Euless Prealgebra Tutor
...Since then I have held a number of positions that have required me to be an expert in a subject. Whether you are in elementary, secondary or college, my knowledge and skill set will be a
valuable asset in preparing you for a successful and productive academic career! Outside of classes I took, I have a lot of experience with genetics in the practical setting of the laboratory.
30 Subjects: including prealgebra, reading, chemistry, English
...This continued through my years as a graduate student at Duke University where I was part of the team that helped tutor student athletes. I am certified in Secondary Math in the state of Texas
and have taught Algebra I, Geometry, Algebra II and Math Models in the Irving ISD. In that capacity I ...
82 Subjects: including prealgebra, English, chemistry, calculus
...While in college, I spent 3 years tutoring high school students in math, from algebra to AP Calculus. I also tutored elementary students in reading and spent 6 months homeschooling first and
third grade. When I was in high school, I would help my classmates in every subject from English to government to calculus.
40 Subjects: including prealgebra, chemistry, reading, calculus
...My main concentration is in Science for the older kiddos, but I have tutored math, reading and science for elementary-aged students in the past. Additionally, I have tutored pre-Algebra and
Algebra for both high school and college students. I believe in individualized instruction whenever possible and have never tutored a student the same way yet!
28 Subjects: including prealgebra, chemistry, English, reading
...First, I work as a teacher and I am engaged in writing everyday. I also work on my research in my free time. I am hoping to perfect my paper into something of a publishable quality.
37 Subjects: including prealgebra, reading, Spanish, writing
|
{"url":"http://www.purplemath.com/Euless_prealgebra_tutors.php","timestamp":"2014-04-21T02:26:20Z","content_type":null,"content_length":"24035","record_id":"<urn:uuid:ceacf21e-15ae-4a0e-8a6f-9423063dd782>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1. Yes, we can do them, but we are not going to do your hw for you, as that will not help you in any way. Please read the forum guidelines on hw help here:
The gist of it is that you must post your work and show that you have made a decent attempt, but got stuck somewhere. Then we can help point you in the right direction.
2. Your question is not very clear, and your notation not very good. For the first integral, do you mean:
[tex] \int{\sin x \ dx} [/tex]
If so, are you serious??? You don't know this integral?
As for this one:
[tex] \int{(\ln z)^2 \ dz} [/tex]
Did you attempt integration by parts?
|
{"url":"http://www.physicsforums.com/showthread.php?p=937812","timestamp":"2014-04-16T10:29:02Z","content_type":null,"content_length":"44697","record_id":"<urn:uuid:ac3c8a35-4bfd-4a45-9222-da2ebde5ab9c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tools Discussion: All Activities in Calculus on Computer Algebra System, Intigration
Discussion: All Activities in Calculus on Computer Algebra System
Topic: Intigration
<< see all messages in this topic
< previous message | next message >
Subject: RE: Intigration
Author: mmckelve
Date: Apr 27 2006
> Perhaps Mathematica has a solution. I don't have access to that.
Actually, you do have access to it ...sort of :)
Try out The Integrator:
Of course, it doesn't help much in this particular situation; it just returns
Reply to this message Quote this message when replying?
yes no
Post a new topic to the All Activities in Calculus on Computer Algebra System discussion
Visit related discussions:
Computer Algebra System
Discussion Help
|
{"url":"http://mathforum.org/mathtools/discuss.html?context=cell&do=r&msg=24204","timestamp":"2014-04-20T16:08:18Z","content_type":null,"content_length":"16349","record_id":"<urn:uuid:530d8bdc-d114-46f5-9946-95f69b1d9675>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vopěnka's principle
The basis of it all
Foundational axioms
• basic constructions:
• strong axioms
Removing axioms
Vopěnka's principle
Vopěnka’s principle is a large cardinal axiom which implies a good deal of simplification in the theory of locally presentable categories.
It is fairly strong as large cardinal axioms go: its consistency follows from the existence of huge cardinal?s, and it implies the existence of arbitrarily large measurable cardinals.
The Vopěnka principle
Vopěnka’s principle has many equivalent statements. Here are a few:
The VP is equivalent to the statement:
For every proper class sequence $\langle M_\alpha | \alpha \in Ord\rangle$ of first-order structures, there is a pair of ordinals $\alpha\lt\beta$ for which $M_\alpha$embeds elementarily into $M_\
This is (AdamekRosicky, theorem 6.28).
This is in (Rosicky)
The VP is equivalent to both of the statements:
1. For every $n$, there exists a C(n)-extendible cardinal.
2. For every $n$, there exist arbitrarily large C(n)-extendible cardinals.
This is in (BCMR).
The weak Vopěnka principle
The Vopěnka principle implies the weak Vopěnka principle.
This is AdamekRosicky, theorem 6.28
Relativized versions of Vopěnka’s principle
Vopěnka’s principle can be relativized to levels of the Lévy hierarchy by restricting the complexity of the (definable) classes to which it is applied. The following theorems are from (BCMR).
For any $n\ge 1$, the following statements are equivalent.
1. There exists a C(n)-extendible cardinal.
2. Every proper class of first-order structures that is defined by a conjunction of a $\Sigma_{n+1}$ formula and a $\Pi_{n+1}$ formula contains distinct structures $M$ and $N$ and an elementary
embedding $M\hookrightarrow N$.
The ”$n=0$ case” of this is:
For any $n\ge 1$, the following statements are equivalent.
1. There exists a supercompact cardinal.
2. Every proper class of first-order structures that is defined by a $\Sigma_2$ formula contains distinct structures $M$ and $N$ and an elementary embedding $M\hookrightarrow N$.
Many more refined results can be found in (BCMR).
From a category-theoretic perspective, Vopěnka’s principle can be motivated by applications and consequences, but it can also be argued for somewhat a priori, on the basis that large discrete
categories are rather pathological objects. We can’t avoid them entirely (at least, not without restricting the rest of mathematics fairly severely), but maybe at least we can prevent them from
occurring in some nice situations, such as full subcategories of locally presentable categories. See this MO answer.
This is theorem 2.3 in (RosickyTholen)
The VP implies the statement:
Let $C$ be a locally presentable (∞,1)-category and $Z$ a class of morphisms in $C$. Then the reflective localization of $C$ at $W$ extsts.
By the facts discussed at locally presentable (∞,1)-category and combinatorial model category and Bousfield localization of model categories we have that every locally presentable $(\infty,1)$
-category is presented by a combinatorial model category and that under this correspondence reflective localizations correspond to left Bousfield localizations. The claim then follows with the (above
Set-theoretic notes
First- versus second-order
As usually stated, Vopěnka’s principle is not formalizable in first-order ZF set theory, because it involves a “second-order” quantification over proper classes (”…there does not exist a large
discrete subcategory…”). It can, however, be formalized in this way in a class-set theory such as NBG.
On the other hand, it can be formalized in ZF as a first-order axiom schema consisting of one axiom for each class-defining formula $\phi$, stating that ”$\phi$ does not define a class which is a
large discrete subcategory…” We might call this axiom schema the Vopěnka axiom scheme. As in most situations of this sort, the first-order Vopěnka scheme is appreciably weaker than the second-order
Vopěnka principle. See, for instance, this MO question and answer.
Vopěnka cardinals
Unlike some large cardinal axioms, Vopěnka’s principle does not appear to be merely an assertion that “there exist very large cardinals” but rather an assertion about the precise size of the
“universe” (the “boundary” between sets and proper classes). In other words, the universe could be “too big” for Vopěnka’s principle to hold, in addition to being “too small.”
(The equivalence of Vopěnka’s principle with the existence of C(n)-extendible cardinals may appear to contradict this. However, the property of being $C(n)$-extendible itself “depends on the size of
the whole universe” in a sense.)
More precisely, if $\kappa$ is a cardinal such that $V_\kappa$ satisfies ZFC + Vopěnka’s principle, then knowing that $\lambda\gt\kappa$ does not necessarily imply that $V_\lambda$ also satifies
Vopěnka’s principle. By contrast, if $V_\kappa$ satisfies ZFC + “there exists a measurable cardinal” (say), then there must be a measurable cardinal less than $\kappa$, and that measurable cardinal
will still exist in $V_\lambda$ for any $\lambda\gt\kappa$. On the other hand, large cardinal axioms such as “there exist arbitrarily large measurable cardinals” have the same property that Vopěnka’s
principle does: even if measurable cardinals are unbounded below $\kappa$, they will not be unbounded below $\lambda$ if $\lambda$ is the next greatest inaccessible cardinal after $\kappa$.
Relativizing Vopěnka’s principle to cardinals also raises the same first- versus second-order issues as above. We say that a Vopěnka cardinal is one where Vopěnka’s principle holds “in $V_\kappa$”
where the quantification over classes is interpreted as quantification over all subsets of $V_\kappa$. By contrast, we could define an almost-Vopěnka cardinal to be one where $V_\kappa$ satisfies the
first-order Vopěnka scheme. Then one can show, using the Mahlo reflection principle (see here again), that every Vopěnka cardinal $\kappa$ is a limit of $\kappa$-many almost-Vopěnka cardinals, and in
particular the smallest almost-Vopěnka cardinal cannot be Vopěnka. Thus, being Vopěnka is much stronger than being almost-Vopěnka.
Definable counterexamples
If Vopěnka’s principle fails, then there exist counterexamples to all of its equivalent statements, such as a large discrete full subcategory of a locally presentable category. If Vopěnka’s principle
fails but the first-order Vopěnka scheme holds, then no such counterexamples can be explicitly definable.
On the other hand, if the Vopěnka scheme also fails, then there will be explicit finite formulas one can write down which define counterexamples. However, there is no “universal” counterexample, in
the following sense: if Vopěnka’s principle is consistent, then for any class-defining formula $\phi$, there is a model of set theory in which Vopěnka’s principle fails (and even in which the
first-order Vopěnka scheme fails), but in which $\phi$ does not define a counterexample to it. See here yet again.
The relation to the theory of locally presentable categories is the contents of chapter 6 of
The relation to combinatorial model categories is discussed in
• Jiří Rosický, Are all cofibrantly generated model categories combinatorial? (ps)
The implication of VP on homotopy theory, model categories and cohomology localization are discussed in the following articles
• Jiří Rosický, Walter Tholen, Left-determined model categories and universal homotopy theories Transactions of the American Mathematical Society Vol. 355, No. 9 (Sep., 2003), pp. 3611-3623 (JSTOR
• Carles Casacuberta, Dirk Scevenels, Jeff Smith, Implications of large-cardinal principles in homotopical localization Advances in Mathematics Volume 197, Issue 1, 20 October 2005, Pages 120-139
• Joan Bagaria, Carles Casacuberta, Adrian Mathias, Jiri Rosicky Definable orthogonality classes in accessible categories are small, arXiv
|
{"url":"http://www.ncatlab.org/nlab/show/Vop%C4%9Bnka's+principle","timestamp":"2014-04-18T08:14:40Z","content_type":null,"content_length":"53432","record_id":"<urn:uuid:b4210e5c-07ea-4b4d-bf9c-faeef0450793>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How VCO Tuning BW Affects Phase Noise
Phase noise is a differentiating performance specification when comparing voltage-controlled oscillators (VCOs). it can be estimated for VCOs operating at different frequencies by means of Leeson's
expressions, although a VCO parameter other than frequencyits tuning bandwidthalso plays a key role in the phase-noise performance of the oscillator. What follows is the development of an expression
that suggests that the phase noise performance of two VCOs implemented using the same approach and technology are related by the ratio of the VCO tuning bandwidths. Te validity of the expression will
be put to the test against the known measured phase-noise performance of several commercial VCOs.
Leeson's equation1 relates the phase noise of an oscillator to various factors:
L(ΔΩ) = 10log{(2FkT/P[sig])0/2QΔΩ)^2> 1/f3)/|ΔΩ?|>} (1)
This empirical expression has been found to capture the key features of oscillator phase noise for a wide range of oscillator designs. as shown in Fig. 1, the phase noise associated with this
expression can be broken into three distinct regions:
1. A -30-dB/decade slope for close-in offset frequencies, due to upconversion of 1/f noise;
2. A -20-dB/decade slope for intermediate offset frequencies due to limited resonator quality factor (Q); and
3. Flat phase noise for large offsets where performance is dominated by the noise characteristics of the oscillator's active device(s).
Of course, these distinct regions are approximations; a real VCO has a phase-noise characteristic that is smooth and continuous. For offset frequencies of interest here, a slope in the range of 20 to
30 dB/decade will apply. A slope of "(20 + x)" will be used to represent a general case, where x is in the range of 0 to 10.
There is a wealth of literature on oscillator circuit design which relates phase-noise performance to key circuit parameters. A phase-noise figure of merit (FOM) is often used to assess a given
oscillator design in a normalized sense, and can also be used to compare different oscillator solutions. This FOM takes account of phase noise, oscillation frequency, offset frequency, and DC power
consumption. The FOM is given by the following expression: FOM = L(ΔΩ) -20log(Ω[0]/ΔΩ) + 10log(P[DC])
This FOM is generally applied to offsets in the 20-dB/decade region.
In a conventional VCO, the resonator incorporates at least one varactor diode. By adjusting the varactor bias voltage, its capacitance and, hence, the VCO's frequency of oscillation, is tuned.
When a VCO is oscillating at a given varactor-diode tuning voltage, the tank AC voltage waveform associated with the oscillation imposes a swing on the varactor voltage around its DC tuning value.
This AC varactor bias results in a varying instantaneous frequency of oscillation which manifests itself as phase-noise skirts around an ideal single-tone spectral response.
The greater the AC tank voltage impact on the instantaneous frequency of oscillation, the worse the associated phase noise will be. Circuit designers typically go to great lengths to optimize VCO
phase noise. One method that can be employed to achieve this involves the use of back-to-back varactor pairs.^2 In the presence of AC tank swing, as one of these varactor diodes shifts to a higher
instantaneous capacitance value, the other varactor shifts to a lower instantaneous capacitance value. The net effect is that the combined series capacitance shifts much less than the change in
capacitance for either individual varactor diode. It is this combined series capacitance which plays a key role in determining the instantaneous frequency of oscillation. A more stable frequency of
oscillation and improved phase noise result.
In a similar fashion, any noise or spurious signal on the applied varactor tuning voltage modulates the varactor capacitance and instantaneous frequency of oscillation, and also degrades the
oscillator phase noise. Good decoupling of the tuning port minimizes this effect.
Generally, fixed-frequency oscillators do not contain varactor diodes, so the phase-noise degradation associated with the above mechanism does not arise. Therefore, fixed-frequency oscillators tend
to exhibit better phase-noise performance than tunable VCOs using the same design approach and technology.
For VCO designs based on the same circuit topology and technology, it is important to understand how phase noise is related to tuning bandwidth. Experience and common sense suggest that the greater
the tuning range, the worse the phase noise. However, a more quantitative understanding would be helpful in predicting tradeoffs between phase noise and VCO tuning bandwidth.
Consider the general case of a baseline VCO with tuning sensitivity S0 (MHz/V). Assume for simplicity's sake that the tuning characteristic is linear over the full tuning range (Fig. 2). This
assumption is not strictly true for real VCOs but makes it possible to draw conclusions that are approximately applicable to real VCO designs.
It follows that:
f[osc0] = f[00] + S[0]V[tune] where f[00] is the frequency of oscillation at zero tuning voltage.
Page Title
Assume that the VCO's tuning voltage is set to V[tune_DC]. Now, consider the phase-noise characteristic. In the presence of AC tank swing, assume that the instantaneous varactor voltage traverses the
range V[tune_min] to V[tune_max].
The corresponding instantaneous frequency range is f[osc0_min] to f[osc0_max], where:
f[osc0_min] = f[00] + S[0] V[tune_min] f[osc0_max] = f[00] + S[0] V[tune_max]
Note that V[tune_DC] ~ 0.5 (V[tune_min] + V[tune_max]). The range of instantaneous frequencies of oscillation is given by: Δf[osc0] = S[0](V[tune_max] V[tune_min]) = S[0]ΔV[tune] The greater the
value of Δf[osc0], the higher and wider the skirts on the spectral response and the worse the associated phase noise.
Now, consider a second VCO, similar to the baseline design, based on the same circuit topology and technology, operating at the same center frequency, and delivering similar output power so that the
tank swing (ΔV[tune]) is also the same as for the baseline design. Assume, however, that this new VCO has sensitivity S[1]. Its idealized tuning characteristic is also shown in Fig. 2, where it has
been arbitrarily assumed that S[1] > S[0].
Since the same ΔV[tune] is assumed, it follows that:
Δf[osc1] = S[1] ΔV[tune] = S[1]/ S[0] Δf[osc0] = nΔf[sc0]
where n represents the ratio of the tuning sensitivities.
If one assumes minimum and maximum VCO DC tuning voltage settings of V[DC_min] and V[DC_max], respectively, it also follows from Fig. 2 that the corresponding frequency tuning ranges for the two VCOs
are S[0]ΔV[DC] and S[1]ΔV[DC], where: ΔV[DC] = V[DC_max] - V[DC_min]
Hence, n is also equal to the ratio of the frequency tuning ranges or bandwidths of the two VCOs.
Assuming that the baseline VCO has the idealized phase-noise response shown in Fig. 3, one can immediately infer that the spectral response of the second VCO is similar to that of the baseline, but
is scaled in terms of offset frequency by n. The idealized phase-noise response of the second VCO is also shown in Fig. 3, for two scenarios (n > 1 and n
Now, consider frequency offset Δf[x] in Fig 3, where the baseline phase noise is PN[0] (dBc/Hz). The same phase noise is obtained at nΔf[x] for the second VCO with sensitivity S[1]. Assuming local
slope of the phase-noise characteristic is (20 + χ) dB/decade, then it follows that the phase noise of the second VCO at Δf[x] is: PN[1] = PN[0] + (20 + x) log(nΔf[x]/Δf[x]) = PN[0] + (20 + x) log(n)
For example, if the tuning bandwidth of an oscillator is doubled (n = 2) while maintaining the same center frequency, then the phase noise at a given offset is degraded by 6 dB in 20 dB/decade zone,
or by 9 dB in the 30 dB/decade zone.
Thus far, constant VCO center frequency has been assumed. Next, consider the following, more generalized question: Given a baseline VCO, VCO1, with parameters: center frequency f[01]01/(2π)>, tuning
bandwidth = BW[01] and phase noise at offset Δf[REF] = PN[01], what is the anticipated phase-noise performance PN[02] at the same offset Δf[REF] of another VCO, VCO2 with parameters: center frequency
f[02]02/(2π)> and tuning bandwidth = BW02 assuming the same technology and circuit topology? Note that the tuning bandwidths alluded to are absolute values with dimension Hz, as distinct from percent
tuning range values relative to the center frequencies.
An estimate of PN[02] can be developed using a two-step operation. In the first step, VCO1 is scaled to the same center frequency as VCO2 by means of an ideal frequency multiplication. The scaled VCO
is designated VCO1'. Assuming the same output power, Q etc., it is clear that the associated phase noise of VCO1' at Δf[REF] is given by:
PN'[01] = PN[0]1 + (20 + x) log (Ω[02] /π[01])
Note that after this first step, the tuning bandwidth of VCO1' has scaled to: BW'[01] = BW[01] (Ω[02] /Ω[01])
Page Title
Now that the center frequency of VCO1' is the same as the target VCO2, the next step involves adjusting the tuning bandwidth of VCO1' using Eq. 2 above so that it equals the tuning bandwidth of
target VCO2. In other words, the tuning bandwidth is changed from BW'[01] to BW[02]. This further modified VCO is designated as VCO1''. From Eq. 2, the associated phase noise of VCO1'' is given by:
PN''[01] = PN'[01] + (20 + x) log (n)
where n = ratio of tuning bandwidth = BW[02] / BW'[01] = (BW[02] / BW[01])(ω[01] / ω[02]).
PN''[01]= PN[01] + (20 + x) log (ω[02] / ω[01]) + (20 + x) log 02 / BW[01]) (ω[01] / ω[02])> = PN[01] + (20 + x) log 02 / BW[01]>
Given that VCO'' is now consistent with VCO2 in terms of both its center frequency and its tuning bandwidth, PN''[01] can be viewed as an estimate of the phase-noise performance of VCO2 at offset Δf
[REF]. This result suggests that the phase-noise performance is independent of f[01] and f[02] and only depends on the ratio of BW01 and BW02: PN (f[02], BW[02]) = PN (f[01], BW[01]) + (20 + x) log
02 / BW[01]> (3)
Given a baseline VCO design with a known level of phase-noise performance, it should be possible to estimate the phase-noise performance of a similar VCO based on the same design approach and
technology, with the only information required being the ratio of the tuning bandwidths of the two VCOs. The operating frequencies do not need to be known. Of course, this conclusion is an
approximation and should be applied with caution, although it is a useful relationship and can be used to assess and compare the performance of different VCOs.
To demonstrate the effectiveness of Eq. 3, consider a case study using commercial VCO products for comparison. A family of VCOs based on the same circuit topology and fabricated with the same GaAs
heterojunction-bipolar-transistor (HBT) process are shown in the table.
A conventional way of analyzing these data would be to consider the relationship between phase noise and VCO center frequency. Even though it does not take tuning bandwidth into account, this is an
obvious trend to investigate in the context of Eq. 1. To be precise, the relationship of interest is phase noise versus the logarithm of a VCO's center frequency. Data in this format are plotted in
Fig. 4. There appears to be an underlying approximately linear trend, and this is confirmed by the R2 value of 83%. This shows a strong correlation between phase noise and the logarithm of a VCO's
center frequency. Based on this, one may be inclined to conclude that this is a fundamental relationship.
However, further analysis is required. Figure 5 plots VCO tuning bandwidth versus center frequency. Again, it is clear that there is a strong correlation between these two parameters. There is
nothing fundamental about this relationship, of course; but rather, for this VCO family, it is a result of the product and market requirements which have dictated that the tuning bandwidth generally
increases with increasing center frequency (though there are exceptions to this general trend, as will be discussed presently). Given the more or less linear relationship between tuning bandwidth and
center frequency, this introduces the prospect that the correlation shown in Fig. 4 may not be indicative of a fundamental relationship at all, but rather, may be simply a byproduct of a fundamental
relationship between phase noise and tuning bandwidth (which would be consistent with Eq. 3) in conjunction with this particular VCO family's strong correlation between tuning bandwidth and center
With this in mind, a plot of phase noise versus the log of the tuning bandwidth is presented in Fig. 6. Again, these parameters are strongly correlated, and this time the correlation factor (88%)
exceeds that in Fig. 4. There is stronger correlation between these parameters than shown in Fig. 4 for phase noise vs. the logarithm of center frequency.
This is further illustrated by examining two VCOs in the family, models MAOC-009265 and MAOC-009266. The general trend of center frequency versus tuning bandwidth is violated for these two VCOs, and
the corresponding two points are highlighted within the red rings superimposed on the curves in Fig. 4, Fig. 5, and Fig. 6. It is clear that while these two points lie significantly off the trend
lines in both Fig. 4 and Fig. 5and more importantly, the lines drawn between the two points deviate a great deal from those same trend linesthe points lie much closer to the trend line in Fig. 6, and
a line connecting the two points there does not deviate so much from that trend line. For these two VCOs which are not well modeled by the trend line in Fig. 4, the trend line in Fig. 6 and hence Eq.
3 provide a significantly better fit.
Thus, the VCO family data are reasonably consistent with a linear relationship between phase noise and the logarithm of the tuning bandwidth. Also shown in Fig. 6, in addition to the R2 value, is the
best-fit linear expression: phase noise at an offset of 100 kHz = -112.97 + 30.07log[10](tuning bandwidth). This is consistent with Eq. 3 and suggests that the log multiplier is about 30. This in
turn suggests that the 100-kHz offset phase noise is closer to the 30-dB/decade region rather than the 20-dB/decade region of the phase-noise spectral response (i.e. x ~ 10).
It should be noted that the tuning bandwidths for the VCOs in the table are the target ranges needed to meet typical customer requirements, and are not necessarily the same as the actual ranges
achieved by the designs. In fact, the designs exceed the ranges listed in the table with some margin; the degree of margin is not necessarily the same for all 12 VCOs. As a consequence, it would be
unreasonable to expect a perfect R2 of unity in Fig. 6. The fact that an R2 value of 88% is achieved provides confidence that using Eq. 3 as an approximate guide will yield meaningful phase-noise
predictions that should prove helpful in future VCO developments. Care should be taken to ensure that a suitable value for x is assumed in the (20 + x) dB/decade local slope term.
1. D. B. Leeson, "A simple model of feedback oscillator noise spectrum," Proceedings of the IEEE, Vol. 54, February 1966, pp. 329-330.
2. A. Bonfanti, S. Levantino, C. Samori, and A. L. Lacaita, "A Varactor Configuration Minimizing the Amplitude-to-Phase Noise Conversion in VCOs," IEEE Transactions on Circuits and SystemsI, Regular
Papers, Vol. 53, No. 3, March 2006, pp. 481-488.
|
{"url":"http://mwrf.com/print/active-components/how-vco-tuning-bw-affects-phase-noise","timestamp":"2014-04-20T09:17:46Z","content_type":null,"content_length":"33450","record_id":"<urn:uuid:8606b49c-3155-4b6d-912a-d25202a74671>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Anchoring Principles
Sails and Rigging
Anchoring Principles
By Jim Hancock Posted: Jul 20, 2004
In the July issue of SAIL we published Jim Hancock's Scope for Improvement?" in which the author argues that the traditional anchoring rules of thumb may need revision. More scope is almost always
better, unless you don't have the room to swing, or the bottom is foul.
There's more to know, however, and this is where Hancock lays out the facts surrounding the catenary curve, as we promised in your print issue.
Additional Thoughts on Anchoring
Most anchors used on pleasure boats today are designed to dig into the sand or mud bottom and provide a secure attachment. Figure 1 shows an anchor on a flat seabed with the lead angle, which is the
angle that the rode makes with the seabed measured upward from the bottom. The force (T) that the rode exerts on the anchor can be broken into horizontal and vertical components. Th and Tv.
If your anchor is dug in on a flat bottom and has a lead angle of 0 degrees, the force that the rode exerts on the anchor is purely horizontal (Th=T,Tv=0). This is the optimal lead angle and is what
one wants to achieve in an anchoring system. As the lead angle increases, so does the upward force on the anchor. Upward force has a tendency to break the anchor out of the bottom. Eight degrees is
considered by most experts to be the maximum lead angle that can be maintained without compromising an anchor's holding ability. Lead angle is controlled by veering out more, or less, scope; scope is
the ratio of rode length (S) to water depth (D). The catenary curve
The curve that an anchor chain follows is known in mathematics as a catenary curve. The name catenary comes from the Latin @I {catena}, which means chain, and was first applied to this curve by the
mathematician Huygens in 1690. The following year Huygens and fellow mathematicians Leibniz and Bernoulli each obtained independently the equation that describes the catenary curve. For those who are
more mathematically inclined, here is the catenary equation that these great mathematicians developed is as follows:
y = h*cosh((x+k)/h) + b where: x,y = Coordinate pair on the catenary curve k = Constant of integration (horizontal offset of vertex) b = Constant of integration (vertical offset of vertex) h= H/Θ H =
Horizontal load on chain Θ = Weight of chain / unit length.
Notice that the catenary equation uses the @I {hyperbolic cosine} cosh(), which is an exponential function and is quite different from the trigonometric cosine, cos(), that most of us learned in high
I should also add that the analysis I used in the July piece on anchoring uses the catenary equation to determine the shape of a hypothetical chain. I evaluated the equation over a range of values in
an Excel spreadsheet, and then computed the length of the chain using a numerical integration of the curve.
Although I believe that we can learn a great deal about anchoring from the solution of the catenary equation there are some significant assumptions that I made in my analysis and I acknowledge them
1. My model uses a static analysis, meaning that the wind, wave and current loads are assumed to be steady. In gusty or rough conditions it is possible that the dynamic loads on an anchoring system
could be significant.
2. The seabed is assumed to be flat. While this is often the case it is just as often not the case. Although the need for this simplifying assumption is obvious, anchoring on a sloped or uneven
seabed can have a dramatic effect on an anchor’s ability to hold, either favorably or unfavorably.
3. The length of the chain between the bow roller and the surface of the water is treated as though it is submerged. However attributing buoyancy to this short segment of chain is a conservative
assumption when computing minimum scope since the dry chain would give the rode a deeper sag.
* The condition of being free-hanging means that there are no external loads in mid-span. For example, the main cables of a suspension bridge are not free hanging because they have vertical suspender
cables hanging from them. Because of this, the main cables on a suspension bridge follow a @I{parabolic} curve rather than a catenary curve.
|
{"url":"http://www.sailmagazine.com/anchoring-principles","timestamp":"2014-04-19T18:17:35Z","content_type":null,"content_length":"50932","record_id":"<urn:uuid:951f6c07-7963-4bcf-90d9-9717c45ee74c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Arranging Rose Bushes
Date: 9/13/95 at 15:42:28
From: Christine Heffernan
Subject: Question to be answered
Hi Dr. Math -
This is the first time I've tried this. Here's a question.
A gardener laying out a rosebed found she could plant 7 rose bushes in
such a way that they formed 6 straight lines with 3 rose bushes in each
line. How was this possible? Show a diagram.
(b) How could she plant 10 rosebushes so that she has 5 lines with 4
rosebushes in each?
For the questions, the distance between rosebushes does not have to
be equal.
That's it. Good Luck and thanks in advance.
Date: 9/13/95 at 17:20:56
From: Doctor Steve
Subject: Re: Question to be answered
Try laying out the bushes in circular patterns.
Write us back if you want more hints.
-Doctor Steve, The Geometry Forum
Date: 9/13/95 at 21:30:52
From: Christine Heffernan
Subject: Re: Question to be answered
I've been trying that for a while now and I'm still completely stuck on
both parts of the question!
I think I need more help!
Date: 9/13/95 at 21:42:26
From: Doctor Steve
Subject: Re: Question to be answered
Try putting six bushes in a circle and one in the middle. Now look for
your six straight lines with 3 bushes in each.
- Doctor Steve, The Geometry Forum
Date: 01/17/2001 at 08:39:20
From: Peter Bradford
Subject: Rose Bushes
There is an error in your answer.
Unless you consider a straight line as TWO straight lines, depending on
which end you start, there are only 3 rows by this method.
The true solution is to put 3 bushes at the corners of an equilateral
triangle. Three more, each at the midpoint of a side of the triangle.
Finally, the seventh bush is placed at the centroid of the triangle.
Date: 01/18/2001 at 13:45:49
From: Doctor Greenie
Subject: Re: Rose Bushes
The writer says "the true" solution is with 3 bushes at the corners of
an equilateral triangle...
The way I read the original problem, it is solved with the first 3
bushes at ANY points forming a triangle, the next 3 bushes at the
medians of that triangle, and the 7th bush at the point of intersection
of those medians.
-Doctor Greenie
|
{"url":"http://mathforum.org/library/drmath/view/57931.html","timestamp":"2014-04-17T08:19:35Z","content_type":null,"content_length":"7198","record_id":"<urn:uuid:fe39a824-ad32-4571-af01-20131a53bda3>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Archives of the Caml mailing list > Message from Markus Mottl
[Caml-list] entropy etc. in OCaml
Date: -- (:)
From: Markus Mottl <markus@o...>
Subject: Re: [Caml-list] entropy etc. in OCaml
On Wed, 11 Aug 2004, Viktor Tron wrote:
> Does anyone know of an OCaml library implementing or binding
> Information Theoretical concepts like data entropy?
> (e.g. gsl does not provide these).
I don't have a separate library for that, but you might want to take a
look at AIFAD:
It implements several functions for computing the entropy of discrete
data including structured values. Its purpose is decision tree learning
on structured data (represented by algebraic datatypes).
One function for computing entropy from histograms is the following
(taken from src/entropy_utils.ml in the distribution):
let calc_entropy histo n =
if n = 0 then 0.0
let rec loop sum ix =
if ix < 0 then sum
let freq = histo.(ix) in
if freq = 0 then loop sum (ix - 1)
let ffreq = float freq in
loop (sum +. ffreq *. log ffreq) (ix - 1) in
let sum = loop 0.0 (Array.length histo - 1) in
let f_n = float n in
log2 f_n -. sum /. f_n /. log_2
If you pass it an array of integers (histogram) that counts the frequency
of class values in variable "histo" and the number of observations in "n"
(must be the sum of frequencies in the histogram), then this function
will return you the entropy in bits.
Markus Mottl http://www.oefai.at/~markus markus@oefai.at
To unsubscribe, mail caml-list-request@inria.fr Archives: http://caml.inria.fr
Bug reports: http://caml.inria.fr/bin/caml-bugs FAQ: http://caml.inria.fr/FAQ/
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
|
{"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2004/08/702c2759e20654251cfd1c41dd62a244.en.html","timestamp":"2014-04-17T18:37:58Z","content_type":null,"content_length":"6942","record_id":"<urn:uuid:a80ec1e9-3bcd-451f-9f9d-3d31e2302467>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
With all due reverence towards Different,
1. Difference and Duality
Dear All,
There is a contrast between a map with a section and a map with a retract: in the case of a map with a section we can find proof [of belonging]; and in the case of a map with a retract we find
that there is one proof.
The contrast appears a little more pronounced: in trying to show that there is one proof f of h in r in the case of a map r with a section s, the closest I got is srf1 = srf2; and in trying to
find proof of ‘h is in s‘ in the case of a map s with a retract r, the closest I got is srh.
Is there a duality (knowing answer vs. knowing that there is one answer) in here?
Please forgive me for rushing to ask for help if this is something that can become clear as I go through ‘The contravariant parts functor’ article (I’m still on the first page of the article in
Conceptual Mathematics).
Thank you,
For admin and other information see: http://www.mta.ca/~cat-dist/
2. You actually make it appear so easy together with your presentation however I find this matter to be really something which I think I’d by no means understand. It kind of feels too complicated
and very wide for me. I am having a look ahead for your subsequent submit, I’ll try to get the cling of it!
Trackbacks & Pingbacks
|
{"url":"http://conceptualmathematics.wordpress.com/2012/09/28/with-all-due-reverence-towards-different/","timestamp":"2014-04-16T10:48:17Z","content_type":null,"content_length":"67237","record_id":"<urn:uuid:aebcdf74-151e-481f-a5d0-4b661fdf8253>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Challenge 60 - Snow!
Hi there peeps! I hope you're all fine and snow free this week? We've got a few
and drabs left, well apart from the mountain that got dug off the drive! I think that little baby will take a bit of shifting ;)) I seriously need you all to hope we don't get any more though! I have
a BIG surprise planned for my OM for next week - snow will seriously ruin it ;(( So no snow for here please Mr Weatherman ;)) Oh and fingers crossed this stinker of a cold goes too, I mean three
weeks for a cold? How can that be fair? I look as if I'm getting into the Christmas spirit by doing a fair impression of Rudolph, well in the red nose part anyway. Don't worry I'm not growing
So thanks as always to our players from last week. I'm always surprised how everyone interprets the sketches differently. So the lucky winner as picked by the
Email me your details with the challenge number in the title line and I'll get your prize sorted for you. I know a couple of peeps are waiting on stuff from em - I'm sorry but my list is getting way
out of order ~blushes~ I'll sort it out soon.
Right what would we like to see this week? Well snow of course! But only on your entries!
This week we're kindly sponsored by
Meyer Imports
, their
glass glitter is just amazing! Its so sparkly you need shades on!
So here's the snowy inspiration from the Gorjussettes;
So there you are - inspired by all that lovely snow? Oh that's the best place for it in my opinion. On a card!
So please enter via Mr Linky and link to your entry and not your blog. New stuff only please and no backdating.
15 comments:
1. Superb cards everyone. Great inspiration.xx
2. Gorgeous cards from the DT.xx
3. Love the DT Cards they are stunning, Hazelxo
4. Great DT creations! Thank you for the inspiration and for the challenge.
I hope you like my NEW Christmas card :-)
Dorly Weitzen, Israel
5. Lovely work ladies. Merry Christmas. I was different this time.. and did a scrapbook page.
6. Great DT-cards!
Hugs Gisela
7. Fantastic DT cards and what a fab challenge idea.
8. Awesome challenge - Thank You!!... and gorgeous creations by the DT too xx
9. thank You for a great challenge! gorgeous DT cards!
10. Beautiful samples from your DT. Thanks for the fun challenge.
11. Thank you for another great challenge.
12. Great cards. You want snow? You hate snow! Lynne.x
13. WOW - amazing projects!!
14. Absolutely beautiful cards by the DT. Hugs, dj
15. Fabulous DT projects! Thank you. Pami x
|
{"url":"http://totallygorjuss.blogspot.com/2010/12/challenge-60-snow.html","timestamp":"2014-04-16T14:00:35Z","content_type":null,"content_length":"166888","record_id":"<urn:uuid:c0c5bdde-fe00-41a0-b762-267f7dbec644>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pre-Algebra: Multiplying Fractions & Mixed Numbers Video | MindBites
Pre-Algebra: Multiplying Fractions & Mixed Numbers
About this Lesson
• Type: Video Tutorial
• Length: 7:03
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 75 MB
• Posted: 01/28/2009
This lesson is part of the following series:
Pre-Algebra Review (31 lessons, $61.38)
In this lesson, Professor Burger demonstrates two ways to multiply mixed numbers (numbers that contain both a fraction less than one and a whole number that is greater than or equal to one). In the
first method, you will learn to convert the mixed numbers into improper fractions (fractions that are in fraction form but equal to an amount in excess of one). Then, you will learn that there is a
way to simplify these numbers by canceling in the numerator and denominator. Next, you will multiply the two fractions together and write either as an improper fraction, or convert back to a mixed
number. In the event a problem involves multiplying a whole number and a mixed number, Professor Burger shows you how to use the distributive property to solve it. Begin by writing the mixed number
as a sum, and then distribute the whole number to both parts of the sum. Simplify and multiply the numbers to find your answer.
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, Pre Algebra. This course and others are available from Thinkwell, Inc. The full course can be found
at http://www.thinkwell.com/student/product/prealgebra. The full course covers whole numbers, integers, fractions and decimals, variables, expressions, equations and a variety of other pre algebra
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart
of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals,
including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the
theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
Very happy with how easy this video is to follow!
~ brittanie
My review's title sums it up but I would just like to add that I am very happy to have videos like these for my own review. I wish my school teachers made it look this easy.
Very happy with how easy this video is to follow!
~ brittanie
My review's title sums it up but I would just like to add that I am very happy to have videos like these for my own review. I wish my school teachers made it look this easy.
Let’s take a look at how to multiply two mixed numbers together. It turns out that we already know what to do. We just have to convert each of the mixed numbers separately into a fraction, an
improper fraction. Once we have that, we are all set.
Let’s take a look at an example together. 2 and one-third multiplied by 1 and one-fifth. What do we do? First step, convert everything to improper fractions, because multiplying fractions is so easy
that we want to do that. So first, I'm going to take this piece right here. How do I do it? I take the 2 times the 3, that’s 6, then add 1, that’s 7. So I have 7 over 3. Now what do I do? Well, now I
multiply, and what is this fraction going to be? Well, 1 times 5 is 5, plus 1 is 6, over 5. So this is the improper fraction version of this. This is the improper fraction version of this, so now I
have two fractions I'm multiplying. No problem. I multiply the tops; then multiply the denominators. I could multiply all of that out, but let’s see if we can simplify it first. Well, I can write the
6 as 2 times 3. Now I see a common factor in the numerator of 3 and a common factor in the denominator of 3, so this fraction is equivalent, is equal, to the fraction I get by removing those common
factors. So I see 7 times 2, which is 14, over 5. So the answer is 14 over 5. But, if you want to write this as a mixed number, we would long divide, and what would we see? We’d see that this also
could be written as 2 and four-fifths. So we can convert this to a mixed number. Either answer is correct. It just depends upon the form that the questioner wants the answer to be given in, either
one of these. Neat! They are both the same.
All right, are you ready? How about 1 and two-fifths multiplied by 1 and one-quarter? Well, what’s the first step? The first step is always the same; we have to convert each of these mixed numbers
into an improper fraction. Once we have two fractions, even if they are improper, we know how to multiply them: multiply the tops, the numerators, multiply the bottoms, the denominators. So what is
the improper fraction version of 1 and two-fifths? I take the 1 and multiply it by 5, that’s 5, add a 2 and get 7. 7 over 5. Then I multiply that by the improper fraction version of 1 and
one-quarter. So 1 times 4 is 4, plus 1 is 5, over 4. Now let me show you something here. Already, because we’ve looked at so many of these questions, I'm beginning to see there is a factor of 5 in
the denominator; there’s a factor of 5 in the numerator, so I could write the next step, which would have been 7 times 5, divided by 5 times 4, but right here I’m going to take an extra little secret
step and notice that I have, since I am multiplying, the factors that are common already in sight. So I could just jump right to writing 7 over 4, since this is 7 times 5 over 5 times 4. That’s a
common factor in the numerator and denominator, and I can get rid of that. So, that’s a perfectly fine improper fraction answer to this problem, 7 over 4. If you wanted to write it as a mixed number,
you would have to peel off a 1, and we’d see 1 and three-fourths. Either of these answers is correct. It just depends on how the person who asked us the question wants us to answer it. So pretty
Let me show you one last question. This is a little, teeny-bit different. Four, the number 4, the whole number 4, times 2 and one-ninth. Now we can proceed in the usual way if we want to, which is to
convert this to an improper fraction and then do multiplication, but let me show you a really great, easy way of doing this. That is to remember what 2 and one-ninth means. Remember what “and” means?
“And” means “plus.” That’s just one whole big thing right there; it is a number. It’s 2 and one-ninth. So if I write that out, I’d see 4 times, and this is just going to be one number here, 2 and
one-ninth. Now I want us to see that this is really the exact same mathematical statement, because here is the 4 and here is the 4. Here is the times, and here is the times. And this number is 2 and
one-ninth, and this number is 2 and one-ninth, but it is all one number, so I put it in this parentheses. When you look at it this way, now we can use the wonderful distributive property that we know
that multiplication and addition enjoy. So take the 4 and multiply it by the 2 and then add it to taking the 4 and multiplying it by the one-ninth, so you could jump right to 4 times 2, plus 4 times
one-ninth. So what does this give me? Well, I know 4 times 2 is 8. And 4 times one-ninth, well, remember that 4 is the same thing as 4 over 1, so I multiply tops and multiply bottoms, and I see 4
over 9. How can I write that? I can write that as a mixed number by just remembering the secret code, 8 and four-ninths.
So when you are taking a whole number and multiplying it by a mixed number, it’s actually sort of sneaky. If you want, you can break up that mixed number into the sum of the whole number part plus
the fraction part, but make sure you put parentheses around that whole thing, because that is just one number. Then use the distributive law, and out comes your answer.
Neat stuff! Congratulations, you can now multiply mixed numbers. Pretty neat stuff. I'll see you soon.
Get it Now and Start Learning
Embed this video on your site
Copy and paste the following snippet:
Link to this page
Copy and paste the following snippet:
|
{"url":"http://www.mindbites.com/lesson/1277-pre-algebra-multiplying-fractions-mixed-numbers","timestamp":"2014-04-20T21:51:20Z","content_type":null,"content_length":"61299","record_id":"<urn:uuid:d57aea7f-c5bc-4b13-96e4-51be72f52dc0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Posts by Ana
Total # Posts: 902
Find the center and radius of the given circle. 1. x^2 + y^2 + 10y + 21 = 0
Algebra 2
Energy Use Light Output 35 2250 50 4000 70 5800 100 9500 150 16000 What is the quadratic model for the light output with respect to use energy use? I dont understand this at all Please help ):
Examine the table below as the spearation distance is doubled ow much does the electrostatic forces decrease? 1 20.0 Cm 0.1280N 2 40.0 cm 0.0320N 3 60.0 Cm 0.0142N 4 80.0 CM 0.0080N 5 100.0 CM 0.0051
Mrs. Brock's class has 23 students. Each student has 8 markers. Which describes how to find the total number of markers in the class? a) Multiply the ones. Record; b) Multiply the tens. Record; c)
Multiply the tens. Record. Multiply the ones. Record. Add the partial produc...
Mrs. Brock's class has 23 students. Each student has 8 markers. Which describes how to find the total number of markers in the class? a) Multiply the ones. Record; b) Multiply the tens. Record; c)
Multiply the tens. Record. Multiply the ones. Record. Add the partial produc...
I am doing a journal review paper for Psychology. What are the implications of this work (i.e. what light does it shed on the particular topic related to psychology)? I do not understand the
question. What is it asking me to do? Can someone explain the question so I am can und...
I am doing a journal review paper. What are the implications of this work (i.e. what light does it shed on the particular topic related to psychology)? I do not understand the question. What is it
asking me to do?
Write your answer using only positive exponents. 30u^-15/34u^-16
what is a millogram
thanks mrs. sue !!! you are the best you are an lifesaver!!! and just to be sure 2 and 7/8
Cassandra has a square photo that has lengths of 4 inches. She cuts 5/8 inch off the top and 1/2 inch off the bottom. What is the new height of the photo?
I got the correct answer for the thrower, but for the catcher to find the second velocity, I follow the formula but it still counts it as wrong. What could I be doing wrong?
Draw and label the following: segment AB intersects segment CD at point M. M is the midpoint of both segments. If AM = x2 + 8x + 17 and MB = 12x + 14, find AB.
What are some examples of self-monitoring techniques? What are some examples of self-managing techniques? NOTE: I just need some ideas.
So, just basically talking about myself?
What am I suppose write about when I am asked to describe my self-concept? Note: Do NOT just give a link to the definition. I know what it means already.
B. Assume the child is 1.2 meters tall and casts a shadow of 6.4 meters. Determine the angle of the sun at that moment.
three pies are cut into sixths .how many pieces will there be
how to simplify the following,please step by step 56x^7+21x^4+63x^3
Algebra 1
Determine the electron geometry, molecular geometry, and idealized bond angles for each of the following molecules. In which cases do you expect deviations from the idealized bond angle?1.pf3 2.sbr2
3.ch3br 4.bcl3
Juan had three pieces of stick. The longest one measured 64 cm and is twice as long as the other two pieces. Find the total length of the sticks.
The answer is 70. First off if we want to know the LEAST possible value for a, then b, c, d, e have to be the MAXIMUM value. In this case it would be e=109, d=108, c=107, and b=106. Now that we have
values for all the variables except a, we can solve for a. To solve for a, we ...
A person has invested $6000 . Part of the money is invested at 3% and the remaining amout at 4% . The annual income from the two investment is $225.How much is invested at each rate?
What is the answer here? Find the sum of the biggest odd numbers formed by the digits 1 and 2 and the smallest even number formed by the digits 6,7,8
30% of 59
If you must stop your moving car in a hurry, should you slam on the brakes and lock them?
Solve the equation (3^x)^2-12*3^x+27=0 Simplify and write without negative exponents: 3^2(x^-2y^3)^3/4^1/2*x^-4cuberooty^2
Is the entire problem to the fourth power, or just the denominator?
Write as a single logarithm of a single quantity ln(3)+1/2ln(x+2)-4ln(1+sqrtx)
aluminum has a density of 27g/cm3 if a chunk of aluminum with a mass of 16g is placed in a graduated cylinder partially filled with water how much will the water level rise
El domingo pasado, yo iba con mis amigos al cine. This says Last sunday, I was with my friends at the movie theater. It would be better if it said El domingo pasado fui al cine con mis amigos. Last
Sunday I went to the movie theater with my friends.
THE FIRST SENTENCE SAYS I did not speak with anyone in school. THE SECOND SAYS No, I spoke with nobody in the school ............................... I think the first is more correct.
Spanish 8th grde
¿Qué compraste la semana pasada? What did you buy last week? Compré una mochilla la semana pasada. I bought a backpack last week. ¿Bailaron Uds.? Did they dance. Sí, bailamos. Yes, we danced, (this
depends on what type of spanish, latin ameri...
5th Grade Writing
its. because it's means it is. The horse hurt it is leg, doesn't make sense.
the magnetic force on a straight 0.15m segment of wire carrying a current of 4.5 A is 1.0 N. What is the magnitude of the component of the magnetic field that is perpendicular to the wire?
if you have a large cube with a surface area of 32 and a smaller cube with the surface area of 18 what would the ratio be for the smaller figure to the larger figure a)2:5 b)5:6 c)3:4 d)6:7
if you have a large cube with a surface area of 32 and a smaller cube with the surface area of 18 what would the ratio be for the smaller figure to the larger figure a)2:5 b)5:6 c)3:4 d)6:7
if you have a large cube with a surface area of 32 and a smaller cube with the surface area of 18 what would the ratio be for the smaller figure to the larger figure a)2:5 b)5:6 c)3:4 d)6:7
logarithmic differentiation
y= 4th root (x^2+1)/(x^2-1)
If two number cubes are tossed, what is the probability of getting a sum that is less than 6, given that 1 number cube shows a 3 ?
The vertices of a triangle are listed below. H(-2,4), I(22,4), J(10,-1) Which of the following correctly classifies the triangle? 1.The triangle is an obtuse isosceles triangle. 2.The triangle is a
right scalene triangle. 3.The triangle is an obtuse scalene triangle. 4.The tri...
what will be the boiling point 54.5 grams of aluminum sulfate dissolved in 805 grams H2O
how many grams of sodium phosphate were dissolved in 382 grams of water in a 6.0 m solution
In Circuits lab, we had to construct different types of operational amplifiers. The inverting and summing amplifiers produced similar output voltage to the calculated ones but were not negative. I
was just wondering why they didn't reverse the sign in the experiment?
In Circuits lab, we had to construct different types of operational amplifiers. The inverting and summing amplifiers produced similar output voltage to the calculated ones but were not negative. I
was just wondering why they didn't reverse the sign in the experiment?
An aqueous solution contains 0.196 M hydrosulfuric acid and 0.190 M hydrochloric acid. Calculate the sulfide ion concentration in this solution. [S2-] = _________mol/L.
An aqueous solution contains 0.216 M ascorbic acid (H2C6H6O6) and 0.131 M hydrochloric acid. Calculate the ascorbate (C6H6O62-) ion concentration in this solution. [C6H6O62-] =______________ mol/L.
find the probability p(-1.14 < z < 1.01) using the normal standard distribution
your best friend ate 3/8 of a pizza from 4 diffrent pizza.how much pizza did he ate?
for 6z=126, mike wrote that z=756. explain what mikes mistakewas. then tell the correct answer.
true or false . refraction always involves a change in the speed and direction of a wave wat is required in order for diffraction to occur? wat causes wave interference?
whats the prime factorization of 62
calculate the mass of kool aid required to prepare 200.0 ml of 0.30 M kool aid
SUBTRACT 4 4/7 AND 3 2/3
Solve for X 0.0421=0.0015-x/0.025+1/2 i'm lost
put one white flower in a vase with 5 cups of water and one white flower in a vase with 5 cups of water with food coloring (red green blue yellow or any color) for one week. make sure you take
pictures and write a few sentences on what you noticed (since he's only in kinde...
A 933 kg block is pushed on the slope of a 20 ◦ frictionless inclined plane to give it an initial speed of 50 cm/s along the slope when the block is 1.5 m from the bottom of the incline. The
acceleration of gravity is 9.8 m/s 2 . 1.5 m 933 kg 20 ◦ What is the speed...
Your percent
adam smith supported three different views of theory of value (are the followings correct, i will add them to my essay) 1. Labor cost: he supported that everyone must produce his own goods using own
labor and exchange the surplus with other goods. In this model of society no c...
How many iron atoms are in 0.32 mol of Fe2O3
If cosecant of theta equals 3 and cosine of theta is less than zero, find sine of theta, cosine of theta, tangent of theta, cotangent of theta and secant of theta
College Algebra
A manufacturer of radios estimates that his daily cost of producing x radios is given by the equation C=350+5x. The equation R=25x represents the revenue in dollars from selling x radios. *Profit
function,P(x)=R(x)-C(x),P(x)=20x-350 *How many radios should the manufacturer pro...
A ball rolls off a platform that is 5 meters above the ground. The ball's horizontal velocity as it leaves the platform is 4.8 m/s. (a) How much time does it take for the ball to hit the ground? (Use
g = 10 m/s2.) . s
Suppose you dump a 5-lb sack of sugar into Bellingham Bay. Assume the sugar disperses uniformly throughout all of the world s oceans. Estimate how many molecules of sucrose would be found in each
liter of sea water.
Suppose you dump a 5-lb sack of sugar into Bellingham Bay. Assume the sugar disperses uniformly throughout all of the world s oceans. Estimate how many molecules of sucrose would be found in each
liter of sea water.
Assets = Liabilities + Equity How can you tell which company have the strongest financial position? Is buy the highest in assets, highest in liabilities, or highest in equity? I am confused.
Algebra II
If you react 2.0 g AgNO3 and 2.5 g Na2SO4, what is the percent yield if the actual yield is 1.6 g Ag2SO4? 2AgNO3 + Na2SO4 → Ag2SO4 + 2NaNO3
billy and amy want to ride their bikes from their neighborhood to school which is 14.4 kilometers away. it takes amy 41 minutes to arrive at school. bill arrives 21 minutes after amy. how much faster
(in meters/second) is amy's average speed for the entire trip?
Simplify the following ratio 12XY: 8x
Mi Mama es bonita A Mi Me Gusta El Chocolate Yo Fui al parque ayer Mi mama y yo fuimos al parque A Mi me gusta jugar juegos.
is 3 times 41 120
An isosceles trapezoid has bases length 19 and 11 centimeters and length 13 centimeters. What is the area of the trapezoid to the nearest tenth?
1.Voy a nadar en la piscina. Necesito(I aam going to swim in the swimming pool. I need) un ttraje de baño(bathing suit). 2.Está lloviendo mucho. Necesito (it is raining a lot. I need..). el
impermeable y la sombrilla (a raincoat and umbrella.) 3.No puedo ver bien...
Algebra 2
thanks so much!
Algebra 2
How would you find the x intercepts of 2x^2+17=0. The original equation is 2(x-3)^2-1.
Chemistry 1010
Sorry I thought I was posting a question not an answer to your question!!
Chemistry 1010
What is the mass in grams of one mole of water
a gate 6m wide requires a force of 5.2 kg.wt. applied at the end to open it. what force would have to be applied (A) at the middle and (B) at 1.5m from the hinges, to open it?
yeah baby bring it on!
I love you!
____ are fuels made from renewble resources You have to use ___ to grow plants and extract ethanol 2 alternative biofuels are ___ and ___
Definition for Biofuel,Hydroelectricity,Geothermal,and wind energy
Eli Whitney revolutionized indusrty in the north and south and impacted the Civil War. What 2 key innovations is he attributed with?
Can someone to correct mistakes? Thank you! Le 4 septembre a été volé un vélo. Une fille avait laissé son vélo près d'un bar. Quand elle est sortie du bar elle a vu que son vélo avait été volé et ...
I put What Veterans day means to me is how people risk their lives so we can live our lives the way we want to. To me it's a special time to remember all the people who fought for us. It's a day to
honor all our veterans... That's all I put so far
I have to write a papre about Veteran's Day and what it means to me. I have some stuff about it but I need more. Can anyone please help me???
How does waves deposit?
The element strontium has ccp packing with a face-centered cubic unit cell. The volume of the unit cell is 2.25 x 10-25 L. Calculate the density (g/mL) of the element.
When the valve between the 2.00-L bulb, in which the gas pressure is 2.00 atm, and the 3.00-L bulb, in which the gas pressure is 4.50 atm, is opened, what will be the final pressure in the two bulbs?
Assume the temperature remains constan
Spanish 8th grade
TRANSLATION OF THE PASSAGE: My family and I like to spend time outdoors. When we have time, we leave the house and go to camp in the mountains or near a lake. It is important to bring the things
necessary when going to camp because if you don't have what you need, you can ...
A hot air balloon is traveling vertically upward at a constant speed of 3.9 m/s. When it is 20 m above the ground, a package is released from the balloon. After it is released, for how long is the
package in the air? The acceleration of gravity is 9.8 m/s
could the quantitative precipitation be useful to determine the mass percent of copper in a sample that also contains magnesium ions?
Women in Society
How does the way a society treats and regards its women affect the way outsiders perceive that society? How do you think cinema - and other forms of pop culture and mass media - affect that
How can you divide 64 into 2 groups to get a ratio of 3 to 5?
How can you divide 64 into 2 groups ti get a ratio of 3 to 5?
find the value of each variable 3m+5+4m-10
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>>
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Ana&page=2","timestamp":"2014-04-20T11:27:12Z","content_type":null,"content_length":"27394","record_id":"<urn:uuid:9bc26370-296c-47b0-bc47-d7d0f2b8d5f9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reviews of books, websites, poster sets, movies, and other resources for learning and teaching the history of mathematics.
A new sourcebook containing the works in their original form along with a translation and a brief commentary.
A discussion not only of the mathematics of pi, but of its applications through the centuries.
A math history class visits the 'Beautiful Science' exhibit at the Huntington Library in Southern California.
A new history of mathematics text which asks lots of questions about the history and the mathematics.
A general mathematics website with much information on the history of mathematics.
An introduction to the prime numbers in many of their aspects.
A history of the concept of zero from as far back as the Babylonian period, with philosophical excursions into the meaning of "nothing".
The two final volumes of the MAA tercentenary series on Euler present numerous papers on various aspects of Euler's life and work.
A collection of articles from the American Mathematical Monthly by experts on the evolution of various fields of mathematics.
|
{"url":"http://www.maa.org/publications/periodicals/convergence/critics-corner?page=8&device=mobile","timestamp":"2014-04-16T14:21:31Z","content_type":null,"content_length":"25559","record_id":"<urn:uuid:64429ac2-2cc9-49fb-8453-aaeded8f02b5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Limit Does NOT Exist - Multivariable Function Video | MindBites
Limit Does NOT Exist - Multivariable Function
About this Lesson
• Type: Video Tutorial
• Length: 8:39
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 83 MB
• Posted: 09/09/2010
This lesson is part of the following series:
Calculus I and II and III (225 lessons, $158.40)
Calculus III: Third Semester Calculus (49 lessons, $39.60)
Calculus: Partial Derivatives (16 lessons, $14.85)
Multivariable Calculus: Showing a Limit Does NOT Exist - In this video, I spend a bit of time talking about what it means for a limit not to exist and do one example showing that the limit does not
This video is available to be viewed online for free on PatrickJMT’s YouTube channel or his main site. Here, you can purchase access to download a version of it for offline viewing.
About this Author
987 lessons
Many of the videos for sale are also available on my website: http://PatrickJMT.com or you can also do a search and check out my popular 'math channel' on YouTube (PatrickJMT). You can watch all of
them there for FREE!
Masters degree in Mathematics; former math instructor at a top 20 university!
Recent Reviews
This lesson has not been reviewed.
Please purchase the lesson to review.
This lesson has not been reviewed.
Please purchase the lesson to review.
Get it Now and Start Learning
Embed this video on your site
Copy and paste the following snippet:
Link to this page
Copy and paste the following snippet:
|
{"url":"http://www.mindbites.com/lesson/8982-limit-does-not-exist-multivariable-function","timestamp":"2014-04-18T01:01:35Z","content_type":null,"content_length":"49814","record_id":"<urn:uuid:87527553-5e23-439a-af5a-41aecc93c220>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Insulation Ratings: R Factor, K Factor & C Factor
The R Factor, K Factor and C Factor of Insulation
Insulation terms can be quite confusing to anyone outside the industry. If you’ve ever bought insulation for your house, you know that insulation with a high R factor is better. But what, exactly,
does that mean? Did you know that the R Factor depends on other factors?
When it comes to buying more specific insulation products, like removable insulation jackets for hot pipes, understanding the particulars of the three measures of insulation is key. In order to
understand the well-known R factor it’s important to understand the factors upon which it relies, the K factor and C factor.
If you are seeking out the formulas to calculate these factors, check out our R, C & K Factor Formula Conversion Table that lists all the formulas discussed in this article. For more information,
read on!
I Want
K Factor C Factor R Factor
K Factor C=K-factor/in. of thickness R= in. of thickness/K-factor
C Factor K=C-factor in. of thickness R=1/C-factor
I Have R Factor K=in. of thickness / R-factor C-1/R-factor
None of K=BTU-in / hr – ft² – °F C=BTU/(hr · ft · °F) R=h · ft² · °F/BTU
the Above
The K Factor of Insulation
What is the K Factor of Insulation?
The K factor of insulation represents the material’s thermal conductivity or ability to conduct heat. Usually, insulation materials have a K Factor of less than one. The lower the K factor, the
better the insulation. The textbook definition of the K factor is “The time rate of steady heat flow through a unit area of homogeneous material induced by a unit temperature gradient in a direction
perpendicular to that unit area.” That’s a mouthful.
Simplified, the K factor is the measure of heat that passes through one square foot of material that is one inch thick in an hour.
How Do I Calculate the K Factor of Insulation?
If R factor is unknown, the formula to calculate the K factor of insulation is:
K factor = BTU-in / hr – ft^2 - °F
British Thermal Unit-Inch Per Square Foot Per Hour Per Fahrenheit Degree
If R factor is know, this easier formula can be used to calculate the K factor:
K factor = inches of thickness / R Factor
How is the K Factor of Insulation Reported?
K factors are reported at one or many mean temperatures. The mean temperature is the average of the sum of the hottest and coldest surface temperatures which the insulation material is exposed to.
Put more simply, the testing apparatus that determines the K factor of an insulation material places a sample of the material between two plates, hot & cold, and the average of the surface
temperatures of those two plates equals the mean temperature. Here is an example of an insulation material’s K factor report:
Notice that as the mean temperature rises, so does the K factor. It’s important to observe the K factor & mean temperature when comparing insulation.
The C Factor of Insulation
What is the C Factor of Insulation?
The C factor stands for Thermal Conductance Factor. The C factor, like the K factor, is a rate of heat transfer through a material. The lower the C factor, the better the insulating properties of the
material. It is the quantity of heat that passes through a foot of insulation material.
The C factor is dependent upon the thickness of the insulation. The thicker the insulation is, the lower the C factor will be and thus the better the material will be at insulating. This is one of
the main differences between the K factor and C factor, because generally the thickness of an insulation material will not affect its K factor.
How Do I Calculate the C Factor of Insulation?
If the K factor is unknown, the formula to calculate the C factor of insulation is:
Btus/hour per square foot per degree F of temperature difference
If the K factor is known, this easier formula can be used:
C factor = K factor / inches of thickness
The R Factor
What is the R Factor of Insulation?
The R factor pulls together all of the information of the other factors and makes it easy to judge the effectiveness of insulating material. The R factor of insulation can be found most easily of the
insulation factors discussed, and it is the most popular indicator of a material’s insulator properties. Generally it is listed on an insulation material’s label. The R factor stands for thermal
resistance. The higher the R factor, the better the insulation.
The textbook definition for R Factor is: the quantity determined by the temperature difference, at steady state, between two defined surfaces of a material or construction that induces a unit heat
flow through a unit area. Aren’t textbooks supposed to be helpful?
To simplify, the R factor is a variable value that measures the ability of a material to block heat rather than radiate it. The variable is the C factor, which is dependent upon the thickness of the
material. It is the opposition to the flow of heat energy.
How Do I Calculate the R Factor of Insulation?
There are a few formulas to calculate the R factor of insulation, depending on if your K factor and C factor are known. If they are unknown, you can use this formula:
degrees F times square feet of area times hours of time per Btus of heat flow
If your K factor and C factor are known, you can use these formulas which may be easier to use:
R-factor = 1 / C-factor
R-factor = thickness in inches / K-factor
Keep in mind that these factors are specific to the materials being measured. For instance, if you take two pieces of batting that are rated at R 11 and put them together, you won’t get R 22
coverage. Understanding the ins and outs of the factors that help describe how effective insulation material is will go a long way to helping make the buying process easier.
5 Responses to Insulation Ratings: R Factor, K Factor & C Factor
1. i need to know the k factor of our cooling tunnel ,we have ambient of 26 deg c and we need temp inside tunnel 6 deg c,what should be the k factor and what should be the dencity of pu foam ?
2. Hi Syed,
It seems you are asking about a tunnel rather than a pipe or valve. Is that correct?
3. “Insulation Ratings: Calculating R Factor, K Factor
& C Factor” ended up being genuinely compelling and
insightful! Within the present day society that’s tricky to deliver.
Thx, Earlene
4. Hi,
I bought a freezer with 100% rigid Urethane insulation. It was poured-in-place with density of 2.2 Lbs/cu ft, K = 0.125, U=0.025, R=40. This insulation got wet with 1 gallon of water in a 8ftx4ft
panel. Since Urethane is a hygroscopic material, does this material with these specifications absorb large amount or small amount of water?
□ Cured polyurethane foam absorbs very little water, and the foam structure should be closed cell, so that water cannot infiltrate through the insulation either. I would expect the water to
make very little difference in the performance of the insulation.
This entry was posted in Recent News & Updates. Bookmark the permalink.
|
{"url":"http://www.thermaxxjackets.com/insulation-ratings-r-factor-k-factor-c-factor/","timestamp":"2014-04-18T00:56:10Z","content_type":null,"content_length":"45458","record_id":"<urn:uuid:47cdf748-0ea7-4df2-b7a8-254b08a5eaeb>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When fluid dynamics mimic quantum mechanics (w/video)
The latest news from academia, regulators research labs and other things of interest
Posted: Jul 29, 2013
When fluid dynamics mimic quantum mechanics (w/video)
(Nanowerk News) In the early days of quantum physics, in an attempt to explain the wavelike behavior of quantum particles, the French physicist Louis de Broglie proposed what he called a “pilot wave”
theory. According to de Broglie, moving particles — such as electrons, or the photons in a beam of light — are borne along on waves of some type, like driftwood on a tide.
Physicists’ inability to detect de Broglie’s posited waves led them, for the most part, to abandon pilot-wave theory. Recently, however, a real pilot-wave system has been discovered, in which a drop
of fluid bounces across a vibrating fluid bath, propelled by waves produced by its own collisions.
In 2006, Yves Couder and Emmanuel Fort, physicists at Université Paris Diderot, used this system to reproduce one of the most famous experiments in quantum physics: the so-called “double-slit”
experiment, in which particles are fired at a screen through a barrier with two holes in it.
In the latest issue of the journal Physical Review E ("Wavelike statistics from pilot-wave dynamics in a circular corral"), a team of MIT researchers, in collaboration with Couder and his colleagues,
report that they have produced the fluidic analogue of another classic quantum experiment, in which electrons are confined to a circular “corral” by a ring of ions. In the new experiments, bouncing
drops of fluid mimicked the electrons’ statistical behavior with remarkable accuracy.
When the waves are confined to a circular corral, they reflect back on themselves, producing complex patterns (grey ripples) that steer the droplet in an apparently random trajectory (white line).
But in fact, the droplet’s motion follows statistical patterns determined by the wavelength of the waves. (Image: Dan Harris)
“This hydrodynamic system is subtle, and extraordinarily rich in terms of mathematical modeling,” says John Bush, a professor of applied mathematics at MIT and corresponding author on the new paper.
“It’s the first pilot-wave system discovered and gives insight into how rational quantum dynamics might work, were such a thing to exist.”
Joining Bush on the paper are lead author Daniel Harris, a graduate student in mathematics at MIT; Couder and Fort; and Julien Moukhtar, also of Université Paris Diderot. In a separate pair of
papers, appearing this month in the Journal of Fluid Mechanics, Bush and Jan Molacek, another MIT graduate student in mathematics, explain the fluid mechanics that underlie the system’s behavior.
Interference inference
The double-slit experiment is seminal because it offers the clearest demonstration of wave-particle duality: As the theoretical physicist Richard Feynman once put it, “Any other situation in quantum
mechanics, it turns out, can always be explained by saying, ‘You remember the case of the experiment with the two holes? It’s the same thing.’”
The pilot-wave dynamics of walking droplets.
If a wave traveling on the surface of water strikes a barrier with two slits in it, two waves will emerge on the other side. Where the crests of those waves intersect, they form a larger wave; where
a crest intersects with a trough, the fluid is still. A bank of pressure sensors struck by the waves would register an “interference pattern” — a series of alternating light and dark bands indicating
where the waves reinforced or canceled each other.
Photons fired through a screen with two holes in it produce a similar interference pattern — even when they’re fired one at a time. That’s wave-particle duality: the mathematics of wave mechanics
explains the statistical behavior of moving particles.
In the experiments reported in PRE, the researchers mounted a shallow tray with a circular depression in it on a vibrating stand. They filled the tray with a silicone oil and began vibrating it at a
rate just below that required to produce surface waves.
They then dropped a single droplet of the same oil into the bath. The droplet bounced up and down, producing waves that pushed it along the surface.
The waves generated by the bouncing droplet reflected off the corral walls, confining the droplet within the circle and interfering with each other to create complicated patterns. As the droplet
bounced off the waves, its motion appeared to be entirely random, but over time, it proved to favor certain regions of the bath over others. It was found most frequently near the center of the
circle, then, with slowly diminishing frequency, in concentric rings whose distance from each other was determined by the wavelength of the pilot wave.
The statistical description of the droplet’s location is analogous to that of an electron confined to a circular quantum corral and has a similar, wavelike form.
“It’s a great result,” says Paul Milewski, a math professor at the University of Bath, in England, who specializes in fluid mechanics. “Given the number of quantum-mechanical analogues of this
mechanical system already shown, it’s not an enormous surprise that the corral experiment also behaves like quantum mechanics. But they’ve done an amazingly careful job, because it takes very
accurate measurements over a very long time of this droplet bouncing to get this probability distribution.”
“If you have a system that is deterministic and is what we call in the business ‘chaotic,’ or sensitive to initial conditions, sensitive to perturbations, then it can behave probabilistically,”
Milewski continues. “Experiments like this weren’t available to the giants of quantum mechanics. They also didn’t know anything about chaos. Suppose these guys — who were puzzled by why the world
behaves in this strange probabilistic way — actually had access to experiments like this and had the knowledge of chaos, would they have come up with an equivalent, deterministic theory of quantum
mechanics, which is not the current one? That’s what I find exciting from the quantum perspective.”
Subscribe to a free copy of one of our daily
Nanowerk Newsletter Email Digests
with a compilation of all of the day's news.
|
{"url":"http://www.nanowerk.com/news2/newsid=31571.php","timestamp":"2014-04-16T16:49:02Z","content_type":null,"content_length":"45164","record_id":"<urn:uuid:f0e95472-8b6e-4c99-b0a3-f54037046244>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
09. Fourier Methods
To navigate this multi-page article:
Use the drop-down lists and arrow icons located at the top and bottom of each page.
Click here to download the
If you have access to Sage and would like to interact with this article's content, either download the worksheet linked above or copy the contents of this cell into your running Sage session:
# special equation rendering
def render(x,name = "temp.png",size = "normal"):
if(type(x) != type("")): x = latex(x)
latex.eval("\\" + size + " $" + x + "$",{},"",name)
# magnitude of 2-component cartesian vector
def mag(x):
return sqrt(x[0]^2+x[1]^2)
var('a, b, c, d, f, g, m, q, t, x, y')
# time-domain function plot
def time_plot_funct(funct,nl,w=0.25,color='#006000',labels=('Time','Amplitude')):
n = 500
list = [[t*w/n,funct(t*w/n,nl)] for t in range(n)]
return list_plot(list,rgbcolor=color,axes_labels=labels,plotjoined=True)
# time-domain FFT array plot
def time_plot_fft(fftobj,freq,line_color='#006000',labels=('Time','Amplitude')):
lfft = len(fftobj)
data = [[x*2*freq/lfft,fftobj[x][0]] for x in range(lfft/2)]
return list_plot(data,plotjoined=True,rgbcolor=line_color,axes_labels=labels)
# frequency domain plot
def fft_plot(fftobj,line_color='#006000',labels=('Frequency','Amplitude')):
size = len(fftobj)
dt = 1.0/size
list = map(lambda x:(mag(x))*dt,fftobj[:size/2])
return list_plot(list,rgbcolor=line_color,plotjoined=True,axes_labels=labels)
Some of the earlier articles in this set are helpful in setting the stage for this one, in particular Tuned Circuits because of its discussion of complex numbers.
Many mathematical disciplines, though fascinating, have little practical utility, while others are the reverse (useful but not interesting). Because of innate complexity or computational
workload, some disciplines had few practical applications until people began to apply computers to mathematical problems. Fourier methods fall into the latter category — until recently, it was an
esoteric field with much theoretical substance but few tangible applications.
In mathematics and at present, the computer is properly seen as an inferior partner, freeing people to view mathematical ideas from a higher vantage point while churning out pedestrian results.
(Someday computers may generate their own mathematical ideas, but we aren't there yet.) With respect to the present topic, the computer's ability to produce practical results has turned an
esoteric mathematical field into a practical one that is central to much of modern technology — applications for Fourier methods abound in nearly every part of modern science and technology:
□ Efficient data compression in graphics, video and television
□ Analysis of general periodic data
□ Signal detection and recovery in noisy environments
□ Fourier Deconvolution in seismology and optics
And many others. The basic idea of the Fourier Transform is that a periodic function in the time domain has an equivalent form in the frequency domain, and further, that these forms are
Time Domain Frequency Domain
The first thing to understand about Fourier methods is that the frequency representation of a periodic waveform may represent a much smaller amount of information than the time representation. In
the above example the right-hand graph contains only three data points, but those points are adequate to fully reconstruct the time-domain wave at the left. This greatly simplifies the
description and reconstruction of periodic waves.
This economy of description is one reason Fourier methods are so widely used to compress data streams to a small fraction of their original size, and is the primary reason old-style analog
television has been replaced by the newer, much more efficient digital methods.
The second thing to understand is that Fourier methods may be used to recover signals apparently lost in noise:
Time Domain Frequency Domain
The above graphs show how easily periodic signals can be made to pop out of background noise. More sophisticated Fourier methods are used to extract very weak signals from distant spacecraft and
cosmological sources in the presence of high natural noise levels.
Sage is able to perform many kinds of Fourier operations, including symbolic transforms and the numerical Fast Fourier Transform to be described later. Before we explore these applications, let's
look at the mathematical underpinnings.
Fourier Transform
Using somewhat simplified notation, the Fourier Transform of a time-domain function f(x), for real numbers x and y, looks like this:
And the inverse Fourier Transform looks like this:
□ f(x) = a time-domain function
□ f(y) = a frequency-domain function
□ x = an argument with units of time
□ y = an argument with units of frequency
□ e = the base of natural logarithms
□ i = the imaginary unit (i^2 = -1)
Euler's Formula
At first glance it may seem counterintuitive that the simple exponentiation shown above (e^±2πixy) can produce a transformation between the time and frequency domains. But the expression at the
heart of the Fourier integral makes this possible. Called Euler's Formula, it has an interesting property:
The relationship expressed by Euler's Equation turns out to be ideal for representing time-varying waves in a compact frequency-domain form and the reverse. We can use Sage to graph this key
property of Euler's Equation:
lbl = text("$e^{2\pi i \\theta} = cos \, 2\pi\\theta + i \, sin \, 2\pi\\theta $"
rp = plot(lambda x:N(e^(2*pi*i*x)).real(),x,0,2,rgbcolor='#006000')
ip = plot(lambda x:N(e^(2*pi*i*x)).imag(),x,0,2)
The green trace represents the real part and the blue trace represents the imaginary part of the Euler's Equation result. (Some of the complexity in the above worksheet cell represents
workarounds for bugs in the present version of Sage (4.1.2)).
The above represents the foundation on which all Fourier methods are built. Next, we turn to a practical method for analyzing real-world data sources that aren't continuous.
Discrete Fourier Transform
The Discrete Fourier Transform (DFT), derived from the above method, represents a way to process discontinuous data, for example data sampled at regular time intervals from a source such as a
radio receiver. Because the data take the form of a set of discrete samples, the analysis method changes:
And the inverse DFT:
□ x[n] = a complex time-domain data set
□ X[k] = a complex frequency-domain data set
□ N = the size of the data sets (which are assumed to be equal)
It can be seen that the continuous integration of the Fourier Transform (equations (1) and (2) above) has been replaced by a summation of discrete terms in the corresponding DFT (equations (4)
and (5)). The notation used in the above DFT equations is a bit nonstandard and bears explanation — for each of the DFT equations one sees two indices: n and k. A careful look at the equations
reveals that there are two nested loops, both with size determined by N, the size of the data sets, so it's reasonable to assume the DFT becomes very slow for large data sets — and this is true
(a DFT over N data points requires O(N^2) operations).
Nyquist-Shannon Sampling Theorem
One important property of the DFT merits special attention. According to the Nyquist–Shannon sampling theorem, to properly analyze a band of frequencies extending up to some upper limit of N
Hertz, one must gather data at time intervals of 1/(2N) — for example, a maximum frequency of 100 Hz would require a minimum sampling rate of 1/200 second. Why this is true is easier to show than
to tell:
ff(f,t) = sin(2*pi*f*t)
f = 5
def _(n = (5..20)):
list = [ff(f,t/(2*n)) for t in range(0,n)]
The above plot is the result for a 1/(2N) sampling rate — just enough to analyze the data. By the way — those readers who are not pasting these examples into Sage are missing out — the
"@interact" feature shown above produces an interactive user control (not shown here) that changes the sampling rate and redraws the graphic.
Fast Fourier Transform
Because of the slowness of the classical DFT, a method called the Fast Fourier Transform (FFT) has been devised, which improves the speed to O(N log N) operations (an improvement roughly
proportional to N/log(N)). Many people think the FFT dates back only to the 1960s and the Cooley-Tukey algorithm, but various forms of, and steps toward, the FFT date back to Carl Friedrich Gauss
(1777-1855), who used an early version of the FFT to convert asteroid sightings into orbital predictions.
Most modern DFT computations are actually performed as FFTs behind the scenes, and because the FFT outcome is identical to a DFT with the sole difference that it's much faster, I won't dwell on
the FFT except to say that it's the method used for nearly all Fourier operations on real-world, incremental data sets.
Making Waves
Recovering Spectral Lines
It turns out that Fourier methods can be used to create or analyze any imaginable periodic waveform. To be more specific, any waveform can be constructed out of a set of frequency-domain data
points. Conversely, an unknown waveform can be converted into a compact set of unique spectral lines for analysis. Here's an example — let's use Sage to create and analyze a complex waveform.
Copy this cell's contents into Sage:
# declare constants
epsilon = 1e-12
samples = 25
# define a waveform-generating function (wgf)
wgf(t) = 12*sin(t*3) + 11*sin(t*4) + 10*sin(t*5) + 9*sin(t*6)
# display the time-domain waveform for the wgf
# create a Fast Fourier Transform object
# and populate it with wgf data
fft = FFT(samples)
for t in range(samples):
fft[t] = wgf(2*pi*t/samples)
# convert to frequency domain
# locate and print spectral lines
for f in range(samples/2):
v = 2*mag(fft[f])/samples
if(v > epsilon):
print "Frequency: %3.0f, Amplitude: %3.0f" % (f,v)
Frequency: 3, Amplitude: 12
Frequency: 4, Amplitude: 11
Frequency: 5, Amplitude: 10
Frequency: 6, Amplitude: 9
An Unknown Spectrum
The above example is meant to show the steps in the analysis process — construct a waveform, then analyze its spectrum. But we knew what the result would be, since we wrote the function to be
analyzed. Let's try a more challenging example by creating a waveform whose spectrum is unknown.
# declare constants
epsilon = .05
samples = 1000
# define a waveform-generating function (wgf)
def wgf(t,nl): return sgn(N(cos(2*pi*t)))
# display the time-domain waveform for the wgf
# create a Fast Fourier Transform object
# and populate it with wgf data
fft = FFT(samples)
for t in range(samples):
fft[t] = wgf(t/samples,0)
# convert to frequency domain
# locate and print spectral lines
for f in range(samples/2):
v = mag(fft[f])/(samples*2/pi)
if(v > epsilon):
print "Frequency: %3.0f, Amplitude: %.4f" % (f,v)
Frequency: 1, Amplitude: 1.0000
Frequency: 3, Amplitude: 0.3333
Frequency: 5, Amplitude: 0.2000
Frequency: 7, Amplitude: 0.1428
Frequency: 9, Amplitude: 0.1111
Frequency: 11, Amplitude: 0.0909
Frequency: 13, Amplitude: 0.0769
Frequency: 15, Amplitude: 0.0666
Frequency: 17, Amplitude: 0.0588
Frequency: 19, Amplitude: 0.0526
Okay, based on the plot, it seems we've created a square wave — a wave that moves from -1 to 1 abruptly, and spends an equal amount of time at -1 and 1 (e.g. has an average value of zero). Now
look at the list of spectral lines — there are lines located at frequencies of 1, 3, 5, 7 ... okay, it seems there is a spectral line at each of the odd multiples of the base frequency (such
multiples of a base frequency are called "harmonics"). And the amplitudes are 1 for the first harmonic, 0.3333 for the third, 0.2000 for the fifth, 0.1428 for the seventh ... hmm. Well, 0.3333 is
a pretty good approximation of 1/3, 0.2000 seems like 1/5, and 0.1428 might be ... umm ... let's use Sage to check that last one:
1/0.1428 [shift-Enter]
So 0.1428 is about 1/7. But we can get Sage to perform this 1/x operation for all the harmonics. Let's rewrite just one line of the above Sage cell, the one that prints the numerical results.
Copy the content below and use it to replace that one line, then run the cell again (remember to indent the new content just like the old):
print "Frequency: %3.0f, Amplitude: 1/%.0f" % (f,1/v)
Frequency: 1, Amplitude: 1/1
Frequency: 3, Amplitude: 1/3
Frequency: 5, Amplitude: 1/5
Frequency: 7, Amplitude: 1/7
Frequency: 9, Amplitude: 1/9
Frequency: 11, Amplitude: 1/11
Frequency: 13, Amplitude: 1/13
Frequency: 15, Amplitude: 1/15
Frequency: 17, Amplitude: 1/17
Frequency: 19, Amplitude: 1/19
Square Wave Theory
Okay, we now have a theory about square waves — it seems they are composed of odd-numbered harmonics (multiples of the base frequency), each with an amplitude that is the reciprocal of its
harmonic number. That seems simple enough, but how can we test this theory? In the above example, we wrote a simple function that created a square wave in the time domain, then we used a Fast
Fourier Transform to convert the result into the frequency domain, then by examining the spectral lines we developed a theory about what a square wave's spectrum is.
At this point, remember that the Fourier Transform and its inverse are reciprocal operations — one operation is the exact opposite of the other. On that basis, maybe we can turn the above process
around — maybe we can start in the frequency domain, write a function that creates spectral lines, transform them to the time domain, and see what the time-domain waves look like? Let's try that:
# declare constants
samples = 5000
def _(frequency = (1..8),harmonics = (1..64)):
# create a Fast Fourier Transform object
fft = FFT(samples)
# populate the FFT object with spectral data
fd2 = frequency*2.0
for n in range(harmonics):
a = fd2*(1+n*2)
b = -fd2*4/(a*pi)
fft[a] = (0,b)
# convert from frequency domain to time domain
# display the result
That looks promising. Again, those of my readers who are not copying these examples into Sage are really missing out — the above is actually an interactive example with two sliders, for frequency
and for the number of generated harmonic lines. As the number of generated harmonic lines increases, the plot eventually looks like this (64 harmonics):
Formal Definition
By experimenting with the above Sage code I found that (1) a multiplication by 4/π normalized the result (e.g. made the steady-state values equal to -1 and 1), (2) odd-numbered spectral lines are
all we need and (3) each spectral line's amplitude is the reciprocal of its harmonic number. I also concluded that an ideal square wave would have an infinity of spectral lines. This experiment
leads to a formal definition of a square wave in the time domain:
□ y = the wave's amplitude as a function of time
□ t = time, seconds
□ f = frequency, Hertz
Readers may wonder why the above square wave plot, created using a frequency-domain to time-domain conversion, doesn't have clean transitions between -1 and 1, instead each transition has
noticeable artifacts. It turns out these artifacts are caused by the limited resolution of the generating system — if a larger Fourier transform array is used, and if more harmonics are
generated, this effect declines. Here is a plot using an array size of 500,000 data points and 16,384 generated harmonics:
But the irony of this apparently clean plot is that the artifacts seen earlier haven't gone away — instead, because of the greatly increased resolution, they're just too narrow for the plotting
code to resolve and draw.
Fourier methods are a powerful and fascinating branch of mathematics, with many practical applications — they're well worth studying. But I want to emphasize something about the examples on this
page — much effort is expended to get certain results that are sometimes thwarted by various computer limitations, and in some cases we make an educated guess about the identity of a function or
the meaning of a result. I want to emphasize that this activity represents a limited kind of mathematics, with much breadth but little depth. (But if the reader comes away with an understanding
of Fourier methods, then this page's purpose is served.)
I mentioned earlier that computers are ideally a sort of assistant, not a full partner in mathematical activities — able to produce results a human cannot, but needing a human's guidance about
which problems to tackle and how to tackle them. As we ascend through the levels of mathematics, moving from simple operations to higher levels of abstraction, the relationship between human and
computer reverses. A computer can produce some kinds of low-level results almost instantly — much faster than the most adept human — but beyond a certain conceptual depth, for its own protection
the computer needs to head back to the shallow end of the pool.
As time passes and as mathematical software improves, the competence line between computer and human (the line below which the computer can produce better results) continues to rise. But by
freeing humans from tedious, uncreative kinds of calculation, this trend frees humans to explore mathematical territories previously too difficult to approach. This page's topic is an example —
Fourier methods now see much wider application, and are much better understood, than before computer mathematics.
This may sound overly optimistic, but I anticipate a nice outcome for computers in mathematics — people will progressively understand more mathematics and be more comfortable with mathematical
methods and ideas. I think this will happen because we no longer have to produce low-level results by hand — that liberation motivates people who might otherwise miss the entire mathematical
There are a number of additional Fourier-method resources here at arachnoid.com, for environments other than Sage:
|
{"url":"http://www.arachnoid.com/sage/fourier.html","timestamp":"2014-04-21T04:33:21Z","content_type":null,"content_length":"35800","record_id":"<urn:uuid:5a2ca020-b44a-4af5-8328-f1296407925b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
|
San Anselmo Geometry Tutor
Find a San Anselmo Geometry Tutor
...I received an A in both classes. It was a very interesting subject and I'll be more than happy to help students with any difficulty they have. I took many pharmacy-related courses and have
solid and extensive knowledge in pharmacology.
22 Subjects: including geometry, calculus, statistics, biology
...These students were mainstreamed in general education classes at an elementary school in Davis, CA. After earning my credential, I worked as a literacy instructor with students K-8 in
Washington DC. After doing that for a year with an education non-profit, they promoted me to coordinator and then again to program director.
16 Subjects: including geometry, English, algebra 2, special needs
I began my tutoring business over twenty years ago and have specialized in creating individualized learning plans for each child I instruct. Not every child learns in the same way and so I focus
on meeting the child where they are and implementing appropriate curriculum and teaching techniques base...
17 Subjects: including geometry, reading, English, GED
...As a former teacher, I understand how to lead, mentor, encourage and motivate students as they develop new knowledge and capabilities. As a professional scientist, I apply basic math principles
to most of my work. As a father, I teach my children math at home.
25 Subjects: including geometry, reading, English, writing
...Working with a kind and patient tutor can be instrumental in this process. I have experience with the ACT, SAT, PSAT, GRE, and LSAT.I began studying French intensively at the United Nations
International School in New York City when I was 11 and immediately fell love with the language. I continued studying it through college and have traveled to France many times.
48 Subjects: including geometry, Spanish, English, reading
|
{"url":"http://www.purplemath.com/San_Anselmo_geometry_tutors.php","timestamp":"2014-04-21T00:11:26Z","content_type":null,"content_length":"23987","record_id":"<urn:uuid:15436ee0-62ac-4ecc-bb3e-185f8d5ec621>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Data transformation Manikandan S - J Pharmacol Pharmacother
Year : 2010 | Volume : 1 | Issue : 2 | Page : 126-127
Data transformation
S Manikandan
Assistant Editor, JPP, Pondicherry, India
Date of Web Publication 10-Nov-2010
Correspondence Address:
S Manikandan
Department of Pharmacology, Indira Gandhi Medical College and Research Institute, Kadirkamam, Pondicherry
DOI: 10.4103/0976-500X.72373
PMID: 21350629
How to cite this article:
Manikandan S. Data transformation. J Pharmacol Pharmacother 2010;1:126-7
Preparing the data facilitates statistical analysis and this includes data checking, computing-derived data from the original values, statistically adjusting for outliers and data transformation. The
initial three methods have been explained previously in this series. ^[1] Data transformation also forms part of initial preparation of data before statistical analysis.
The pattern of values obtained when a variable is measured in large number of individuals is called a distribution. ^[2] Distribution can be broadly classified as normal and non-normal. The normal
distribution is also called 'Gaussian distribution' as it was first described by K.F. Gauss. This is called normal distribution as most of the biological parameters (such as weight, height and blood
sugar) follow it. There are a very few biological parameters which do not follow normal distribution, for example antibody titre, number of episodes of diarrhoea, etc. The beginners should not be
confused with the term 'normal' as it does not necessarily imply clinical normality and there is nothing abnormal in the 'non-normal' distributions.
One of the assumptions of the statistical test used for testing hypothesis is that the data are samples from normal distribution.^[3] Hence it becomes essential to identify skewed/normal
distributions. There are some simple ways to detect skewness. ^[4]
• If the mean is less than twice the standard deviation, then the distribution is likely to be skewed.
• If the population follows normal distribution, then the mean and the standard deviation of the samples are independent. This fact can be used for detecting skewness. If the standard deviation
increases as the mean increases across groups from a population, then it is a skewed distribution.
Apart from these simple methods, normality can be verified by statistical tests like Kolmogorov - Smirnov test.
Once skewness is identified, every attempt should be made to convert it into a normal distribution, so that the robust parametric tests can be applied for analysis. This can be accomplished by
Transformations can also be done for the ease of comparison and interpretation. The classical example of a variable which is always reported after logarithmic transformation is the hydrogen ion
concentration (pH). Another example where transformation helps in the comparison of data is the logarithmic transformation of dose-response curve. When the dose-response relationship is plotted it is
curvilinear. When the same response is plotted against log dose (log dose-response plot) it gives an elongated S-shaped curve. The middle portion of this curve is a straight line and comparing two
straight lines (by measuring their slope) is easier than comparing two curves. Hence transformation can assist in the comparison of data.
In a nutshell, transformation can be carried out to make the data follow normal distribution or at times for ease of interpretation/comparison.
Many a times, the transformation which makes the distribution normal also makes the variance equal. Even though there are many transformations like logarithm, square root, reciprocal, cube root,
square, the initial three are more commonly used. The following are the guidelines for the selection of a method of transformation. ^[5]
• If the standard deviation is proportional to the mean, the distribution is positively skewed and logarithmic transformation is the ideal one.
• If the variance is proportional to the mean, square root transformation is preferred. This happens more in case of variables which are measured as counts e.g., number of malignant cells in a
microscopic field, number of deaths from swine flu, etc.
• If the standard deviation is proportional to the mean squared, a reciprocal transformation can be performed. Reciprocal transformation is carried out for highly variable quantities such as serum
Among these three transformations, logarithmic transformation is commonly used as it is meaningful on back transformation (antilog). ^[3],[6]
A small cautionary note for the beginners performing transformation is that all calculations should be done in the transformed scale and back transformation should be done only at the end.
Many researchers think that transformation of data is 'data deceiving'. They are assured that transformation is a statistically approved method and it is universally valid.
While reporting the results, the summary statistics of the raw data should be mentioned. The transformation done should be clearly mentioned along with the reason for transformation. One should not
forget to mention that all the statistical analyses were carried out on the transformed data. ^[7] Finally the back transformation value (especially for 95% confidence interval) should also be
1. Manikandan S. Preparing to analyse data. J Pharmacol Pharmacother 2010;1:64-5.
2. Altman DG, Bland JM. Statistics notes: The normal distribution. BMJ 1995;310:298.
[PUBMED] [FULLTEXT]
3. Bland JM, Altman DG. The use of transformation when comparing two means. BMJ 1996;312:1153.
[PUBMED] [FULLTEXT]
4. Altman DG, Bland JM. Detecting skewness from summary information. BMJ 1996;313:1200.
[PUBMED] [FULLTEXT]
5. Bland JM, Altman DG. Transforming data. BMJ 1996;312:770.
[PUBMED] [FULLTEXT]
6. Bland JM, Altman DG. Transformations, means and confidence intervals. BMJ 1996;312:1079.
[PUBMED] [FULLTEXT]
7. Swinscow TD, Campbell MJ. Statistics at square one. 10 ^th ed. (Indian). New Delhi: Viva Books Private limited; 2003.
This article has been cited by
1 When Ignorance is Bliss: Explicit Instruction and the Efficacy of CBM-A for Anxiety
Ben Grafton,Bundy Mackintosh,Tara Vujic,Colin MacLeod
Cognitive Therapy and Research. 2013;
2 Authoręs reply.
Manikandan S
J Pharmacol Pharmacother. 2011; 2(44): 45
|
{"url":"http://www.jpharmacol.com/article.asp?issn=0976-500X;year=2010;volume=1;issue=2;spage=126;epage=127;aulast=Manikandan","timestamp":"2014-04-18T08:44:50Z","content_type":null,"content_length":"47750","record_id":"<urn:uuid:47614b9c-48ef-4a47-b298-587542942b76>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lagunitas Precalculus Tutor
Find a Lagunitas Precalculus Tutor
With my unique background and education, I can work successfully with a wide variety of students who have a wide variety of "issues" with learning math. I understand where the problems are, and
how best to get past them and onto a confident path to math success. My undergraduate degree is in mathematics, and I have worked as a computer professional, as well as a math tutor.
20 Subjects: including precalculus, calculus, Fortran, Pascal
...I've used the concepts during my years as a programmer and have tutored many students in the subject. I have a strong background in linear algebra and differential equations. I have a Masters
in mathematics and a PhD in economics which requires a good understanding of both topics.
49 Subjects: including precalculus, calculus, physics, geometry
...In addition to teaching full-time 7th and 8th grade Life Science and Physical Science, I supplement my far-insufficient salary by working evenings and weekends as the Circulation Manager for
Inquiring Mind Magazine. I manage their mailing list of 7,400 individual subscribers, plus group and inte...
43 Subjects: including precalculus, Spanish, geometry, chemistry
...Maybe you hate it, are a little afraid or just need some guidance. I'd love to solve some problems with you. Precalculus is a critical class for many majors.
28 Subjects: including precalculus, reading, chemistry, biology
...It's an important one, too, since it teaches all the necessary foundations to move on to precalculus, so I feel it's important to leave no detail out when learning it. I have extensive
experience tutoring it, because it's one of the most requested subjects. I have also worked as an instructor of this class at a community college.
32 Subjects: including precalculus, chemistry, Spanish, calculus
|
{"url":"http://www.purplemath.com/Lagunitas_precalculus_tutors.php","timestamp":"2014-04-19T23:57:21Z","content_type":null,"content_length":"24089","record_id":"<urn:uuid:dcbf4d26-0980-4556-a32d-e2bf82cd7fba>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/sydneyyy/asked","timestamp":"2014-04-17T21:25:33Z","content_type":null,"content_length":"118731","record_id":"<urn:uuid:f9206334-bfde-4d6c-bf81-e708d39b6dea>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On Becoming a Math Whiz: My Advice to a New MIT Student
April 28th, 2011 · 43 comments
Here’s how to become a math whiz:
Keep working on your problem set after you get stuck.
Don’t just sit and stare at it: think hard; until you’re exhausted; then come back the next day and try again. This will be uncomfortable, but that discomfort is the feeling of your brain stretching
to accommodate new abilities.
This advice came to mind recently when I received an e-mail from a high school senior. “Yesterday, I was accepted to MIT,” he began. “I’m ecstatic, but on the other hand, I’m a little nervous…I was
hoping you could give me some tips.”
I explained that I had been studying theoretical computer science and mathematics at a high-level for the past decade, much of it spent right here at MIT. Over these years, one conclusion has become
increasingly clear: the more hard focus you dedicate to a technical subject — be it computer science, chemistry, or physics — the better you get.
Junior graduate students think senior graduate students are smarter, but they’re not: they simply have more practice.
Senior graduate students think junior professors are smarter, but they’re not: they simply have more practice.
And so on.
When I arrived at Dartmouth, to name another example, I didn’t consider myself good at math. I had taken AB calculus during high school (not BC), and had scored a 4 on the AP exam (not a 5). By my
sophomore year of college, however, I had made a name for myself by snagging the highest grade out of 70 students in an advanced discrete mathematics class. What happened in between? A lot of hard
Eventually, this all becomes clear, but for an incoming freshman, it’s not intuitive. When you struggle with a calculus problem set while a classmate knocks it out in an hour, it’s easy to start to
thinking that you’re just not a “math person.”
But this isn’t about natural aptitude, it’s about practice. That other student has more practice. You can catch-up, but you have to put in the hours, which brings me back to my original advice: keep
working even after you get stuck.
That’s where you make up ground.
43 thoughts on “On Becoming a Math Whiz: My Advice to a New MIT Student”
1. Solving math problems and applied math problems like physics, engineering, etc is all about applying the right formulas / theorems. When you are stuck, it’s usually because you either a) don’t
understand the problem or b) aren’t applying the knowledge that was taught in class or in your textbook correctly. The best thing to do when you are stuck then is to GET HELP.
My advice to students who want to succeed at MIT is start your problem sets early. Don’t wait until the night before it’s due. If you get stuck on one problem, start working on the next one.
Sometimes working on another problem will help you realize what you were doing wrong on an earlier problem. (The same thing applies to doing well on tests.) At some point, you will really be
stuck. This happens to everyone! Then the best thing to do is to go to the professor’s teaching assistant’s office hours or even call them on the phone to get help. They can walk you through the
problem and tell you why you are getting stuck or what you are doing wrong.
2. The interested reader my search for my blog post on “solving hard problem sets without staying up all night”: it details how I used to tackle problem sets. It’s quite similar to what EricH says
3. I hate to be cliché, but didn’t Einstein say, “It’s not that I’m so smart, I just stick with problems longer?”
Have confidence that your brain can expand to meet the intellectual demand placed upon it. It might not happen immediately, but it will happen. I have found this concept useful for more than just
math – it’s especially helpful when trying to grasp a foreign concept in a foreign language.
4. ?Solving math problems and applied math problems like physics, engineering, etc is all about applying the right formulas / theorems.”
5. I’d like to hear your definition of what a technical subject is. It seems by your wording that certain things might not become easier or achiveable through deliberate practice/hard focus. What
subjects or areas would you include into technical subjects?
6. EricH, in grad school you ARE the TA. The article is about how to learn things others cannot teach you.
7. I realized how to study Math after getting my Sc.B. in Mathematics at Brown, so somewhat belatedly.
Here is my additional advice for how to approach a mathematics or theoretical computer science course at the undergrad / masters level: Most of these courses are in generally ‘solved’ arenas.
They are typically structured around a major result or group of results.
Almost no professors will explain to you when you begin study of a semester-long course that you will, as a class, be driving towards a specific result — derivatives, integrals, central limit
theorem, or what-have-you.
If you KNOW where you’re going, a mathematics class becomes much easier. There’s a simple way to figure out where you’re going — the first week of your math class, read your math book like a
novel. Skip the boring bits, read the bolded bits, don’t bother to do many problems, although you should read some at the end of each chapter. Make sure you get to the punchline. Thoroughly read
the punchline / ‘result’ of the book or course.
Now, at the pace of class, or ideally, since you are following the above advice and working harder, a little more quickly, go through the book thoroughly, line by line. Own the material between
the start and finish. For most people, knowing where exactly they’re heading makes a huge difference in comprehension.
I muscled through my math classes, frequently content to not understand the point until after the mid-term, but that didn’t need to be — I owned the books, and could have skimmed them for the
main points quite quickly. Combined with focused thought and hard work, this combination will put you ahead of the vast majority of classmates.
9. I recommend a solutions manual.
That sounds ludicrous–after all, where’s the challenge if you have the answers? However, the solutions presented are often cryptic and hard to understand without knowing the material. After
working for a while, applying the best methods you know, and getting stuck, viewing the solutions can be eye-opening.
Sometimes, even the solutions appear to be nonsense. That’s when you turn to the book. At that point, you’ve discovered a hole in your knowledge, and you’re much more curious about trigonometric
integrals–or whatever–than you were before.
If you’re doing this before the deadline and you’re still confused, then you’re ready to ask a professor questions.
10. Well, I agree.
In physics and mathematics especially, the so-called physical intuition and mathematical maturity are results from spending time with concepts and problems.
Those geniuses are usually people who spend more time exploring those physics and mathematics in their high school. Some people, like me, are supposed to catch up.
I am a Physics major btw.
11. Disclaimer: I completely agree with your main point. Just this week, went after an algorithm 4 different times on a whiteboard and finally got the efficiency I needed on the last pass. Can’t
describe how satisfying that was.
That said, I did chuckle at this:
“When I arrived at Dartmouth, to name another example, I didn’t consider myself good at math. I had taken AB calculus during high school (not BC), and had scored a 4 on the AP exam (not a 5). By
my sophomore year of college, however, I had made a name for myself by snagging the highest grade out of 70 students in an advanced discrete mathematics class. What happened in between? A lot of
hard focus.”
I think this particular paragraph is a stretch. By definition, “discrete” math is completely unrelated to calculus. So maybe you’re just better at one as opposed to the other. Perhaps a little
cause vs. correlation on that one
12. I think it was Hemmingway who suggested only stopping work when things were going well not when you’re stuck. I like to put in the hours but give up when the going is good, that way the rest of
the day is not clouded in frustration and returning to study is a pleasure, not something ominous. By the way, this theory works about 50% of the time, when you’re stuck and the clock runs out on
your day … nothing you can do.
13. Great post.
When I was in high school, I remember that physics was all about choosing the right equation to the problem. Even if you’ve memorized ALL the laws and equations in physics, you might end up not
knowing how to solve a simple problem.
So, yeah – practice is the key.
14. as a person with ADD – how do i hard focus? close to dropping out of school because my grades are that bad. college is not for me :<?
15. Hello,
I have a kind of offtopic question: I’m studying liberal arts, but keep struggling with the linguistics and grammar courses, where I have to determine constituents of a sentence and so on.
Does anyone has any tips for me?
16. @a: If you have ADD. Treat it. Psychostimulants are first-line treatment, and can dramatically improve your quality of life. Organizing your stuff is a major part of it too. Good luck.
@StudyHacks: Another great article, and so true.
17. I know this is very similar, but do you have any tips on how to master accounting? I cannot seem to get a grasp over this subject even though I did very well in calculus during high school
18. Malcolm Gladwell makes a the same point in Outliers. I think most students in the United States consider a problem hard if they can’t figure it out within 30 seconds and approximately 3 minutes.
The ultimate result though is that if you stay with things longer you tend to figure more things out.
20. For Lila,
If figuring out the constituents of sentences causes trouble, the cure is learning to diagram sentences. I just checked Amazon, and there are half a dozen books that claim to teach you how to do
the diagrams. Would love to hear a report back on how this works.
21. I feel that working math contest problems is sort of like deliberate practice for getting the general gist of problem-solving down. I know for sure that working through the Art of Problem
Solving’s contest books has been a huge help to my problem solving skills. And no, I don’t work for them. But the books are really really good.
22. Don’t forget, the point of jerking yourself off with Calculus in Uni is not because there’s a practical use, but because you’re trying to get smarter; it’s supposed to be a challenge. Actually
being ‘good’ at this stuff, or a savant, is not much of a goal. The point is to give you a whole other way of seeing the world. Like Damon in ‘Good Will Hunting’ ‘geniuses’ don’t have that other
way of seeing the world, they just have one. That’s not the goal here. The goal is to broaden the mind. Good luck.
What subjects or areas would you include into technical subjects?
My usual answer is anything with formulas. I don’t think, however, deliberate practice is limited to just technical subjects.
I think this particular paragraph is a stretch. By definition, “discrete” math is completely unrelated to calculus.
I think you’re reading a little too closely. The point in that example is that I left high school not thinking of myself as “a math person,” but by my sophomore year realized that I was perfectly
capable at doing math — it just came down to practice.
(I also aced my multivariate calculus exam the quarter before I took discrete math, if you like that example better
as a person with ADD – how do i hard focus? close to dropping out of school because my grades are that bad. college is not for me
There are specific approaches to school work that I have seen work well for ADD. Ask your doctor about this. What is going to work for you will probably seem different than the type of advice I
give here on Study Hacks.
24. All the kids in my honors chem class seem to understand things a lot quicker when it is explained in class, and then they don’t seem to forget it. For me, it takes 4 or 5 tries to understand it.
No matter how hard I study, I don’t get above an 80 on tests, mainly because they are not regurgitation of the homework we have been doing, but instead are new concepts that we have to apply the
old material to. I understand the old concepts, but when I am presented with a tricky or different problem where I have to use the concept in a different way, I get stuck and don’t know what to
do. So how would I improve?
26. something you might like to look at cal, its about one mans attempt to prove /disprove the 10,000hrs required to be an expert at something
28. in this post you talk about how people observe the behavior of a person (like a senior graduate student) and infer something about their character (that they’re smarter than you) rather than
attributing it to the situation (that they have more practice with the material than you). people explain the behavior in terms of disposition/character even though the behavior is actually
better explained by the situational factors. from reading your previous posts, it seems like you like social psychology, and social psych has a name for the phenomenon that you’re talking about
here: the fundamental attributional error. it’s well documented that instead of the situational explanation people tend to explain people’s behavior in terms of disposition/character,
particularly in individualistic cultures (i.e., western). collectivistic cultures (i.e., eastern) tend to attribute behavior more to situational factors.
fun fact. thought you might be interested. if so, there’s more here: http://en.wikipedia.org/wiki/Fundamental_attribution_error.
There are specific approaches to school work that I have seen work well for ADD.
Care too share them please o.O? I’m in a country that’s highly skeptical about ADD and has banned stimulants like Adderall and Ritalin. Mental health is also not covered in my health subsidies :
(. I know i’m not lying and I’m not lazy but I have all this energy and sitting still and applying is next to impossible.
i am looking into meditation but that too has been challenging to do for more than 5 minutes at this point in time and i don’t think it’s going to help in time for my finals next week.
i’m sorry to bother you guys again :(. i actually wish i lived in the US where they over prescribe and diagnose ADD/ADHD vs a place where doctors say that it is a children’s problem and should be
treated before 15.
30. How do you recommend “sticking with it” on problems that require some sort of leap of logic? I’m thinking of math problems that require you to “assume a solution of form f(x)”, but you have no
idea what f might be. Inductive proofs can fall into this realm, since you often have to know the form of the answer before you begin. How do you “stick with it” when you have absolutely no idea
where to go next?
33. @Sean
Sean Says:
April 30th, 2011 at 5:35 am
I know this is very similar, but do you have any tips on how to master accounting? I cannot seem to get a grasp over this subject even though I did very well in calculus during high school
Which accounting course are you in (Financial, Managerial, Intermediate 1, 2 etc.)? Also, how do you study the material? I have a 4.0 going into senior year (so I’m through with the courses that
tackle basic/intermediate accounting concepts), and I’ve found that reading the chapters and making sure that I actually understand the REASONS behind why certain things are done make the classes
much easier to tackle. Accounting is not difficult, but it does take a tremendous amount of time/work in order to get a firm grasp of the subject matter. This past year, while taking Intermediate
1 and 2, I found myself spending 3-5 hours reading 30 page, very dense chapters(not all at the same time, though. I typically would spend an hour or two each day reading only about 10 pages to
avoid fatigue). Then, right before the test I would look back and realize that the entire chapter basically centered around one or two very simple concepts. However, my peers that neglected to
read the chapters (and instead opted to read the very brief 3-4 page study guide chapters) always performed poorly on the tests. I guess the moral is that it doesn’t seem possible to take
shortcuts with accounting; it is going to take some serious study time. I also would recommend reading Cal’s yellow book, as I feel that it has been a tremendous help in allowing me to maintain
good grades.
34. I’d second this. I’ve had a lot of academic success and I’m really not a genius. I accredit most of it to serious, in-depth reflection. When I don’t really understand something, I go over and
over it (in a sort of compulsive-obsessive way) until the concept it crystal clear in my head. I think I’ve come to understand a lot of technical concepts even better than experts, just because
of the mental clarity gained from obsessively thinking and rethinking a concept.
35. Nice article! I used to think that the students at the top of my classes were just smarter than me. After my math degree, I decided to switch to stats for my masters and had to enroll in some 3rd
year stats courses to prepare. I suddenly found that I was the one at the top of the class, probably because I had been at University for 2 years longer than the other students. Math skills are
definitely something that you have to earn.
36. this reminds me of russell crow in “a beautiful mind”.
Let me mention here that being exceptionally good with numbers in my opinion is a genetic trait,its something that is inborn,and cannot be cultivated by practicing a hundred problems a day,sure
that will sharpen your mind,but only so..
37. In response to EricH, for the most part he is totally right, except his one statement: “The best thing to do when you are stuck then is to GET HELP.” While this is [i]almost[\i] the best thing
you can do, the BEST thing you can do is exactly what Cal Newport said, keep working on it! I’m a physics student at a liberal arts college in Iowa that is renowned for its physics program around
the US for its research (I even have a paper that should be out soon that is in collaboration with MIT Draper Lab), and our professors work us hard. We see MIT problems or variations on them on
our tests frequently. Do exactly what Cal Newport says, when you get stuck, KEEP WORKING. Nothing is better. In math (I am also a math student and have gotten a solid A in every math class here),
so in upper end math, the problem is always that you don’t understand a theorem like you should. And the only two ways to figure it out are 1) to ask someone who already knows it, or 2) to work
by yourself or with a group to figure it out. And if you do #2, you will be much better off because when that test comes, and there is a new theorem on it that you have never seen but are going
to have to use, you have to be able to understand what it really means. And the best way to do that is through practice. I’ve been reading this blog for almost 3 years now, and it is INCREDIBLE
stuff in many cases, I just had to learn to fit it to my needs and desires. I promise that Cal Newport is right here, just keep working until you are completely, 100% stuck, and then you go right
ahead and ask someone. And often it is beneficial to ask your professor something even after you believe you’ve gotten it right, because they have great input that you would never have thought
of. This works, wonderfully well.
38. Oh yeah, and there’s nothing that will help you more than starting early
40. I completely agree with this. In my program, many people go to a tutor every time they get stuck and ask for the solution. While there is a time and a place for tutoring, getting stuck on a
problem is a great blessing: because you’ve reached your limit, you can now expand it. Nothing can make you more intelligent besides yourself. The only way it can happen (as you talked about) is
through hard focus.
Professors can tell the difference between someone who does this and someone who doesn’t. That’s why I get professor recommendations for anything I ask for when others don’t.
41. Dear Cal,
THANK YOU SO MUCH. I’ve been looking for an example of someone who DID NOT have as “strong” of a math background as his/her peers upon entering college, but who still excelled. I’ll take your
advice and aim for my math major, no matter the cost. Thank you thank you thank you.
Grateful potential econ/math double major.
43. true, a 4 in ab calculus is not a great record in math, but i’m sure you had a perfect or near perfect math SAT. please do correct me if i’m wrong here.
|
{"url":"http://calnewport.com/blog/2011/04/28/on-becoming-a-math-whiz-my-advice-to-a-new-mit-student/","timestamp":"2014-04-16T05:56:43Z","content_type":null,"content_length":"85255","record_id":"<urn:uuid:4623a90e-1d9a-467a-bdee-aa85e4bf3a93>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weekly Challenge 27: Cubic Roots
Copyright © University of Cambridge. All rights reserved.
'Weekly Challenge 27: Cubic Roots' printed from http://nrich.maths.org/
A polynomial $f(x)$ has a factor $(x-a)$ if and only if $f(a)=0$.
Thus, a polynomial cutting the $x$-axis at $10, 100, 1000$ has factors $(x-10)(x-100)(x-1000)$. This defines a cubic polynomial up to a multiplicative factor.
$$f(x) = A(x-10)(x-100)(x-1000) = A\left(x^3 -(10+100+1000)x^2 + .. \right)\,,$$
for some constant $A$.
Now, a point of inflection necessarily has $f''(x) = 0$. Only the $x^3$ and $x^2$ terms of a cubic polynomial contributes to its second derivative, so there is no need to expand the polynomial in
full to see that
f''(x) = 6Ax-2220A
This is zero at the single point $x = \frac{2220}{6} = 370$.
Therefore the point of inflection for the cubic is at $x=370$, regardless of the choice of $A$.
For the second part, the polynomial must take the form
f(x) = A(x-10)(x-100)(x-a)\quad \mbox{for a constant } a \mbox{ where} \quad f''(a)=0
So, we need to take the second derivative to work out the constraints on $a$. I will keep the form of the factors and use the chain rule to make life simple, although you could expand the brackets
first if you wish
f''(x) = 2A\left((x-10)+(x-100)+(x-a)\right)
f''(a) = 2A(2a-110)=0
Since $A$ cannot be zero for a cubic polynomial, we must have $a=55$.
The polynomial must therefore be of the form
f(x) = A(x-10)(x-55)(x-100)
Alternative, quick, method for second part:
From the first part of the question I noticed a generalisation that the point of inflection of a cubic is found at one third of the sum of the roots $r_1+r_2+r_3$. If one of the roots is the point
of inflection then
r_1+a+r_2 = 3a
Thus, the point of inflection which is a root is found at one-half of the sum of the other two roots.
Thus, in our special case,
a = \frac{1}{2}(10+100) = 55, \mbox{ as before}.
Isn't maths great!
|
{"url":"http://nrich.maths.org/7066/solution?nomenu=1","timestamp":"2014-04-18T06:39:17Z","content_type":null,"content_length":"5077","record_id":"<urn:uuid:6352d267-d22b-4db5-bbbc-bfdf165ff817>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-Dev] scipy.signal.butter returns complex coefficients
Eric Moore ewm@redtetrahedron....
Wed Feb 20 17:58:55 CST 2013
Pierre Haessig wrote:
> Hello,
> Just a quick question about filter design using scipy.signal.butter :
> (using scipy '0.10.1')
> >>> import scipy.signal as sig
> >>> B,A = sig.butter(N=4, Wn=1, analog=1) # 4rth order low-pass filter
> with cutoff at 1 rad/s
> >>> A,B
> (array([ 1.00000000 +0.00000000e+00j, 2.61312593 -1.11022302e-16j,
> 3.41421356 +0.00000000e+00j, 2.61312593 -1.11022302e-16j,
> 1.00000000 -1.66533454e-16j]),
> array([ 1.+0.j]))
> I was surprised by the result since I was expecting real numbers and not
> complex numbers. And indeed the imaginary part is zero with respect to
> floating point precision. For comparison, bessel function return real
> coefficients.
> So my question is : is the return of complex coefficients the expected
> behavior ? or should I place an issue on Trac ?
> best,
> Pierre
> (Maybe I didn't use the proper keywords because a quick Google search
> didn't give me preexisting discussion on this topic)
I'd say that no it isn't expected. What is happening here is that the
code in np.poly which constructs a polynomial from its zeros tries to
detect that you have pairs of complex conjugate roots and then return a
real array. That code is failing in this particular case. (The code in
question is lines 137-145 of numpy/lib/polynomial.py:
I'm not sure that the correct fix would be, perhaps signal.buttap should
calculate the poles by calculating half of them, and then conjugating.
Since I'd bet that the ultimate cause is that we aren't getting exact
conjugates from the complex exponential.
More information about the SciPy-Dev mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2013-February/018378.html","timestamp":"2014-04-16T22:42:04Z","content_type":null,"content_length":"4668","record_id":"<urn:uuid:ae082f53-2537-4f2d-8611-5c9705387410>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Solve the matrix : |1 2 1| 0 | |0 1 0|-2| |0 0 1| 3 |
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/509d310ce4b0ac7e51942eb0","timestamp":"2014-04-18T18:46:44Z","content_type":null,"content_length":"98730","record_id":"<urn:uuid:3f57c564-f026-4b18-a5a9-8b7457820bdf>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
4.5 The Soft Touch (finding tangent lines)
Section 4: Derivatives
© 1997 by Karl Hahn
4.5 The Soft Touch (finding tangent lines)
You've probably already seen it in your homework problems -- "find the equation of the line that is tangent to bloppity-blop curve at the such-and-such point." And if you haven't yet, you soon shall.
It will be on the exam as well.
A line that is tangent to a curve has two properties:
1. The line shares a point with the curve in question.
2. At the shared point, the derivative of the curve is equal to the slope of the line.
If you solve for those two properties, you have completed the problem.
The equation of a line is, as you learned in algebra
y = mx + b eq. 4.5-1
So all you have to do is find
Let's take the second property first. The slope of the line is m. That value has to match the derivative of the curve at the point they give you. So for finding m you need to know the derivative of
the curve. And you need to evaluate it at the point they give in the problem. And that value is m.
Then take the first property. You need to find the b that makes that first property true. And you now know what m is equal to. If, for example, the point they want you to be tangent to is (2, 3),
then simply take y = mx + b, put in 2 for x, 3 for y, whatever you came up with for m, and solve for b. It's that easy.
Let's run through an example. Let the curve be
y(x) = x^2 + 1 eq. 4.5-2
Usually it's understood that
is a function of
, so very often this would be written as
y = x^2 + 1
. It means exactly the same thing.
Let's find the tangent at x = 2. So what point is that? We know the x-coordinate of the point. To find the y-coordinate, simply use the equation of the curve (given in 4.5-2). That gives us y = 5, so
the point that we want our line to be tangent at is (2, 5).
Step 1: Find the derivative of the curve. The equation of the curve is given in 4.5-2. You know how to take its derivative. It's
y'(x) = 2x eq. 4.5-3
Step 2: Evaluate the derivative at the point given. The problem says do it at x = 2. The derivative at that x is y'(2) = 4. That is your m. Write it down. m = 4.
Step 3: Solve for b. Remember, y = mx + b. We know what x is, what y is, and what m is. If you plug them all in, you get
5 = 4×2 + b eq. 4.5-4a
And it's trivial algebra to go from there to
b = -3 eq. 4.5-4b
Step 4: Write the equation. You know m and b now. Simply substitute them into y = mx + b. You get
y = 4x - 3 eq. 4.5-5
That's it. We're done. Once you know how to take the derivative, these problems are easy. Just follow the four steps here.
Figure 4.5-1 shows a graph of this problem. Observe how the tangent line just kisses the curve at (2, 5). The angle they meet at is, in fact, zero. Plotting the curve and the tangent line is one
way you can check your work to see if your answer is right. Another way is to substitute the x given in the problem (in this case 2) into the equation for your line and see if you get back the same
y (in this case 5) as you get by substituting that x into the equation for the curve. Indeed, in this problem, x^2 + 1 = 4x - 3 = 5 when x = 2. In addition, you should have that the slope of the
line equals y'(x) = 2x = y'(2) = 4 for this problem. But that test can hardly fail, since that's how you chose the slope of the line in the first place.
Here's a slightly more complicated variant on the same problem. Find the two lines that are tangent to
y = x^2 - 2x + 1
and pass through the point,
(5, 7)
. Observe that in this case, the point,
(5, 7)
, does not lie on the curve. So you are finding lines that pass through a point outside the curve but at some other points, which are as yet unknown, they are tangent to the curve. The strategy is to
identify those points of tangency. From them, it is easy to find the lines that solve the problem.
Step 1: Find the derivative of the curve. In this case we have
y' = 2x - 2
We do this because we know that at the point of tangency, the derivative of the curve must be equal to the slope of the tangent line. So we need to know this derivative.
Step 2: Write as much of the equation of the line as you can from what you know. The slope, m, of the line is still unknown. But you know what point it must pass through. In this case that is the
point (5,7). So using the formula for the equation of a line passing through a given point with a given slope, you have:
y - 7 = m(x - 5)
Step 3: Use the derivative to write an equation for the slope. We don't know the slope of the line yet, but we know that if the point, (x,y), is the point of tangency, then the slope of the line will
be equal to the derivative of the original curve at x. The derivative we already determined to be y' = 2x - 2. So we write the equation
m = y' = 2x - 2
Step 4: Substitute the slope expression into the equation for the line. You have m = 2x - 2 and y - 7 = m(x - 5). Put them both together by substituting for m in the second equation.
y - 7 = (2x - 2)(x - 5) = 2x^2 - 12x + 10
y = 2x^2 - 12x + 17
Step 5: Substitute the equation for the original curve back in. That is, you know that y = x^2 - 2x + 1, where (x,y) is the point of tangency. Why? Because the point of tangency must lie on the
curve. Substituting that expression for y into the above gives:
x^2 - 2x + 1 = 2x^2 - 12x + 17
Step 6: Gather like terms and solve for x. Using simple algebra the above equation becomes the quadratic
x^2 - 10x + 16 = 0
You can either use the quadratic formula, or you can factor this one in your head to get
(x - 2)(x - 8) = 0
So the
coordinate of the the point of tangency is either
x = 2
x = 8
Step 7: Substitute back to get m. That is, we have already seen that m = 2x - 2 at any point, (x, y), of tangency. Since we now know what both the x's are for the points of tangency, we can put those
values into the equation for m and get that either m = 2 or m = 14. From the equation we had in step 2,
y - 7 = 2(x - 5)
y - 7 = 14(x - 5)
When you multiply those out and put them into slope-intercept form, the equations of the two lines that are tangent to y = x^2 - 2x + 1 and pass through the point, (5, 7), are
y = 2x - 3
y = 14x - 63
and you are done. The curve and the two lines are illustrated in the graph on the right here. Notice where the two lines intersect. Both points of tangency are visible on the graph, although one is
very nearly at the top of the graph.
Click here to see an alternative attack on this same problem.
Finding the Normal Line
This is one they might throw at you just to keep you on your toes. Instead of asking for the equation of the tangent line, they'll ask you for the equation of the line that is normal to bloppity-blop
curve at such-and-such a point.
How much do you remember from algebra? Do you remember that if two lines are normal to each other (that is, they are at right angles to each other), then their slopes are negative reciprocals of each
other? In other words, if one has a slope of m, then the other must have slope of -1/m. Can you extend this definition to include a line being normal to a curve? It's just a small variation of the
rules we had for tangent lines.
1. The line shares a point with the curve in question.
2. At the shared point, derivative of the curve is the negative reciprocal of the slope of the line.
Let's take our example from before where the curve is y = x^2 + 1 and we want the normal line to pass through (2, 5). Step 1 is the same as before. It is still the case that y' = 2x. In step 2,
though, after we evaluate y'(x) at x = 2 and find that it's equal to 4, instead of setting m to that, we set m to its negative reciprocal, which is -1/4.
The remaining steps are the same. Solve for b with the x and y coordinates you have and the m you have just determined. In this case we have:
5 = (-1/4)×2 + b eq. 4.5-6a
and solving for b, we have
b = 5.5 eq. 4.5-6b
So the answer here is
y = (-1/4)x + 5.5 eq. 4.5-7
Figure 4.5-2 shows a plot of our curve and the line that is normal to it at x = 2. Observe that the normal line crosses our curve in two places. At (2, 5) is the crossing at right angles that we
expected to get. To the left there is another crossing at a point we have not determined yet. On a homework problem or on a test, you may be asked to identify this second crossing. But that is just
an algebra problem.
Recall that to find where the plots of two functions cross you simply take the difference of the two functions and solve for the x that makes the difference be zero. Our two functions are y = x^2 +
1 and y = (-1/4)x + 5.5. Write down the expression for the difference between them, and then scroll down.
You should either have for your difference
x^2 + (1/4)x - 4.5 eq. 4.5-7a
or the negative of that. The prescription then is to set that equal to zero and solve for
x^2 + (1/4)x - 4.5 = 0 eq. 4.5-7b
This is without a doubt a quadratic polynomial and we have the
quadratic formula
with which to solve it. We get
-1/4 ± √1/16 + 18
x = [S: :S] eq. 4.5-8
If you take the plus of the
, you get
x = 2
. And one of the solutions
had better be x = 2
, since we already determined that to be the right-hand intersection point. If you take the minus of the
, you get
x = -9/4 = -2.25
. Plug that back into the original equation for the curve and you get
y = 97/16 = 6.0625
. So the left-hand intersection is at
(-2.25, 6.0625)
. You can look at the graph to confirm this.
Finding Lines that are Simultaneously Tangent to Two Functions
The example is to find the equation(s) of the line or lines that are simultaneously tangent to f(x) = x^2 + 4 and g(x) = -(x - 1)^2. See step-by-step solution.
y(x) = √1 - x^2
The graph of this is shown in the figure to the right. You probably recall from algebra that this function has a graph that is a semicircle. Find the equations of both the tangent line and the normal
line at
x = 0.8
. Then, without appealing to the geometry of semicircles, but using only derivatives and algebra, show that every line normal to this function goes through the origin no matter what point on the
curve it passes through.
Step 1: Find the derivative of the function. You will need it for every step after this one. Notice that this function is a composite of y(x) = f(g(x)), where
f(x) = √x
g(x) = 1 - x^2
. Because it is a composite, you will need to apply the
chain rule
in order to find
. At this point I am not going to give away this derivative -- you should be able to do it yourself. I will remind you that if
f(x) = √x
f'(x) = [S: :S]
which is useful to know in finding the derivative of
Step 2: Where is the point of tangency? Simply substitute the x given in the problem into the function to get a value for y(x). You shouldn't have to think too hard, because I gave it away in the
Step 3: What is the slope of the tangent line? of the normal line? Remember that the slope of the tangent equals the derivative of the function at the point of tangency. So stick the value given for
x into the equation you have for y'(x) to find that value. That is the m for the tangent line. It's negative reciprocal is the m for the normal line.
Step 4: Solve for b in both cases, that is the b for the tangent line and the b for the normal line. Remember that you now know the m's for both of them as well as an x and corresponding y. Just
substitute them into y = mx + b and solve for b.
That completes the problem for both finding the tangent and the normal. So write your y = mx + b equations for each, substituting the values you found for m and b into each. Did you get b = 0 for the
normal line? If you didn't you made a mistake. If you did, you are ready to attack the second part of the problem. Show that when you compute the normal equation for any point on this curve, the b is
always zero.
Simply write y = mx + b, but substitute the expression for negative reciprocal of slope in for m and substitute the expression given in the problem for y(x) in for y. Take all the cancellations you
can. You should end up with b = 0 standing all alone.
Move on to Hilltops and Valley Floors
You can email me by clicking this button:
|
{"url":"http://www.karlscalculus.org/calc4_5.html","timestamp":"2014-04-17T07:20:16Z","content_type":null,"content_length":"19737","record_id":"<urn:uuid:a092847e-7aba-4dd1-b060-3e8f912d003d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
|
erstein 3rd
Beginning Algebra, 11th Edition Margaret L. Lial, John Hornsby, Terry McGinnis College Algebra, 8th Edition Ron Larson, Robert P. Hostetler Algebra: Introductory and ...
Detailed Syllabus of M.A./M.Sc. Semester- I (Mathematics)
Detailed Syllabus of M.A./M.Sc. Semester- I (Mathematics) MN 101 : Algebra - I I] Groups : 1. Sylows Theorems. P Sylow subgroups. Structure Theorem for finite abelian groups.
Linear Abstract Algebra [Archive] - Physics Forums
View Full Version : Linear Abstract Algebra. [SOLVED] Group Theory For Dummies; Algorithm for matrix inversion Bohemian Intellectual Mathematics and Occultism
Math 552: Modern Algebra II-Spring 2008
Math 552: Modern Algebra II-Spring 2008 Instructor: Lus Finotti Oce: Ayres Hall 212-D Phone: 974-1321 (please do not ask me to callback-leave youre-mail) e-mail: finotti ...
Ordinance No. 19: Admission of Candidates to Degrees. Ordinance No. 109: Recording of a change of name of a University student in the records of the University.
do c s .think fre e .co m
Student Plus Solutions Manuals Test Banks. SEPTEMBER, 2010 UPDATE. We would like to inform you that we have updated our list with thousands of new titles.
Algebra - Wikipedia, the free encyclopedia
Algebra is the branch of mathematics concerning the study of the rules of operations and relations, and the constructions and concepts arising from them, including ...
Amazon.com: Abstract Algebra, 3rd Edition (9780471433347): David S ...
This text is designed to give students insight into the main themes in abstract algebra. Early introduction to recurring notions such as homomorphisms, isomorphisms ...
M.A./M.Sc. Mathematics - Part - I (Sem. I)
M.A./M.Sc. Mathematics - Part - I (Sem. I) 1] Paper No. NMT 101 2] Title of the Paper - Algebra - I 3] Objectives: To study Group Theory, Ring Theory and to introduce the ...
Math 351: Algebra I{Fall 2009
Math 351: Algebra I{Fall 2009 Instructor: Lu sFinotti Oce: Aconda Court 211-H Phone: 974-1321 (please do not ask me to callback{leave youre-mail) e-mail: finotti@math.utk.edu ...
Abstract Algebra Theory and Applications
Abstract Algebra Theory and Applications Thomas W. Judson Stephen F. Austin State University February 14, 2009
Sitemap list for A
Sitemap list for A. The following are lists of documents, beginning with the letter alphabet A. a 106 gr b charpy; a 26p ch 5 reading guide key; a 5399subaru
BEACHY / BLAIR: ABSTRACT ALGEBRA - NIU Math Department
INTRODUCTION Some of the strengths of this undergraduate/graduate level textbook are the gentle introduction to proof in a concrete setting, the introduction of ...
ABSTRACT ALGEBRA ON LINE - NIU Math Department
Contains many of the definitions and theorems from the area of mathematics generally called abstract algebra. Intended for undergraduate students taking an abstract ...
Course Structure for Integrated MSc. in Mathematics
13. Differential Geometry 14. Optimization and calculus of variations 15. Advanced PDE 16. Advanced Probability and Stochastic Process 17. Algebraic Topology 18.
Aims and Objectives of the new curriculum
Aims and Objectives of the new curriculum o To maintain updated curriculum. o To take care of fast paced development in the knowledge of mathematics. o To meet the needs and ...
- 2 - M.A./M.Sc. ( Previous) Mathematics - Semester -I Paper -I- Advanced Abstract Algebra -I (Maximum Number of Periods: 60) Extension Fields, Roots of Polynomials, More about ...
online book stores, indias largest bookstore, NBC India.com
Nbcindia.com - Indias biggest book store - Online Shopping for Books .
P.G. I (MATHEMATICS) SYLLABUS
Paper : I Unit I : Abstract Algebra I Homomorphisms and Isomorphism . Cauchy theorem and p-grouops. Sylow Group and Theorems. Normal and subnormal series.
1 2 Syllabus Prescribed for M.A./M.Sc. Part-I Part-II Semester I to IV (Mathematics) M.A./M.Sc. Part-I (Mathematics) M.A./M.Sc. Part-I -Semester I : Compulsory Papers Paper-I ...
Instructor Solution Manual : Elementary Differential Equations and ...
Download Free eBook:Instructor Solution Manual : Elementary Differential Equations and Boundary Value Problems , 8th Edition - Free chm, pdf ebooks rapidshare ...
|
{"url":"http://www.cawnet.org/docid/herstein+3rd+edition+abstract+algebra+solutions/","timestamp":"2014-04-16T13:05:03Z","content_type":null,"content_length":"52694","record_id":"<urn:uuid:8629a6f1-2de2-4c42-889a-be345cd1b499>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Show that the Bisection Method converges to...
Ok, I thought that maybe I wasn't doing enough iterations. So to avoid hours of number crunching I wrote the bisection method in True Basic code and made it print out the results so I could graph it.
It worked for parts (a) and (b) because when I graph p vs n, p converges to 0 for part (a) and 2 for part (b). Strangely enough, when I do the same for part (c) it converges to 0 instead of 1. Can
anyone tell me why?
I did the same as before:
For part (a) I used a=- 0.75 and b= 2.5
part (b) a= -0.25 and b= 2.5
part (c) a=-0.5 and b=2.5
Here is the code and graphs of all 3 parts:
program Bisection
option nolet
input prompt "a: ":aa
input prompt "b: ":bb
input prompt "TOL: ":tol
print "Enter file name for saving data (enter it as filename.dat)"
input prompt "(AND make sure there isn't already a file with that name): ": file$
open #23: name file$, access output, create newold, org text
print "a", "b", "f*f", "p", "k"
print #23: "a", "b", "f*f", "p", "k"
do while kk<nn
print aa, bb, ff, pp, kk
print #23: aa, bb, ff, pp, kk
if ff<0 then
end if
print aa, bb, ff, pp, kk
print #23: aa, bb, ff, pp, kk
|
{"url":"http://www.physicsforums.com/showthread.php?t=154203","timestamp":"2014-04-20T14:18:41Z","content_type":null,"content_length":"37885","record_id":"<urn:uuid:995ffefc-4191-4ef4-aad6-19226a7e26b7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Characterization of Inefficiency in Stochastic Overlapping Generations Economies
Bloise, Gaetano and Calciano, Filippo L. (2007): A Characterization of Inefficiency in Stochastic Overlapping Generations Economies. Forthcoming in: Journal of Economic Theory (2008)
Download (238Kb) | Preview
In this paper, we provide a characterization of interim inefficiency in stochastic economies of overlapping generations under possibly sequentially incomplete markets. With respect to the established
body of results in the literature, we remove the hypothesis of two-period horizons, by considering longer, though uniformly bounded, horizons for generations. The characterization exploits a suitably
Modified Cass Criterion, grounded on the long-rung behavior of compounded safe interest rates and independent of the length of horizons of generations. Thus, the hypothesis of two-period horizons is
purely heuristic in establishing a criterion for inefficiency. In addition, for sequentially incomplete markets, we adopt a suitable notion of unambiguous inefficiency, separating the inefficient
intertemporal allocation of resources from incomplete risk-sharing. Unambiguous inefficiency reduces to inefficiency when markets are sequentially complete.
Item Type: MPRA Paper
Original A Characterization of Inefficiency in Stochastic Overlapping Generations Economies
Language: English
Subjects: D - Microeconomics > D6 - Welfare Economics > D61 - Allocative Efficiency; Cost-Benefit Analysis
D - Microeconomics > D5 - General Equilibrium and Disequilibrium > D52 - Incomplete Markets
Item ID: 8780
Depositing Filippo L. Calciano
Date 18. May 2008 04:40
Last 16. Feb 2013 09:45
S.R.Aiyagari and D. Peled. Dominant root characterization of Pareto optimality and the existence of optimal monetary equilibria in stochastic OLG models. Journal of Economic Theory,
54, 69-83, 1991.
\bibitem{albor} C.D.\ Aliprantis and K.C.\ Border. {\em Infinite Dymensional Analysis: A Hitchhiker's Guide}. Springer-Verlag, 1999.
\bibitem{bash80} Y.\ Balasko and K.\ Shell. The overlapping generations model, I: The case of pure exchange without money. {\it Journal of Economic Theory}, 23, 281-306, 1980.
\bibitem{bacash80}Y.\ Balasko, D.\ Cass and K.\ Shell. Existence of competitive equilibrium in a general overlapping-generations model. {\it Journal of Economic Theory}, 23, 307-322,
\bibitem{baka07} M.\ Barbie and A.\ Kaul. Pareto optimality and existence of monetary equilibria in a stochastic OLG model: A recursive approach. Mimeograph, March 6, 2007.
\bibitem{benveniste76} L.M.\ Benveniste. A complete characterization of efficiency for a general capital accumulation model. {\it Journal of Economic Theory}, 12, 325-337, 1976.
\bibitem{benveniste} L.M.\ Benveniste. Pricing optimal distributions to overlapping generations: A corollary to efficiency pricing. {\it Review of Economic Studies}, 53, 301-206, 1986.
\bibitem{bloise-pda} G.\ Bloise. Efficiency and prices in economies of overlapping generations. Working Paper n.\ 72, Department of Economics, University of Rome III, 2007.
\bibitem{bu87} J.L.\ Burke. Inactive transfers policy and efficiency in general overlapping generations economies. {\it Journal of Mathematical Economics}, 16, 201-222, 1987.
%\bibitem{bu88} J.L.\ Burke. On the existence of price equilibria in dynamic economies. {\it Journal of Economic Theory}, 44, 281-300, 1988.
%\bibitem{bu92} J.L.\ Burke. Incorporating overlapping-generations inefficiency and price bubbles into a general equilibrium framework. Mimeograph (preliminary and incomplete version),
June 22, 1992.
%\bibitem{bu95}J.L.\ Burke. Existence of a Pareto-optimal equilibrium in nearly-stationary overlapping-generations economies. {\it Economic Theory}, 5, 247-261, 1995.
%\bibitem{bu99} J.L.\ Burke. The robustness of optimal equilibrium %among overlapping generations. {\it Economic Theory}, 14, 311-329, %1999.
References: \bibitem{cass} D.\ Cass. On capital overaccumulation in the aggregative neoclassical model of economic growth: A complete characterization. {\it Journal of Economic Theory}, 4,
200-223, 1972.
\bibitem{chatto} S.\ Chattopadhyay. Long-lived assets, incomplete markets, and optimality. IVIE Working Papers, WP-AD 2001-10, April 2001.
\bibitem{chgo99} S.\ Chattopadhyay and P.\ Gottardi. Stochastic OLG models, market structure, and optimality. {\it Journal of Economic Theory}, 89, 21-67, 1999.
\bibitem{debreu}G.\ Debreu. The coefficient of resource utilization. {\it Econometrica}, 19, 273-292, 1951.
\bibitem{debreu-tv} G.\ Debreu. {\em Theory of Value}. New York: Wiley, 1959.
\bibitem{geapole} J.D.\ Geanakoplos and H.M.\ Polemarchakis. Overlapping generations. In W.\ Hildenbrand and H.\ Sonnenschein (eds.), {\it Handbook of Mathematical Economics}, vol.\
IV, New-York: North-Holland, 1891-1960, 1991.
%\bibitem{goku06} P.\ Gottardi and F.\ Kubler. Social security and %risk sharing. Mimeograph, June 14, 2006.
\bibitem{hesp06} E.\ Henriksen and S.\ Spear. Dynamic suboptimality of competitive equilibrium in multiperiod overlapping generations economies. Mimeograph, November 7, 2006.
\bibitem{mopi03}A.\ Molina-Abraldes and J.\ Pintos-Clap\'es. A complete characterization of Pareto optimality for general OLG economies. {\it Journal of Economic Theory}, 113, 235-252,
\bibitem{okzi80} M.\ Okuno and I.\ Zilcha. On the efficiency of a competitive equilibrium in infinite horizon monetary economies. {\it Review of Economic Studies}, 47, 797-807, 1980.
\bibitem{samuelson}P.A.\ Samuelson. An exact consumption-loan model of interest with or without the contrivance of money. {\em Journal of Political Economy}, 66, 467–482, 1958.
%\bibitem{risr88} S.F.\ Richard and S.\ Srivastava. Equilibrium %in economies with infinitely many consumers and infinitely many %commodities. {\it Journal of Mathematical Economics},
17, 9-21, %1988.
%\bibitem{schaefer1959} H.\ Schaefer. Some spectral properties of %positive linear operators. {\em Pacific Journal of Mathematics} %(???), ???, ???-???, 1959 (???). \end
URI: http://mpra.ub.uni-muenchen.de/id/eprint/8780
|
{"url":"http://mpra.ub.uni-muenchen.de/8780/","timestamp":"2014-04-19T13:11:47Z","content_type":null,"content_length":"27047","record_id":"<urn:uuid:b3370310-92cb-459f-b204-ac58d8fa69dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
poly- + (di)abolo, from the resemblance of a diabolo in cross-section to two right isosceles triangles joined at the vertices of their right angles
polyabolo (plural polyabolos or polyaboloes)
1. (geometry) A polyform made by joining right isosceles triangles edge to edge in various arrangements.
Last modified on 19 September 2013, at 15:34
|
{"url":"http://en.m.wiktionary.org/wiki/polyabolo","timestamp":"2014-04-21T02:43:18Z","content_type":null,"content_length":"16961","record_id":"<urn:uuid:0f6efe80-9611-4c40-a0da-7c5820f62721>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
vector help
June 23rd 2013, 04:16 AM #1
Junior Member
Aug 2011
vector help
Three vectors a, b and c are each 50m long and lie in the x-y place. Their directions relative to the positive x axis are 30 degree, 195 degree and 315 degree respectively. what are the magnitude
and angle of a + b + c, and of a - b + c. What are the magnitude and angle of a fourth vector d such that (a + b) - (c + d) = 0.
I took the angle between the two vectors a and b as 195 - 30 = 165 and tried to apply parallelogram law of addition for a and b. and c could be added to a + b.
Is the method correct? The problem is I am not able to do anything with the angle 165 degree.
How to do this problem. Any help?
Advance thanks
Re: vector help
Not sure that you can find the sum of the vectors using the parallelogram law (someone may correct me). I would find the x,y components of the vectors and then do simple vector addition. So for a
you have an angle of 30 and a hypotenuse of 50. Use plain old trig to get x and y. Do the same for the others.
Re: vector help
ok. For 30 degree, it would be easy. What about 195 degree and 315 degree?
Re: vector help
Well 195 gives you a triangle with an angle of 15 from the horizontal. You should be able to figure out the x,y components.
June 23rd 2013, 04:30 AM #2
Aug 2012
June 23rd 2013, 04:56 AM #3
Junior Member
Aug 2011
June 23rd 2013, 05:03 AM #4
Aug 2012
|
{"url":"http://mathhelpforum.com/algebra/220085-vector-help.html","timestamp":"2014-04-18T03:35:07Z","content_type":null,"content_length":"38143","record_id":"<urn:uuid:9665fb06-5443-479d-a651-00a1f4a4870d>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gilberts Math Tutor
Find a Gilberts Math Tutor
...I bring a diverse background to the tutoring sessions. I thoroughly enjoy tutoring ACT Math due to the diversity of subject matter. With directed practice, a student can definitely improve his/
her test results in a reasonable amount of time.
18 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel
...I analyze the student's learning style, ability, interests, and talents in order to individualize instruction to best fit each student's needs. I use pretests to determine student skills, and I
strive to never bore students by teaching them what they already know. I take full advantage of lesson time.
24 Subjects: including trigonometry, differential equations, discrete math, dyslexia
...I'd love to help! I have taught various levels of physics to public and private high school students for 20 years. On the AP Physics B exam, almost half of my students earn scores of 5, the
highest score possible, while most of the other half receive scores of 4!
2 Subjects: including algebra 1, physics
...I have taught in many after-school tutoring programs and homework helping programs, but I definitely know that sometimes a child just needs a private tutor in order for him/her to get caught
up, to understand the material, or to get ahead. I am a parent as well, and so I completely understand ho...
9 Subjects: including prealgebra, algebra 1, reading, grammar
...During my two and a half years of teaching high school, I have taught various levels of Algebra 1 and Algebra 2. I have a teaching certificate in high school mathematics issued by the South
Carolina State Department of Education. During my two and half years of teaching high school mathematics, I have had the opportunity to teach various levels of Prealgebra, Algebra 1 and Algebra 2.
12 Subjects: including algebra 1, algebra 2, calculus, geometry
|
{"url":"http://www.purplemath.com/gilberts_il_math_tutors.php","timestamp":"2014-04-18T08:42:04Z","content_type":null,"content_length":"23746","record_id":"<urn:uuid:fc3990c7-b01d-44f0-9293-e5b3cf5d52de>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Instructions for making the topologically fun Hearts can be found from Matt Parker or on the 360 Blog.
Topologically, the Möbius strip can be defined as [0,1] × [0,1] with its top and bottom sides identified under the relation (x, 0) ~ (1 - x, 1) for 0 = x = 1
The Möbius strip is a two-dimensional compact manifold with a single boundary. Cutting the Möbius strip along the center line creates one long strip with two full twists in it, and two edges.
While the topology of a Möbius strip sliced down the centre line isn’t as interesting (as it’s no
longer a Möbius strip then), the gesture still seems sound. The Möbius based hearts are a nice
addition to any Valentine's Day.
There is also the Sierpinski themed option, with instructions available on the 360 Blog as well.
Based off of the Sierpinski triangle, the pop-up Sierpinski heart card is a romantic homage to the famous fractal.
The basic idea behind the creation of the pop-up heart card is a simple application of the standard algorithm for obtaining arbitrarily close approximations to the Sierpinski triangle. Unfortunately,
in the paper-heart case, the difficulties of folding come into play.
When done properly, the number of hearts (triangles) after the nth iteration is 3^n.
While it’s not the most careful application of the fractal generating algorithm, it’s still pretty cute. Although, since the (Lebesgue) area of a Sierpinski triangle is zero, the romantic gesture
might be a little lacking.
Loveable Equations
The Wolfram|Alpha blog did a nice post for this Valentine’s Day featuring some very loveable equations. Instead of paper, try generating some your favourite
shapes with the equation grapher of your choice.
There are five standard one dimensional curves (two dimensional graphs) for Valentine's Day: the (rotated) Cardioid, the First Heart Curve, the Second
Heart Curve, the Third Heart Curve, and the Fourth Heart Curve.
Of course, there are always the standard two Valentine’s Day Surfaces to choose from:
Taubin's Heart Or the Bonne Projection
If all else fails, candy is still a valid fall back option.
Happy Valentine's Day!
|
{"url":"http://individual.utoronto.ca/sck/vday.html","timestamp":"2014-04-19T20:29:31Z","content_type":null,"content_length":"7638","record_id":"<urn:uuid:cf03a2d5-d40d-4632-af9d-d52e367b0da1>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Some concepts regarding area that I need help understanding
Most 3d volume formulas look similar to their corresponding 2d area formulas for a particular shape. How does calculus, specifically integration, relate the two sets of formulas?
For example: how is the formula of a cone derived from the formula of a circle?
|
{"url":"http://www.physicsforums.com/showthread.php?t=133756","timestamp":"2014-04-21T14:59:18Z","content_type":null,"content_length":"22145","record_id":"<urn:uuid:1ba53797-706c-4fbb-a08b-0464eb54e55b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Portability non-portable (DeriveDataTypeable)
Stability unstable
Maintainer Marco Túlio Pimenta Gontijo <marcotmarcot@gmail.com>
Safe Haskell None
Chuchu is a system similar to Ruby's Cucumber for Behaviour Driven Development. It works with a language similar to Cucumber's Gherkin, which is parsed using package abacate.
This module provides the main function for a test file based on Behaviour Driven Development for Haskell.
Example for a Stack calculator:
Feature: Division
In order to avoid silly mistakes
Cashiers must be able to calculate a fraction
Scenario: Regular numbers
Given that I have entered 3 into the calculator
And that I have entered 2 into the calculator
When I press divide
Then the result should be 1.5 on the screen
import Control.Applicative
import Control.Monad.IO.Class
import Control.Monad.Trans.State
import Test.Chuchu
import Test.HUnit
type CalculatorT m = StateT [Double] m
enterNumber :: Monad m => Double -> CalculatorT m ()
enterNumber = modify . (:)
getDisplay :: Monad m => CalculatorT m Double
= do
ns <- get
return $ head $ ns ++ [0]
divide :: Monad m => CalculatorT m ()
divide = do
(n1:n2:ns) <- get
put $ (n2 / n1) : ns
defs :: Chuchu (CalculatorT IO)
= do
("that I have entered " *> number <* " into the calculator")
When "I press divide" $ const divide
Then ("the result should be " *> number <* " on the screen")
$ \n
-> do
d <- getDisplay
liftIO $ d @?= n
main :: IO ()
main = chuchuMain defs (`evalStateT` [])
chuchuMain :: (MonadBaseControl IO m, MonadIO m, Applicative m) => Chuchu m -> (m () -> IO ()) -> IO ()Source
The main function for the test file. It expects one or more .feature file as parameters on the command line. If you want to use it inside a library, consider using withArgs.
|
{"url":"http://hackage.haskell.org/package/chuchu-0.4.1/docs/Test-Chuchu.html","timestamp":"2014-04-20T02:06:44Z","content_type":null,"content_length":"5799","record_id":"<urn:uuid:69cdefd2-1fb8-4b2e-8926-82cc65f2c5af>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Forks Township, PA Science Tutor
Find a Forks Township, PA Science Tutor
...For instance, when learning about the cellular respiration, I teach it from a human being standpoint, a plant standpoint, and a microorganism standpoint, and talk about the consequences in each
organism if it doesn't work properly. I have taught high school biology for 4 years, and included in t...
10 Subjects: including biology, physiology, ecology, special needs
...I'd love to teach you in any of my listed academic subjects. I favor a dual approach, focused on both understanding concepts and going through practice problems. Let me know what concepts
you're struggling with before our session, so I can streamline the session as much as possible!
26 Subjects: including mechanical engineering, psychology, ACT Science, English
...I am currently a high school Environmental Science and Biology teacher, and I previously taught Math in Pennsylvania. I graduated from East Stroudsburg University with a B.S. in Biology and
Secondary Education, and hopefully my experience can help you! Math and Science are needed to understand the world around us and how things work, as well as to prepare us for the future.
23 Subjects: including chemistry, physics, elementary math, biochemistry
...Although I prefer to work with K-5 students, I am open to working with other age groups. I completed honors and AP courses in High school, and successfully completed the SAT, ACT, and GRE. I am
open to help anyone that I can.
49 Subjects: including sociology, ACT Science, physical science, anatomy
...I can remember strict teachers drilling proper English usage into my head: diagramming sentences, looking up words in the dictionary, re-writing papers that my teachers knew I didn't put much
effort into. It's no wonder that my English skills exceed those of most of today's English teachers. Un...
23 Subjects: including ACT Science, English, calculus, geometry
|
{"url":"http://www.purplemath.com/Forks_Township_PA_Science_tutors.php","timestamp":"2014-04-19T23:22:33Z","content_type":null,"content_length":"24408","record_id":"<urn:uuid:55ff089c-f7cd-4651-a02a-cee32b71d3ca>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
|
sequence, not easy but not hard
November 11th 2007, 07:47 AM
sequence, not easy but not hard
A prime number $p$ is given. A sequence of positive integers $a_{1}, a_{2}, a_{3},...$ is determined by this conditon:
$a_{n+1}=a_{n}+p \lfloor \sqrt[p]{a_{n}} \rfloor$
Prove that there is a term in this sequence, which is a $p$-power of an integer number.
December 2nd 2007, 12:07 PM
Anybody can help me ?? Is this so hard?
December 4th 2007, 09:48 PM
$a_1$ is a 1-power of an integer :).
or have I misunderstood?
December 9th 2007, 04:05 AM
1 is not prime number. It is harder question.
Please help me with this question, I cannot do it anyway..... Please
December 9th 2007, 07:24 AM
What do you mean by "p-power?" For example, choose p = 2 (to keep things simple.) Then given an $a_1 = 2$ (for instance) we get
$a_2 = a_1 + 2\sqrt{a_1} = 2 + 2\sqrt{2}$
$a_3 = a_2 + 2\sqrt{a_2} = 2 + \sqrt{2} + 2\sqrt{2 + \sqrt{2}}$
Are you saying that for some n of this sequence that $a_n = k^p$ where k is some integer?
December 9th 2007, 12:08 PM
Yes exactly!!
$p$ power of an integer means
$a_{n}=k^{p}$ k integer, p prime
|
{"url":"http://mathhelpforum.com/number-theory/22455-sequence-not-easy-but-not-hard-print.html","timestamp":"2014-04-16T10:35:41Z","content_type":null,"content_length":"8635","record_id":"<urn:uuid:d224a391-062b-447d-b204-f451a17284a3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What if 25% of the Cars were plug in...How much power is needed?
Last week Toyota announced a partnership with Tesla motors backed by $50M in investments. Tesla is the manufacturer of the trendy $100K all electric plug in sports car and has a model for us all in
the works, the
Model S
. Toyota wants the technology and I can just imagine a Tesla/Prius in every garage. Gov. Schwarzenegger hailed the joint venture as the future and asked us all to imagine CA with more plug ins.
“What we are witnessing today is an historic example of California’s transition to a cleaner, greener and more prosperous future. We challenged auto companies to innovate, and both Tesla and Toyota
stepped up in a big way, not only creating vehicles that reduce emissions and appeal to consumers but also boosting economic growth,” said Governor Schwarzenegger.
How will all these plug ins be powered? Everyone seems to think that electricity comes from a plug in the wall. Power has to come from somewhere. How will we make a green lifecycle from source to
vehicle? Wind turbines? Coal? Gas? Solar? Nuclear?
Lets break it down.
136,000,000 registered passenger vehicles in 2007. Lets say 25% of the cars suddenly become plug ins. Therefore: 34,000,000 vehicles.
16.8 KW = 56miles charged per hour per the Tesla website.
Assume 12,000 miles per year driven we have 214 hrs of charge at 16.8 KW or 3600 KW-Hrs per car.
with 34M cars we have 1.2 x 10^11 KW-Hrs
A new nuclear power plant generates 13 billion kilowatt-hours (kWh) or 1.3 x 10^10 (assuming 1600MWe and 92% availability).
So the final answer is: 9.4 new nuclear plants would be required to keep all those vehicles charged. One plant charges 3.6M vehicles. There were 16,153,952 new vehicles (cars trucks and SUVs) sold in
Conclusion: We need one new 1600MWe plant a year if 25% of the new cars are all electric using the numbers and 2007 sales rates above.
Electric vehicles are great, we just need to remember that the power source is part of the equation and that conservation and alternative energy will not be enough to account for future energy
3 billion barrels of gasoline were refined in 2006 out of 5.5 billion barrels of crude oil. 1.6 x10^9 gallons or 3.8 x 10^7 barrels of gasoline would be removed per year if 25% of new cars were all
electric. Using the ratio of gas to oil equates to 7 x 10^7 barrels of crude oil saved per year (2006 refining and 2007 car sales and 30mpg).
Numbers and calculations are for illustrative purposes. I am hoping for credit for error carried forward--ECF.
Good points raised from readers comments:
1. The number of cars calculation I used omits trucks and SUVs reducing the overall number of cars.
2. What about reduced electricity demand off peak at night? Good question. I did not take that into account, however smartgrid technology and offpeak charging will mitigate the effects of EV. There
is also talk of VTG or vehicle to grid where the electric vehicle could actually supply power during peak or the most expensive time of day and then charge during off peak or cheaper times of day.
1 comment:
1. what is the peak off-peak differential of the current generation capacity?
|
{"url":"http://powertrends.blogspot.com/2010/05/what-if-25-of-cars-were-plug-inhow-much.html","timestamp":"2014-04-18T20:43:18Z","content_type":null,"content_length":"56515","record_id":"<urn:uuid:c2d7b42e-6bcf-461a-9e7b-c169b207fc13>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Metric to Metric Conversions - Unit Cancelling Method
Unit cancellation is one of the easiest ways to keep control of your units in any science problem. This example converts grams to kilograms. It doesn't matter what the units are, the process is the
Question: How many kilograms are in 1,532 grams?
The graphic shows seven steps to convert grams to kilograms.
Step A shows the relationship between kilograms and grams.
In Step B, both sides of the equation are divided by 1000 g.
Step C shows how the value of 1 kg/1000 g is the equal to the number 1. This step is important in the unit cancellation method. When you multiply a number or variable by 1, the value is unchanged.
Step D restates the example problem.
In Step E, multiply both sides of the equation by 1 and substitute the left side's 1 with the value in step C.
Step F is the unit cancellation step. The gram unit from the top (or numerator) of the fraction is canceled from the bottom (or denominator) leaving only the kilogram unit.
Dividing 1536 by 1000 yields the final answer in step G.
The final answer is: There are 1.536 kg in 1536 grams.
|
{"url":"http://chemistry.about.com/od/chemistry101/ss/g2kgsteps.htm","timestamp":"2014-04-21T04:36:01Z","content_type":null,"content_length":"41182","record_id":"<urn:uuid:19232c05-daa9-4bd7-a381-4edaee23da9d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ormal language
Formal language
From Encyclopedia of Mathematics
in mathematical linguistics
An arbitrary set of chains (that is, words, cf. Word) over some (finite or infinite) alphabet mathematical linguistics and the theory of automata (cf. Automata, theory of) one considers various
effective ways of specifying a formal language, principally by means of formal grammars (cf. Grammar, formal) and automata of various types, which can, in the majority of cases, be described as
modifications of non-deterministic Turing machines (cf. Turing machine), often multi-tape, with some restrictions on the ways the machine works on each tape.
Operations on formal languages.
In addition to the usual set-theoretic operations on formal languages, one carries out: multiplication (or direct multiplication, or concatenation):
left division:
right division
and substitution: If
if each of the languages homomorphism; if all the
A variety of languages is an ordered pair
[1] A.V. Gladkii, "Formal grammars and languages" , Moscow (1973) (In Russian)
[2] S. Ginsburg, S. Greibach, Y. Hopcroft, "Studies in abstract families of languages" Mem. Amer. Math. Soc. , 87 (1969) pp. 1–32
For more details see Formal languages and automata.
How to Cite This Entry:
Formal language. A.V. Gladkii (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Formal_language&oldid=18696
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
|
{"url":"http://www.encyclopediaofmath.org/index.php/Formal_language","timestamp":"2014-04-19T01:47:35Z","content_type":null,"content_length":"22708","record_id":"<urn:uuid:965e4f45-ddec-4aac-a5e9-04bc8aa02c0a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding a Missing Angle
Date: 01/23/2002 at 21:11:07
From: Mel
Subject: Area of a triangle
I am home schooling and am stuck on an assignment about geometry, and
trigonometry. Here's one question:
I have a triangle with sides:
AB = 3.6 cm
BC = 6.5 cm
CA = 5.5 cm
and angle CAB is 90 degrees.
The question is: Using trigonometry, calculate the measure of angles
ABC and ACB.
How can I find the missing angle? If I had two angles I could subtract
them from 180 degrees, but I only have one.
And I don't know how to find the area of a triangle. In my assignment
a triangle measures:
CA = 10.8cm
AB = 7.2cm
BC = 13cm
and it says find the area. But I don't know how.
Can you help me?
Thanks in advance,
Date: 01/24/2002 at 20:11:06
From: Doctor Jeremiah
Subject: Re: Area of a triangle
Hi Mel,
Trigonometry is the way you find angles when you know only one angle.
There are only three important things you need to know to do
trigonometry. First, the triangle has to have a 90-degree angle.
Second, you need to know the definitions for sine, cosine, and
tangent. And third, in order to make those definitions you need to
define a couple of other things: the hypotenuse, the adjacent side,
and the opposite side.
The triangle MUST have a 90-degree angle. And what good luck, yours
/ |
/ |
/ |
/ |
6.5 5.5
/ |
/ |
/ |
B A
The hypotenuse is the side that does not touch the 90-degree angle.
It is also the longest side. In your triangle it is side BC and has a
length of 6.5
The adjacent side and the opposite side move around depending on what
angle you want to calculate. Say you want to find the value of angle
The adjacent side is the side that touches angle CBA (and isn't the
hypotenuse). In this case it is side AB because side AB touches angle
CBA and side AC does not.
The opposite side is the side that does not touch angle CBA (and also
isn't the hypotenuse). In this case it is side AC because side AC does
not touch angle CBA and side AB does.
And if you are trying to find angle BCA then BC is the hypotenuse,
AB is the opposite side, and AC is the adjacent side.
So what are these definitions good for? Well now we can define the
sine, cosine, and tangent:
sine: sin(angle) = opposite/hypotenuse
cosine: cos(angle) = adjacent/hypotenuse
tangent: tan(angle) = opposite/adjacent
Lets say you want to find the size of angle CBA. The hypotenuse is BC,
the adjacent side is AB, and the opposite side is AC.
We choose an equation from among sine, cosine, and tangent depending
on what information we know. If we know only two sides, we will pick
the equation that uses those two sides. But we know all three sides so
it doesn't matter.
So we will pick one:
sin(angle) = opposite/hypotenuse
sin(CBA) = AC/BC
sin(CBA) = 5.5/6.5
sin(CBA) = 0.846
Now how do we change this into something that equals angle CBA? Right
now it's the sine of CBA and that is not useful.
Well, you know how the square and square root cancel each other out:
sqrt(x squared) = x. It turns out that there are also functions that
undo sine, cosine, and tangent. They are called the arcsine,
arccosine, and arctangent:
arcsine: arcsin( sin(angle) ) = angle
arccosine: arccos( cos(angle) ) = angle
arctangent: arctan( tan(angle) ) = angle
So when we have an equation that looks like:
sin(CBA) = 0.846
We can do this:
sin(CBA) = 0.846
arcsin( sin(CBA) ) = arcsin( 0.846 )
CBA = arcsin( 0.846 )
Now, to do the arcsin you will need a calculator or a table or
arcsines or a slide rule.
On a calculator the arcsine is sometimes called the inverse sine, or
sometimes has the symbol of sin with a -1 exponent, and sometimes it's
unlabeled but is on the secondary function ogf the sine button.
For your second question, about triangle area:
When you don't know for sure that the triangle has a 90-degree angle,
you have to be sneaky. If you know for sure that it has a 90-degree
angle, then it's just half a rectangle (cut diagonally). But without
a 90-degree angle we need to use Heron's formula.
Heron also proves his famous formula:
If A is the area of a triangle with sides a, b, and c, and
s = (a+b+c)/2, then A = sqrt[s(s-a)(s-b)(s-c)].
Let's change your triangle to use sides named a, b and c:
a = 7.2cm
b = 10.8cm
c = 13cm
s = (a+b+c)/2
s = (7.2+10.8+13)/2
s = 15.5
A = sqrt[ s(s-a)(s-b)s-c) ]
A = sqrt[ 15.5 (15.5 - 7.2) (15.5 - 10.8) (15.5 - 13) ]
A = sqrt[ 15.5 (8.3) (4.7) (2.5) ]
A = 38.87978
Let me know if you still have questions.
- Doctor Jeremiah, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/54195.html","timestamp":"2014-04-24T10:36:11Z","content_type":null,"content_length":"10003","record_id":"<urn:uuid:2704c787-d285-4e2b-91db-bac4790219a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
For s&g/the sake of knowing: If the universe, for some crazy reason, was completely devoid of gravity, would we able to find the mass of anything? I'm aware that we can measure the mass of objects in
a massless environment by using a spring and looking at T=2(pi)(sqrt(m/k)), but that's based on the fact that we know the spring constant, as far as i'm aware, from experiments utilizing gravity on
• one year ago
• one year ago
Best Response
You've already chosen the best response.
(i.e. So what would happen if we didn't know the spring constant beforehand, and we couldn't, at least through gravity.)
Best Response
You've already chosen the best response.
inertia, things have mass independent of a local gravity field.
Best Response
You've already chosen the best response.
? Could you explain some more? :/
Best Response
You've already chosen the best response.
(I'm aware they still have mass, but I don't know if we could measure it.)
Best Response
You've already chosen the best response.
imagine an electron and a proton in deep space far away from any g field. the exert a force on each other. based on that force they accelerate according to F=ma.
Best Response
You've already chosen the best response.
Okay. So, inertia based on electrical properties? So, question on top of that, without electrical properties, would there be any way?
Best Response
You've already chosen the best response.
it's mass: the 'resistance' to change in velocity
Best Response
You've already chosen the best response.
Not sure what the implications were of you just said.
Best Response
You've already chosen the best response.
mass is intrinsic to particles. you don't need the* force of gravity to measure it. there are 4 forces, any of them will do.
Best Response
You've already chosen the best response.
Okay, so would it to be accurate to say that within the realm of classical mechanics, disavowing nuclear forces (I should've prefaced this with me saying that the experiment you would do to find
out something's mass would be macroscopic, and relatively "intuitive" as opposed to quantum mechanics), EM and Gravity are the only ways you could find something's mass?
Best Response
You've already chosen the best response.
a collision would probably do the trick too. collisions are problems involving mass and velocity that you can solve...
Best Response
You've already chosen the best response.
Could you give me a physical example of this (and relate the math to it, i'm guessing (blatantly assuming) you're talking about \[1/2mv ^{2} _{(1)} + 1/2mv ^{2} _{(2)} = E\] or something very
closely related? The reason I ask is I think, and again, this is just a thought, that most of the variables involved in elastic collisions are somehow derived through mass or dependent on mass.
I'm also just taking a second. \[v = d/t\] Okay, so you can determine velocity without mass. That could account for the velocities of both objects. So from here what could you do, using this
equation, (assuming that what you would do) to figure out an object's mass?
Best Response
You've already chosen the best response.
(By the way, thanks for this. I don't mean to be a bother by making this a super long thread or anything.)
Best Response
You've already chosen the best response.
momentum and energy are conserved in ideal collisions, so you could find mass if you measured velocities before and after the collision.
Best Response
You've already chosen the best response.
So, using: \[1/2mv ^{2} _{(x1)} + 1/2mv^{2} _{(y1)} = 1/2mv ^{2} _{(x2} + 1/2mv ^{2} _{(y2)}\]Mass could be determined. Just being totally clear.
Best Response
You've already chosen the best response.
that's conservation of energy, momentum is conserved also. in fact momentum is conserved in any collision, KE is just conserved in elastic collisions.
Best Response
You've already chosen the best response.
it's all interrelated anyway, you can't really get away from any one part of it. a particle is an excitation of 5 fields, it's a resonant excitation, it's stable and propagates. because only
certain resonances can be stable, it couples very precisely to fields based on a charge. coupling to the EM field gives the traditional notion of charge: electric charge, coupling to the strong
field gives 'color' charge, coupling to the Higg's field gives mass 'charge' (it's a little different than the charges of the other four fields). it's really just a bundle of coupling constants,
and each constant defines a set of properties. coupling to the higg's field defines its mass.
Best Response
You've already chosen the best response.
the fields make up space, (the exist even when there are no particles around), the way the space (fields) are bent by mass (the coupling to the higg's field) makes the gravitational force...
Best Response
You've already chosen the best response.
O.O I'm not at Higgs-field/Higgs Boson level stuff other than a very basic conceptual understanding. All I know at this point in the convo is that while the Higgs Field is likely, it's still
hypothetical and is at odds with SUSY regarding explaining dark matter.
Best Response
You've already chosen the best response.
(Despite this summer and CERN.)
Best Response
You've already chosen the best response.
well, they found the boson, and honestly, not much would work without the higg's field, mass would just be a random weird thing, two identical electrons could have wildly different masses for no
particular reason...
Best Response
You've already chosen the best response.
or no mass at all...
Best Response
You've already chosen the best response.
They likely found the Boson, last I heard. It's overwhelmingly likely that they did, but it's not definite. I have to ask, do you know any of this stuff through your career/major/what you
generally dedicate your life to, or are you just really good at Physics? lol.
Best Response
You've already chosen the best response.
armchair physicist / electrical engineer
Best Response
You've already chosen the best response.
YES that's exactly my double major! But i'm a freshman so I don't know what i'm doing yet, XD. Well, anyways, thank you for a very thought provoking convo and all, That gave me a pretty solid
answer of my question until it got into stuff I don't understand yet.
Best Response
You've already chosen the best response.
here's something you might not know that's related. the schrodinger wave equation, the one that describes a particle's probability to be in certain distribution or propagate... it's actually an
expression of F=ma: it's the space derivative of energy = the time derivative of momentum: potential = -constant*ma !
Best Response
You've already chosen the best response.
Well, again, thank you very much! (And yeah, I didn't know that.)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5061147ae4b0583d5cd29814","timestamp":"2014-04-21T10:30:58Z","content_type":null,"content_length":"93675","record_id":"<urn:uuid:bf78c99d-3db3-4bad-ae53-51928253af0e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistics Discrete and Continuous Distributions
Statistics Discrete and Continuous Distributions
Job Description
• The work needs to be clear and well organized.
• Clearly label each part of the project.
• Note, you can use Excel, Word, or do the project by paper and pencil / pen.
• You must show your work for the calculations. Simply providing an answer is not enough for the calculations. Please see the grading rubric for more information.
Normal Distribution
Jesse is curious to see what the possible daily demand will be for a new restaurant menu item. A similar meal item was introduced earlier in the year and its daily demand is normally distributed with
a mean of 100 and a standard deviation of 25. Assume that the daily demand of the new menu item will also be normally distributed with a mean of 100 and a standard deviation of 25. Help Jesse
determine the following probabilities and daily demand values.
1) Find the probability that daily demand will be between 110 and 130
2) Find the probability that daily demand will be between 80 and 120
3) Find the probability that daily demand will be greater than 130
4) Find the probability that daily demand will be less than 75
5) Find the probability that daily demand will be less than 90 or greater than 120
6) Find the value (let’s call it D) where the probability is 0.40 that demand will be less than this value
7) Find the value (let’s call it D) where the probability is 0.80 that demand will be greater than this value
Alan is reviewing the outstanding bills that have not been paid by his customers. Alan usually gives his customers 30 days to pay a bill. After 30 days the bill is deemed late. Assume the probability
is 30% that a customer will be late in paying a bill. For the next 15 bills that Alan reviews, find the following probabilities:
1) The probability that five will be late
2) The probability that at least five, but not more than ten will be late
3) The probability that more than three, but less than nine will be late
4) The probability that less than seven will be late
5) The probability that eight or more will be late
6) The expected number of late bills
Katie is curious about the number of people entering her store over a half hour period. Suppose that on average six people enter her store every 30 minutes. Help Katie determine the following
1) That exactly six customers will enter the store over the next 30 minutes
2) That less than five customers will enter the store over the next 30 minutes
3) That at least eight, but less than thirteen customers will enter the store over the next 30 minutes
4) That more than nine customers will enter the store over the next 30 minutes
Exponential Distribution.
Suppose the time between arrivals of customers at a store follows an exponential distribution with a mean of three minutes. Determine the following probabilities.
1) The time between arrivals is no more than seven minutes
2) The time between arrivals is between three and six minutes
3) The time between arrivals is greater than four minutes
|
{"url":"https://odesk.com/o/jobs/job/Statistics-Discrete-and-Continuous-Distributions_~01d6cae3a814a87d44/","timestamp":"2014-04-20T03:22:06Z","content_type":null,"content_length":"27527","record_id":"<urn:uuid:42b831fa-a275-4cbc-bbf9-ff06e17b639d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solve two algebra problems
February 2nd 2009, 09:53 PM #1
Feb 2009
Solve two algebra problems
Please help me to solve these equations:
X = -
X x 3x = -X + 4
Thank you !
Explain, please
Hello ShandyI don't understand what you mean. Can you write these more clearly? Use x^2 to mean $x^2$ if you like, and write the word 'times' to mean multiply, so that we don't think it's a
letter x.
please fix the format of your question. just use / to indicate fractions. example, type 1/x to mean $\frac 1x$ and 1/(x + 1) to mean $\frac 1{x + 1}$ etc, also, use ^ to indicate powers. type x^2
to mean $x^2$ for instance. you can use * to mean multiply. what you posted is confusing
Hello Shandy
I'm just guessing that your first equation might be
$x^2 = \frac{8}{15}$
If it is, the answers are
$x = \pm \sqrt{\frac{8}{15}}$
So get your calculator working! Divide 8 by 15. Then find the square root. Then put a $\pm$ sign in front, because there will be two answers.
The second one, I think, might be:
$x \times 3x = -x^2 + 4$
If so, then you solve it like this:
$x \times 3x = -x^2 + 4$
$\Rightarrow 3x^2 = -x^2 + 4$
$\Rightarrow 4x^2 = 4$
$\Rightarrow x^2 = 1$
$\Rightarrow x = \pm 1$
How am I doing? Did I guess right?
Looking at your post inside "Quote" tags, some of the original formatting is displayed while in message-entry mode. Assuming you are using "X" and "x" to actually be the same variable, your
equations appear to be:
. . . . .x^2 = 8/15
. . . . .(x)(3x) = -x^2 + 4
If so, then you can solve the first equation by taking the square root of each side of the equation, remembering the "plus-minus" on the right-hand side. You'll probably need to "rationalize the
denominator" to get the answer in the "right" format.
For the second one, simplify the left-hand side to get 3x^2. Get the variable term together, and then divide 4x^2 = 4 on both sides by 4. Then you can solve by taking the square root of either
Have fun!
February 2nd 2009, 10:00 PM #2
February 2nd 2009, 10:01 PM #3
February 3rd 2009, 12:29 AM #4
February 3rd 2009, 04:30 AM #5
MHF Contributor
Mar 2007
|
{"url":"http://mathhelpforum.com/algebra/71493-solve-two-algebra-problems.html","timestamp":"2014-04-20T09:01:40Z","content_type":null,"content_length":"49524","record_id":"<urn:uuid:21130cdc-baa4-489a-8df7-fde217cb8c41>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why are people so arrogant?
Mar-19-2012, 04:00 #1
Senior Member
Join Date
Dec 2011
This is not a thread about the question that these speakers are trying to answer, someone else already started such a thread as to the question of somethingness and nothingness.
My question is rather, why do people often dismiss this question as if it were really easy, and that the person asking this question is an idiot?
This conference was attended by people with impeccable credentials, professors at Yale, etc; they're undeniably intelligent, and think that this question is worth as worth asking, and yet most
people will dismiss this question is meaningless.
Why do you think people are so arrogant? This is a sociological question.
Basically comes down to a lack of good communication skills, which has little to do with whether you have a PHD or whatever. One is not born with communication skills, one learns them (in many
ways), it's a work in progress. Arrogance to me comes across as being part of poor communication - I extend it to lack of empathy, lack of understanding, inability to appreciate diversity (of
people, their opinions, etc.), building up ideologies to become hardened dogmas, lack of flexibility, ivory tower and clique mentalities, not speaking your mind and hiding behind various agendas,
etc. etc. etc...
Contrasts and Connections in Music
"There will be a moment or two of confusion, but if we all keep our heads, everything will be fine" - Cary Grant.
Basically comes down to a lack of good communication skills, which has little to do with whether you have a PHD or whatever. One is not born with communication skills, one learns them (in many
ways), it's a work in progress. Arrogance to me comes across as being part of poor communication - I extend it to lack of empathy, lack of understanding, inability to appreciate diversity (of
people, their opinions, etc.), building up ideologies to become hardened dogmas, lack of flexibility, ivory tower and clique mentalities, not speaking your mind and hiding behind various agendas,
etc. etc. etc...
This has nothing to do with my question as regards to why people dismiss the question.
Then what exactly is your question? Sorry I'm not clear.
Is it what's on that link "why is there anything?"
Or is it something like "why do people dismiss the question of questioning why someone is arrogant?"
Contrasts and Connections in Music
"There will be a moment or two of confusion, but if we all keep our heads, everything will be fine" - Cary Grant.
Actually I think there are two very different questions in the OP.
1) Why do people dismiss the question?
I don't know if arrogance has much to do with this question. Most people are enormously ignorant of issues surrounding this basic question. They are unaware of the progress already made and the
potential to understand issues related to the question. I think people's dismissal comes not from feeling they know so much but rather from not being able to imagine how we could learn anything
useful toward answering the question.
2) Why are people arrogant?
There are presumably psychological and evolutionary answers to this question, and I'm not sure what those answers are. Acting as though one is more knowledgeable than one actually is gives the
appearance of superiority over others. People who are perceived as better than others (more attractive, smarter, more athletic, etc.) often get treated better. Obviously arrogance can backfire
causing others to shun you, but arrogance must "work" often enough to make it a valuable strategy for some.
Edit: Don't worry, I answered the thread's question. I just took a round about way to do that, and I equivocated questions 1 & 2 because of my approach to the issue.
People dismiss the question, because they don't have a background in critical thinking. I mean no offense to people here, as if you aren't intelligent. You are in fact one of the more intelligent
groups I've been around, and I've been around some pretty smart cats. However, the chances are that if you don't have a background in critical thinking, that you follow absurd trains of thought
all the time. If "absurd" got you going, that was just some humor of mine. It's a technical term for logical impossibilities.
When I use the expression "critical thinking", I am using one of a few technical names/expressions for thinking that is formalized and divides itself between internal (how a proposition/argument
gels together with it's components) and external criticism
(how a proposition/argument corresponds with the outside world; i.e. observed phenomena or facts). Take these algebraic expressions as an example:
P1 implies P2. P3 implies P1. However, P2 does not imply P3, nor does P3 imply P2, and nor does P1 imply P3.
That looks fairly simple, right? Well, you probably make a mistake like "P1 implies P3" all of the time, most of those mistakes being informal fallacies. An "informal fallacy" is a logical error
that occurs when a statement with premises and a conclusion is given, but the premises do not actually imply the conclusion.
Here's an example:
N% of sample S has characteristic C.
(Where S is a sample unrepresentative of the population P.)
Therefore, N% of population P has characteristic C.
That was what you call a "weak analogy", because it had a biased sample.
This is a fallacy affecting statistical inferences, which are arguments of the following form:
N% of sample S has characteristic C.
(Where sample S is a subset of set P, the population.)
Therefore, N% of population P has characteristic C.
For example, suppose that an opaque bag is full of marbles, and you can win a prize by guessing the proportions of colors of the marbles in the bag. Assume, further, that you are allowed to stick
your hand into the bag and withdraw one fistful of marbles before making your guess. Suppose that you pull out ten marbles, six of which are black and four of which are white. The set of all
marbles in the bag is the population which you are going to guess about, and the ten marbles that you removed is the sample. You want to use the information in your sample to guess as closely as
possible the proportion of colors in the bag. You might draw the following conclusions:
60% of the marbles in the bag are black.
40% of the marbles in the bag are white.
Notice that if 100% of the sampled marbles were black, say, then you could infer that all the marbles in the bag are black, and that none of them are white. Thus, the type of inference usually
referred to as "induction by enumeration" is a type of statistical inference, even though it doesn't use percentages. Similarly, from the example we could just draw the vague conclusion that most
of the marbles are black and few of them are white.
The strength of a statistical inference is determined by the degree to which the sample is representative of the population, that is, how similar in the relevant respects the sample and
population are. For example, if we know in advance that all of the marbles in the bag are the same color, then we can conclude that the sample is perfectly representative of the color of the
population—though it might not represent other aspects, such as size. When a sample perfectly represents a population, statistical inferences are actually deductive enthymemes. Otherwise, they
are inductive inferences.
Moreover, since the strength of statistical inferences depend upon the similarity of the sample and population, they are really a species of argument from analogy, and the strength of the
inference varies directly with the strength of the analogy. Thus, a statistical inference will commit the Fallacy of Unrepresentative Sample when the similarity between the sample and population
is too weak to support the conclusion. There are two main ways that a sample can fail to sufficiently represent the population:
The sample is simply too small to represent the population, in which case the argument will commit the subfallacy of Hasty Generalization.
The sample is biased in some way as a result of not having been chosen randomly from the population. The Example is a famous case of such bias in a sample. It also illustrates that even a very
large sample can be biased; the important thing is representativeness, not size. Small samples can be representative, and even a sample of one is sufficient in some cases.
How many of us, without being careful and methodical when it comes to thinking through things, have made this kind of an error in judgment? People don't like that kind of a topic or discussion
direction, because regardless of how smart they are they probably commit fallacies regularly and are irritated by their lack of progress with other people.
So, questions like those asked by existentialists, seem stupid to the average person. The average person doesn't build a linear paper trail in their heads of "and", "if", "then", "or", "but",
"therefore", etc. let alone building series of premised arguments and algebraic functions. Arguing for many people has a lot to do with building straw man after straw man (to build a straw man is
to pretend/think you are criticizing your opponent's position, when you are in fact mistaken about what his/her arguments and/or position are), assuming motives, making hasty generalizations, or
improperly sourcing claims; baggage that gets carried along with a person's character level, education, and intelligence.
We are arrogant either because we are individuals or because we have been propped up to have the presumptions of arrogance. The human experience is so novel, that even though you and I can't help
but think fallaciously all the time, we are nonetheless going to be convinced of the models we've constructed to reckon with our experience. We have to stay sane, after all, and we don't have the
time to think like a professional philosopher about everything including the mundane.
In conclusion, I think question 1 and question 2 can be regarded the same, depending on the way you come at the issue. I came after a root cause of the issue, when someone can very well
differentiate answers for 1 and 2 by attacking the questions of character and maturity in people.
Last edited by Lukecash12; Mar-19-2012 at 10:15.
"Your mathematics are correct, but your physics are abominable..." Einstein
I meet and work with many, many arrogant people. At least people can be arrogant during certain situations, not necessarily all (I doubt many of them are as arrogant during their private times
with family, for example). This then leads to interesting questions as to why many that I meet are arrogant during those times when they do put on the air of arrogance.
The question, as interesting as it is, is readily dismissed because it sounds general, and naive. Reading your post carefully, it is interesting, but I would imagine the ones best qualified to
really explain the behavior are psychologists, and better, psychiatrists, not a behavioral sociologist with an undergraduate degree.
Personal pathologies, the social context of who is being arrogant and with or to whom, all come into play.
Unless there is a qualified psychiatrist roaming about the membership here, and they want to give us what would be a few moments of time (fifteen minutes can be around $250) I don't think you
will come up with other than intelligent stabs at an answer.
In the hours since this thread appeared, I've done some intense soul-searching, and reached the conclusion that I am arrogant due to an acute consciousness of my own inherent superiority to most
people in most regards.
For example, let us consider the art of breakdancing: with my freak on, I can bust a pretty mad move, getting right down with my bad self.
Likewise, massage: I'm, frankly speaking, the priest of petrissage, the embodiment of tapotement. (No: go back and reread that phrase with the correct pronunciation. Gitcher French on.)
Finally, my college roommates brewed their own beer.
Hence, and inevitably, arrogance. (My wife is throwing things at me.)
a remarkable capacity for not even hearing about
In the hours since this thread appeared, I've done some intense soul-searching, and reached the conclusion that I am arrogant due to an acute consciousness of my own inherent superiority to most
people in most regards.
For example, let us consider the art of breakdancing: with my freak on, I can bust a pretty mad move, getting right down with my bad self.
Likewise, massage: I'm, frankly speaking, the priest of petrissage, the embodiment of tapotement. (No: go back and reread that phrase with the correct pronunciation. Gitcher French on.)
Finally, my college roommates brewed their own beer.
Hence, and inevitably, arrogance. (My wife is throwing things at me.)
Or maybe you are lacking, in that you aren't big enough where it counts, and must compensate
Isn't it kind of arrogant to assume everyone's arrogant?
People who hide are afraid!
a remarkable capacity for not even hearing about
I didn't say everyone is arrogant.
I said anyone who dismisses the somethingness question easily is arrogant. That is a well defined group.
I also gave reasons for why I think the people in said clearly demarcated group are arrogant.
For example, I think all the people at that conference are not arrogant. Ergo, it is false that I think that everyone is arrogant.
Mar-19-2012, 04:06 #2
Senior Member
Join Date
Feb 2009
Blog Entries
Mar-19-2012, 04:11 #3
Senior Member
Join Date
Dec 2011
Mar-19-2012, 04:21 #4
Senior Member
Join Date
Feb 2009
Blog Entries
Mar-19-2012, 08:49 #5
Super Moderator
Join Date
Mar 2011
California, USA
Mar-19-2012, 10:02 #6
Mar-19-2012, 11:02 #7
Senior Member
Join Date
Jan 2010
25 Brook Street, Mayfair
Blog Entries
Mar-19-2012, 11:17 #8
Mar-19-2012, 13:31 #9
Senior Member
Join Date
Jan 2011
Blog Entries
Mar-19-2012, 13:49 #10
Senior Member
Join Date
May 2011
WA, U.S.
Blog Entries
Mar-19-2012, 13:58 #11
Mar-19-2012, 14:04 #12
Senior Member
Join Date
Jan 2011
Blog Entries
Mar-19-2012, 20:08 #13
Senior Member
Join Date
Dec 2011
|
{"url":"http://www.talkclassical.com/18548-why-people-so-arrogant.html","timestamp":"2014-04-20T08:57:08Z","content_type":null,"content_length":"108673","record_id":"<urn:uuid:b48f34c3-782b-468b-a8d7-484a1459ca42>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
am I correct or is Wolfram correct?
I think the delta is whether 4/5t = 40 is interpreted as 4/(5t) = 40 or (4/5)t = 40
The standard convention is that multiplication and division bind equally tightly even when multiplication is indicated by juxtaposition and that both are left-associative. That means the latter
interpretation is conventional.
Yes, but only when multiplication is explicitly indicated.
e.g.Even wolfram uses the convention if told 4/5*t
For implied multiplications I'm with Wolfram. Implied multiplications ought to bind tightest as a convention; that makes intuitive sense to me.
|
{"url":"http://www.physicsforums.com/showthread.php?p=4246836","timestamp":"2014-04-17T09:49:09Z","content_type":null,"content_length":"49990","record_id":"<urn:uuid:455245b9-f73a-475c-8d31-ab2a8ca86615>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roadway Traffic Crash Mapping: A Space-Time
Modeling Approach
Shaw-Pin Miaou*
Texas Transportation Institute
Joon Jin Song*
Bani K. Mallick*
Texas A&M University
Mapping transforms spatial data into a visual form, enhancing the ability of users to observe, conceptualize, validate, and communicate information. Research efforts in the visualization of traffic
safety data, which are usually stored in large and complex databases, are quite limited at this time. This paper shows how hierarchical Bayes models, which are being vigorously researched for use in
disease mapping, can also be used to build model-based risk maps for area-based traffic crashes. County-level vehicle crash records and roadway data from Texas are used to illustrate the method. A
potential extension that uses hierarchical models to develop network-based risk maps is also discussed.
Transportation-related deaths and injuries constitute a major public health problem in the United States. Injuries and fatalities occur in all transportation modes, but crashes involving motor
vehicles account for almost 95% of all transportation fatalities and most injuries. Despite the progress made in roadway safety in the past several decades, tens of thousands of people are still
killed and millions of people are injured in motor vehicle crashes each year. For example, in 1999 nearly 42,000 people were killed in traffic crashes and over 3.2 million more were injured.
Motor vehicle fatalities are the leading cause of unintentional injury deaths, followed by falls, poisonings, and drownings (about 16,000, 10,000, and 4,400 deaths per year, respectively) (NSC 2002).
They are also responsible for as many pre-retirement years of life lost as cancer and heart disease, about 1.2 million years annually. In fact, motor vehicle crashes are the leading cause of death
for people aged 1 to 33. Societal economic losses from these crashes are huge, estimated by the National Highway Traffic Safety Administration to exceed $230 billion in 2000. Thus, much work remains
to be done to develop a better understanding of the causes of vehicle crashes their chains of events and operating environments and to develop countermeasures to reduce the frequency and severity of
these crashes (USDOT 1996 1999).
Safety is one of the U.S. Department of Transportation's (USDOT's) five current strategic goals, and Rodney Slater, a former Transportation Secretary stated: "Safety is a promise we keep together."
Indeed, roadway safety intersects with all five core functional areas within conventional highway engineering (planning, design, construction, operation, and maintenance) and crosscuts the boundaries
of other engineering (vehicle and material) and nonengineering areas (human factors, public health, law enforcement, education, and other social sciences). Thus, research in roadway safety requires
interdisciplinary skills and essential cooperation from various engineering and social science fields.
In 2002, a series of conferences was hosted by the Bureau of Transportation Statistics under the general title of "Safety in Numbers: Using Statistics to Make the Transportation System Safer." These
conferences supported the top strategic safety goal of promoting public health and safety "by working toward the elimination of transportation-related deaths, injuries, and property damage" (USDOT
Contributing Factors, Countermeasures, and Resources
Motor vehicle crashes are complex events involving the interactions of five major factors: drivers, traffic, roads, vehicles, and the environment (e.g., weather and lighting conditions) (e.g., Miaou
1996). Among these factors, driver error has been identified as the main contributing factor to a great percentage of vehicle crashes, and many research efforts are being undertaken to better
understand human and other synergistic factors that cause or facilitate crashes. These factors include operator impairment due to the use of alcohol and drugs, medical conditions, or human fatigue
and the operator's interaction with new technologies used on the vehicle.
Countermeasures to reduce the number and severity of vehicle crashes are being sought vigorously through various types of community, education, and law enforcement programs and improved roadway
design and vehicle safety technology. However, many of these programs have limited resources and need better tools for risk assessment, prioritization, and resource scheduling and allocation.
Recognizing that "to err is human" and that driver behavior is affected by virtually all elements of the roadway environment, highway engineers are constantly redesigning and rebuilding roadways to
meet higher safety standards. This includes designing and building roadways and roadsides that are more "forgiving" when an error is made, more conforming to the physical and operational demands of
the vehicle, and that better meet drivers' perceptions and expectations in order to reduce the frequency of human errors (TRB 1987). The relatively low fatality rate on the Interstate Highway System
(about half the fatality rate of the remainder of the nation's highways) is evidence of the impact of good design on highway safety (Evans 1991).
Many impediments keep highway engineers from achieving their design and operational goals, including a lack of resources and a vast highway system that needs to be built, operated, maintained,
audited, and improved. They must make incremental improvements over time and make difficult decisions on the tradeoffs among cost, safety, and other operational objectives. Consequently, knowing
where to improve and how to prioritize and schedule improvements is as important as knowing which roadway and roadside features and elements to add or improve. Tools for identifying, auditing,
ranking, and clinically evaluating problem sites; developing countermeasures; and allocating resources are essential for highway engineers who make these decisions.
Disease Mapping and Methods
In recent years, considerable progress has been made in developing methodology for disease mapping and ecological analysis, particularly in the application of hierarchical Bayes models with
spatial-temporal effects. This model-based development has led to a dramatic gain in the number and scope of applications in public health policy studies of risks from diseases such as leukemia,
pediatric asthma, and lung cancer (Carlin and Louis 1996; Knorr-Held and Besag 1997; Xia et al. 1997; Ghosh et al. 1999; Lawson et al. 1999; Zhu and Carlin 1999; Dey et al. 2000; Sun et al. 2000;
Lawson 2001; Green and Richardson 2001). A special issue of Statistics in Medicine entitled "Disease Mapping with a Focus on Evaluation" was also recently published to report this development (vol.
19, Issues 17 18, 2000).
Among other applications, disease maps have been used to
• describe the spatial variation in disease incidence for the formulation and validation of etiological hypotheses;
• identify and rank areas with potentially elevated risk and time trends so that action may be taken; and
• provide a quantitatively informative map of disease risk in a region to allow better risk assessment, prioritization, and resource allocation in public health.
Clearly, roadway traffic safety planning has similar requirements and can potentially benefit from these kinds of maps.
Studies have shown that risk estimation using hierarchical Bayes models has several advantages over estimation using classical methods. One important point that has been stressed by almost all of
these studies is that individual incidences of diseases of concern are relatively rare for a typical analysis unit such as census tract or county. As a result, estimates based on simple aggregation
techniques may be unreliable because of large variability from one analysis unit to another. This variability makes it difficult to distinguish chance variability from genuine differences in the
estimates and is sometimes misleading for analysis units with a small population size. Hierarchical Bayes models, however, especially those Poisson-based generalized linear models with spatial random
effects, have been shown to have the ability to account for the high variance of estimates in low population areas and at the same time clarify overall geographic trends and patterns (Ghosh et al.
1999; Sun et al. 2000).
Note that in the context of sample surveys the type of problem described above is commonly referred to as a small area, local area, or small domain estimation problem. Ghosh and Rao (1994) conducted
a comprehensive review of hierarchical Bayes estimations and found them favorable for dealing with small area estimation problems when compared with other statistical methods. Hierarchical models are
also gaining enormous popularity in fields such as education and sociology, in which data are often gathered in a nested or hierarchical fashion: for example, as students within classrooms within
schools (Goldstein 1999). In these fields, hierarchical models are often called multilevel models, variance component models, or random coefficients models.
The overall strength of the Bayesian approach is its ability to structure complicated models, inferential goals, and analyses. Among the hierarchical Bayes methods, three are most popular in disease
mapping studies: empirical Bayes (EB), linear Bayes (LB), and full Bayes methods. These methods offer different levels of flexibility in specifying model structures and complexity in computations. As
suggested by Lawson (2001): "While EB and LB methods can be implemented more easily, the use of full Bayesian methods has many advantages, not least of which is the ability to specify a variety of
components and prior distributions in the model set-up."
To many statistical practitioners, it is fair to say that the challenges they face dealing with real-world problems come more often from the difficulties of handling nonsampling errors and unobserved
heterogeneity (because of the multitude of factors that can produce them) than from handling sampling errors and heterogeneity due to observed covariates. One potential advantage of using the full
Bayes model is the flexibility that it can provide in dealing with and adjusting for the unobserved heterogeneity in space and time, whether it is structured or unstructured.
Objectives and Significance of Work
Mapping transforms spatial data into a visual form, enhancing the ability of users to observe, conceptualize, validate, and communicate information. Research efforts in the visualization of traffic
safety data, which are usually stored in large and complex databases, are quite limited at this time because of data and methodological constraints (Smith et al. 2001). As a result, it is common for
engineers and other traffic safety officials to analyze roadway safety data and make recommendations without actually "seeing" the spatial distribution of the data. This is not an optimal situation.
To the best of our knowledge, unlike the public health community, which has developed models for disease mapping, the roadway safety research community has not done much to develop model-based maps
for traffic crash data. One of the objectives of the study presented here was to initiate development of model-based mapping for roadway traffic crashes. Vehicle crash records and roadway inventory
data from Texas were used to illustrate the nature of the data, the structure of models, and results from the modeling.
Overall, TxDOT maintains nearly 80,000 centerline-miles of paved roadways, serving about 400 million vehicle-miles per day. Over 63% of the centerline-miles are rural two-lane roads that, on average,
carry fewer than 2,000 vehicles per day. These low volume rural roadways carry only about 8% of the total vehicle-miles on state-maintained (or on-system) highways and have less than 7% of the total
reported on-system vehicle crashes. Due to the low volume and relatively low crash frequency on these roads, it is often not deemed cost-effective to upgrade these roads to the preferred design
standards. However, vehicles on these roadways generally travel at high speeds and thus tend to have relatively more severe injuries when vehicle crashes occur. For example, in 1999, about 26% of the
Texas on-system crashes were fatal (K), incapacitating injury (A), and nonincapacitating injury (B) (or KAB) crashes, compared with over 40% of the crashes on rural, two-lane, low volume on-system
roads (Fitzpatrick et al. 2001). As a result, we have chosen to focus this study on crashes occurring on rural, two-lane, low-volume, on-system roads.
This paper is organized as follows: the next section briefly describes the sources and nature of the data analyzed in this study, followed by a quick review of modeling and computational techniques
and a discussion of Poisson-based hierarchical Bayes model with space-time effects and possible variants. Results from models of various levels of complexities are then presented and compared, and we
conclude with a discussion of future work.
The Texas Department of Transportation (TxDOT) currently has 25 geographic districts that are responsible for highway development. The state's 254 counties are divided among the districts (figure 1).
Each district includes 6 to 17 counties. District offices divide their work into area offices and area offices into local maintenance offices. The variety of climates and soil conditions in Texas
places differing demands on its highways, so design and maintenance, right-of-way acquisition, construction oversight, and transportation planning are primarily administered and accomplished locally.
Annual KAB crash frequencies for rural, two-lane, low volume on-system roads at the county level from 1992 to 1999 were used for modeling in this study. Figure 2 shows the number of reported KAB
crashes by county in 1999, while figure 3 shows total vehicle-miles incurred for the same year (in millions of vehicle-miles traveled, or MVMT). In a bubble plot, figure 4 shows the highest, lowest,
and average of the "raw" annual KAB crash rates by county (in number of crashes per MVMT). Note that two of the urban counties and one rural county were removed from the analysis for having no (or
almost no) rural two-lane roads with the level of traffic volume of interest, i.e., fewer than 2,000 vehicles per day on average.
As shown in figure 4, crash rates in most counties were stable over the eight-year period, while several counties exhibited marked differences between the high and the low. There is a clear east-west
divide in terms of the KAB crash rates, with eastern counties on average showing considerably higher rates. Rural roadways in the eastern counties are limited by the rolling terrain and tend to have
less driver-friendly characteristics, with more horizontal and vertical curves (figures 5 and 6), restricted sight distance, and less forgiving roadside development (e.g., trees closer to the
travelway and steeper side slopes). In addition, with more and larger urbanized areas in the east, rural roads tend to have higher roadside development scores, higher access density, and narrower
lanes and/or shoulders (Fitzpatrick et al. 2001). In general, northern and eastern counties have higher proportions of wet-weather-related crashes (figure 7). Also, on average, rural roads in eastern
counties were found to have more crashes at intersections than western counties (figure 8).
The National Highway System Designation Act of 1995 repealed the national maximum speed limit and returned authority to set speed limits to the states. In early 1996, speed limits on many Texas
highways during daylight hours were raised from 55 mph to 70 mph for passenger vehicles and 60 mph for trucks. In a study using monthly time series data from January 1991 to March 1997, it was shown
that for those roads on which speed limits were raised, the number of KAB crashes increased in five out of the six highway categories studied during the post-intervention periods (Griffin et al.
1998). The speed limit increase also coincided with a 14% jump in speed-related fatalities, from 1,230 in 1995 to 1,403 in 1996. The number of speed-related injuries increased during that period as
well: 3.3% for incapacitating injuries and 7.0% for nonincapacitating injuries. Thus, for the low volume roads considered by this study, we expected to see a change in KAB crash rates in 1996.
As part of our modeling efforts, we developed a Poisson hierarchical Bayes model for traffic crash risk mapping at the county level for state-maintained rural, two-lane, low volume roads (fewer than
2,000 vehicles per day) in Texas. In general, the model consists of six components:
1. an offset term (i.e., a covariate with a fixed regression coefficient equal to 1), representing the amount of travel occurring on these roads;
2. a fixed TxDOT district effect;
3. a fixed or random covariate effect component modeling the spatial variation in crash risk due to spatial differences in number of wet days, number of sharp horizontal curves, and degrees of
roadside hazards;
4. one random spatial effect component using the inverse of the Great Circle distance between the centroid of counties as the weights for determining spatial correlations;
5. a fixed or random time effect component representing year-to-year changes; and
6. an exchangeable random effect term, which, for the purpose of this study, can be deemed as a pure independent random local space-time variation that is independent of all other components in the
In this paper, we consider a fixed effect as an effect that is subject only to the uncertainty associated with an unstructured noninformative prior distribution with no unknown parameters and the
sampling variation.^1 A fixed effect can, however, vary by individual districts, counties, and time periods (see the discussion of model hierarchy). Note also that unlike the traditional traffic
crash prediction models (Maher and Summersgill 1996; Miaou 1996; and Hauer 1997), which were concerned principally with modeling the fixed effects for individual sites (e.g., road segments or
intersections), this study focuses more on exploring the structure of the random component of the model for area-based data.
The rediscovery by statisticians in the last 15+ years of the Markov chain Monte Carlo (MCMC) methods and new developments, including convergence diagnostic statistics, are revolutionizing the entire
statistical field (Besag et al. 1995; Gilks et al. 1996; Carlin and Louis 1996; Roberts and Rosenthal 1998; Robert and Casella 1999). At the same time, improved computer processing speed and lower
data-collection and storage costs are allowing more complex statistical models to be put into practice. These complex models are often hierarchical and high dimensional in their probabilistic and
functional structures. Furthermore, many models also need to include dynamics of unobserved and unobservable (or latent) variables; deal with data distributions that are heavily tailed, highly
overdispersed, or multimodal; and work with datasets with missing data points. MCMC provides a unified framework within which model identification and specification, parameter estimation, performance
evaluation, inference, prediction, and communication of complex models can be conducted in a consistent and coherent manner.
With today's desktop computing power, it is relatively easy to sample the posterior distributions using MCMC methods that are needed in full Bayes methods. The advantage of full Bayesian treatment is
that it takes into account the uncertainty associated with the estimates of the random-effect parameters and can provide exact measures of uncertainty. Maximum likelihood methods, on the other hand,
tend to overestimate precision, because they ignore this uncertainty. This advantage is especially important when the sample size is small. Other estimation methods for hierarchical models are also
available, e.g., iterative generalized least squares (IGLS), expected generalized least squares (EGLS), and generalized estimating equations (GEE). These estimation procedures tend to focus on
obtaining a consistent estimate of the fixed effect rather than exploring the structure of the random component of the model (Goldstein 1999).
For some problems, existing software packages such as WinBUGS (Spiegelhalter et al. 2000) and MLwiN (Yang et al. 1999) can provide Gibbs and other MCMC sampling for a variety of hierarchical Bayes
models. For the models presented in this paper, we relied solely on the WinBUGS codes. At present, however, the type of spatial and temporal models available in WinBUGS is somewhat limited and will
be discussed later.
We let the indices i, j, and t represent county, TxDOT district, and time period, respectively,
where i = 1,2,...,I; j = 1,2,...,J; and t = 1,2,...,T.
For the data analyzed, we have 251 counties, divided among 25 districts, and 8 years of annual data (i.e., I = 251, J = 25, and T = 8). As indicated earlier, each district may include 6 to 17
counties, which will be represented by county set D [j], where j = 1,2,...,25. That is, D [j ] is a set of indices representing counties administered by TxDOT district j.
We define variable Y [it] as the total number of reported KAB crashes on the rural road of interest in county i and year t. We also define v [it] as the observed total vehicle-miles traveled (VMT) in
county i and year t for the roads in discussion, representing the size of the population at risk. In addition, we define x [itk] as the k th covariate associated with county i and year t. Three
covariates were considered.
The first covariate x [it1] is a surrogate variable intended to represent the percentage of time that the road surface is wet due to rain, snow, etc. Not having detailed weather data, we chose to use
the proportion of KAB crashes that occurred under wet pavement conditions as a surrogate variable. In addition, we do not expect general weather characteristics to vary much between neighboring
counties. Therefore, the proportion for each county is computed as the average of this and six other neighboring counties that are close to the county in terms of their Great Circle distances. We do,
however, expect weather conditions to vary significantly from year to year. Thus, for each county i, we have x [it1] change with t.
The second covariate x [it2] is intended to represent spatial differences in the number of sharp horizontal curves in different counties. The actual inventory of horizontal curves on the highway
network is not currently available. However, when a traffic crash occurs, site characteristics including the horizontal curvature are coded in the traffic crash database. We chose to use the
proportion of KAB crashes that occurred on sharp horizontal curves in each county as a surrogate variable, and we define a sharp horizontal curve as any road segment having a horizontal curvature of
4 or higher degrees per 100-foot arc. Given that this roadway characteristic is mainly driven by terrain variations, we do not expect this characteristic to vary much between neighboring counties.
Therefore, as in the first covariate, the proportion for each county is computed as the average of this and six other neighboring counties that are close to the county in terms of their Great Circle
distances. Furthermore, for this type of road, we did not expect the proportion to vary in any significant way over the eight-year period in consideration. Thus, the average proportion from 1992 to
1999 was actually used for all t. In other words, for each county i, x [it2] are the same for all t.
The third covariate x [it3] is a surrogate variable intended to represent degrees of roadside hazards. As in the second covariate, the actual inventory of hazards (ditches, trees, and utility poles),
available clear zones, and geometry and surface type of roadsides are not available. Similar to the first covariate, a surrogate variable was devised to indicate the proportion of KAB crashes that
ran off roads and hit fixed objects on the roadside. We also do not expect this characteristic to vary much between neighboring counties over the eight-year period in consideration. Again, the
average proportion from 1992 to 1999 was used for all t, i.e., for each county i, x [it3] are the same for all t. Figure 9 shows the spatial distribution of this variable.
The use of these surrogate variables is purely data driven (as opposed to theory driven) and empirical in nature. We use the proportion of wet crashes (x [it1]) as an example to explain the use and
limitation of such surrogate measures in practice. First, variables such as "percentage of wet crashes" and "wet crashes to dry crashes ratio" are commonly used in wet-weather accident studies.
Examples in the literature include Coster (1987), Ivey and Griffin (1990), and Henry (2000). These authors reviewed various wet-weather accident studies, and the relationships between 1) skid numbers
(or friction values) of pavement and percentage of wet weather accidents, and 2) skid numbers and wet/dry pavement surfaces were quite well documented. Although they were conducted with limited data,
these wet weather accident studies also suggest that crash rates are higher during wet surface conditions than under dry surface conditions, and some indicate that traffic volumes are reduced by
about 10% to 20% during wet weather in rural areas (no significant reduction was found in urban areas).
Second, the use of percentage of wet crashes as a surrogate variable in this study to explain the variation of crash rates by county mixes several possible relationships and has limited explanatory
power. A positive correlation of percentage of wet crashes and crash rate mixes has at least two possible relationships: 1) the effect of wet surface conditions on crash rates, and 2) the effect of
rainfall (or other precipitation) on traffic volumes. Everything else being equal, if the wet surface crash rate is the same as the dry surface crash rate, then we do not expect this positive
correlation to be statistically significant in the model regardless of the relative traffic volumes during wet or dry surface conditions. We interpret a positive correlation as an indication that a
higher crash rate is indeed experienced during wet surface conditions than during dry conditions. However, because of the lack of data on traffic volumes by wet and dry surface conditions, we are not
able to quantify the difference in crash rates under the two surface conditions. This is the main limitation in using such a surrogate measure.
Probabilistic and Functional Structures
The space-time models considered in this study are similar to the hierarchical Bayes generalized linear model used in several disease mapping studies cited earlier. At the first level of hierarchy,
conditional on mean μ[it], Y[it] values are assumed to be mutually independent and Poisson distributed as
The mean of the Poisson is modeled as
μ[it] = ν[it]λ[it] (2)
where total VMT v [it ] is treated as an offset and λ[it] is the KAB crash rate. The rate, which has to be non-negative, is further structured as
where log is the natural logarithm, I(S) is the indicator function of the set S defined as
This makes the first term on the right hand side of equation 3 the intercept representing district effects at different years; x[itk] are covariates discussed earlier and their interactions; δ[t]
represents year-to-year time effects due, e.g., to speed limit, weather, and socioeconomic changes; φ[i] is a random spatial effect; e[it] is an exchangeable, unstructured, space-time random effect;
and α[jt] and β[k] are regression parameters to be estimated from the data. As defined earlier, D[j ] is a set of indices representing counties administered by TxDOT district j.
Many possible variations of equation 3 were and could potentially be considered in this study. For each component that was assumed to have a fixed effect, the second level of hierarchy was chosen to
be an appropriate noninformative prior. On the other hand, for each component that was assumed to have a random effect, the second level of hierarchy was a prior with certain probabilistic structure
that contained unknown parameters. The priors for these unknown parameters (called hyperpriors) constitute the third level of the hierarchy. What follows are discussions of the variation of models
considered by this study, some limitations of the WinBUGS software, and possible extensions of the models considered.
The intercept term, which represents the district effect over time, was assumed to have fixed effects with noninformative normal priors. For the covariates x[itk], we considered both fixed and random
effects. That is, β[k] was assumed to be either a fixed value or random variable. The three covariates discussed earlier and three of their interactive terms, x[it4] = x[it1]x[it2], x[it5] = x[it1]x[
it3], and x[it6] = x[it2]x[it3], were included in the model. It is important to note that the values of these covariates were centered for better numerical performance. Noninformative normal priors
were also assumed for fixed-effect models. For the random-effect model, β[k], k = 1,2,...,6, are assumed to be independent and normally distributed with mean N (
With 251 counties and 8 years of data, the data are considered to be quite rich spatially but rather limited temporally, as are data in many disease mapping studies. Because of this limitation, we
only considered two simple temporal effects for δ[t] : fixed effects varying by t (or a year-wise fixed-effect model) and an order-one autoregressive model (AR(1)) with the same coefficient for all
t. Again, noninformative priors were used for both models. For the model to be identifiable, in the fixed-effect model, δ[1] was set to zero, and in the AR(1) model, δ[1] was set to be an unknown
fixed constant. From the fixed effect, we expected to see a change in δ[t] at t = 5 (1996), due in part to the speed limit increase in that year.
Recent disease mapping research has focused on developing more flexible, yet parsimonious, spatial models that have attractive statistical properties. Based on the Markov random field (MRF) theory,
Besag's conditional autoregressive (CAR) model (Besag 1974 and 1975) and its variants are by far the most popular ones adopted in disease mapping. We considered several Gaussian CAR models, all of
which have the following general form
where [i] given φ[-i];
φ[-i] represents all φ except φ[i],
C[i ] is a set of counties representing "neighbors" of county i,
η is a fixed-effect parameter across all i, and
ω[ii*] is a positive weighting factor associated with the county pair (i, i*).
This equation is shown to be equivalent to
In our study, we had
where d[ii*] is the Great Circle distance between the centroid of county i and i*, and c is a constant parameter equal to 1 or 2 (note that d[ii*] ranges roughly from 30 to 700 miles.)
With regard to the number of neighbors, we adopted a more generous definition by allowing every other county i* (≠i) to be a neighbor of county i.
In theory, we could treat the constant c as an unknown parameter and estimate it from the data. However, in the current version of WinBUGS, the weights of the built-in CAR spatial model do not allow
unknown parameters (Spiegelhalter et al. 2000), which we found to be a limitation for our application. In a separate attempt to find a good range of the decay constant for the inverse distance weight
in the CAR model, we adopted a simpler model that included only the offset, the yearwise time effect, and the Gaussian CAR components. We estimated the same model with different c values between 0
and 4 and found that model performance was best achieved when the decay constant was set between 1 and 2 (based on the deviance information criterion to be discussed shortly). Weights with an
exponential form ω[ii*] = exp(-cd[ii*]) were also examined but are not reported in this paper.
We also explored the L-1 CAR models of the following form:
where η is a fixed-effect parameter the same for all i. Weights with the same c as in the Gaussian CAR models were considered. WinBUGS constrains the sum of φ[i] to zero to make both the Gaussian CAR
and L-1 CAR spatial models identifiable. A non-informative gamma distribution was used as hyperpriors for η in equations 5 and 6.
The spatial correlation structure represented by equations 5 and 6 is considered global in the sense that the distribution functions and associated parameters (c and η) do not change by i. More
sophisticated models allowing spatial correlation structure to be adaptive or location specific are being actively researched (e.g., Lawson 2000; Green and Richardson 2001). Still, computational
challenges seem to be keeping researchers from exploring more flexible, yet parsimonious, space-time interactive effects, and more research in this area needs to be encouraged (Sun et al. 2000).
For the exchangeable random effects, we considered two commonly used distributions. One distribution assumed e[it] to be independent and identically distributed (iid) as
Another distribution assumed an iid one-parameter gamma distribution as
which has a mean equal to 1 and a variance 1/ψ. The use of a one-parameter gamma distribution (instead of a two-parameter gamma) ensures that all model parameters are identifiable. Again,
non-informative inverse gamma and gamma distributions were used as hyperpriors for
Deviance Information Criterion and Variants
The deviance information criterion (DIC) has been proposed to compare the fit and complexity (measured by the effective number of parameters) of hierarchical models in which the number of parameters
is not clearly defined (Spiegelhalter et al. 1998; Spiegelhalter et al. 2002). DIC is a generalization of the well-known Akaike Information Criterion (AIC) and is based on the posterior distribution
of the deviance statistic
f(y) is some standardizing function of the data alone. For the Poisson model, f(y) is usually set as the saturated likelihood, i.e.,
where μ is a vector of the statistical means of vector y.
DIC is defined as a classical estimate of fit plus twice the effective number of parameters, which gives
p [D ] is the effective number of parameters for the model; and
As with AIC, models with lower DIC values are preferred. From equation 9, we can see that the effective number of parameters p[D] is defined as the difference between the posterior mean of the
deviance p[D] can be trivially obtained from an MCMC analysis by monitoring both θ and D(θ) during the simulation. For the random-effect model considered in equations 1 through 3, the parameter
vector θ should include α[jt], β[k], δ[t], φ[i] and e[it] for all i, j, k, and t.
In addition to DIC values and associated quantities p[D], we also used some goodness-of-fit measures that attempted to standardize DIC in some fashion. This includes DIC divided by sample size n and
where DIC[model] is the DIC value for the model under evaluation;
DIC[max] is the maximum DIC value under a fixed one-parameter model; and
DIC[ref] is a DIC value from a reference model that, ideally, represents some expected lower bound of the Poisson hierarchical model for a given dataset.
Clearly, R^2 goodness-of-fit measure for regression models. Through simulations, Miaou (1996) evaluated several similar measures using AIC for overdispersed Poisson models. Since DIC is known to be
noninvariant with respect to the scale of the data (Spiegelhalter et al. 1998; Spiegelhalter et al. 2002), an analytical development of DIC[ref] is difficult. However, we know that for a model with a
good fit, n (Spiegelhalter et al. 2002). We, therefore, chose DIC[ref] = n as a conservative measure for computing
Another goodness-of-fit indicator considered is 1 / ψ, which is the variance of exp(e[it]) under the gamma model, indicating the extent of overdispersion due to exchangeable random effects. In
theory, this value could go to zero when such effects vanish. Thus, similar to
where (1/ψ)[model] is the variance of exp(e[it]) for the model under consideration, and
(1/ψ)[max] is the amount of overdispersion under the simplest model.
In essence, 1/ψ)[ref], the expected lower bound, is set to zero.
Table 1 lists 42 models of various complexities examined by this study. These models include simplified versions of the general model presented in equations 2 and 3, as well as models for reference
purposes, e.g., models 1 to 3. Model 1 is a saturated model, in which the estimates of the Poisson means y[it]. Model 2, expressed as Alpha0, is a one-parameter Poisson model without the offset, and
model 3 is another one-parameter model with the offset. Essentially, model 2 focuses on traffic crash frequency and model 3 on traffic crash rate.
In table 1, the following symbols are used:
• Alpha(j) stands for fixed district effects.
• Beta.Fix and Beta.N respectively represent fixed covariate effects and random covariate effects with independent normal priors.
• Time.Fix and Time.AR1 respectively stand for fixed time and AR(1) time effects.
• For the random spatial effects, Space.CAR.N1 and Space.CAR.L1, represent the Gaussian and L-1 CAR models shown in equations 5 and 6, respectively, and both have a decay constant c equal to 1.
• Space.CAR.N2 and Space.CAR.L2 represent similar spatial models with a decay constant c equal to 2.
• The components e.N and e.Gam represent exchangeable random effects as presented in equations 7 and 8, respectively.
We experienced some computational difficulties for the models that included the Beta.N component when we tried to include all six main and interactive effects. Therefore, for all models with the
Beta.N component, we only included the three main effects.
In computing [max] is defined as the maximum DIC value under a fixed one-parameter model, which is model 2 in the table when crash frequency is the focus and model 3 when crash rate is the focus.
Similarly, in computing [max] is set as the amount of overdispersion under the simplest model with an e.Gam error component, which is model 11 for models focusing on the crash rate.
As a rule, in our development we started with simpler models, and the posterior means of the estimated parameters of these simple models were then used to produce initial values for the MCMC runs of
more complex models. In general, the models presented in the table are ordered by increasing complexity: intercepts only, intercepts + covariate effect, intercepts + covariate effect + exchangeable
effect, intercepts + covariate effect + exchangeable effect + spatial/temporal effects, and so on. Models 7 to 9 and the last eight models include a more complex fixed-effect intercept term. The
models are presented in the table in line with the order in which they were estimated with the WinBUGS codes.
The MCMC simulations usually reached convergence quite quickly. Depending on the complexity of the models, for typical runs, we performed 10,000 to 20,000 iterations of simulations and removed the
first 2,000 to 5,000 iterations as burn ins. As in other iterative parameter estimation approaches, good initial estimates are always the key to convergence. For some of the models, we have hundreds
of parameters and MCMC monitoring plots based on the Gelman-Rubin statistics (which are part of the output from the WinBUGS codes). Because estimated parameters usually converge rather quickly, their
convergence plots, which are not particularly interesting to show, are not presented here. Table 2 shows some statistics of the estimated posterior density of a selected number of parameters for
model 27, which was one of the best models in terms of the DIC value and other performance measures discussed above. Also, figure 10 presents estimated posterior mean crash rates, as well as their
2.5 and 97.5 percentiles, in a bubble plot for 1999 by county.
From table 2, one can see that the fixed-time effect δ[t] jumps from about 0 in previous years to about 0.05 in t = 4 (1995) and has another increase to about 0.09 at t = 5 (1996). The value comes
down somewhat (about 0.06) in 1998 (t = 7) and 1999 (t = 8) but is still significantly higher than those in the preintervention periods. It has been suggested that the jump in 1995 was perhaps due to
higher driving speeds by drivers in anticipation of a speed limit increase, and higher crash rates in 1996 were due in part to the speed limit increase and less favorable winter weather (Griffin et
al. 1998). Lower δ[t] values in 1998 and 1999 may suggest that drivers had adjusted themselves and become more adapted to driving at higher speeds.
From the same model (model 27), estimates of α[j], i.e., district effects, range from about 0.5 to 1.5, indicating significant district-level variations in crash risk. The covariate effects β[k]
indicate that the horizontal curve variable is the most influential and statistically significant variable in explaining the crash rate variations over space. Wet pavement condition is the
second-most significant variable. The ran-off-road fixed-object variable is not a statistically significant variable, which suggests that ran-off-road fixed-object crash risk is correlated with and
perhaps exacerbated by the presence of sharp horizontal curves and wet pavement conditions.
From DIC and other performance measures in table 1, several observations can be made:
• For the exchangeable random effect, models with a gamma assumption (equation 8) are preferred over those with a normal assumption (equation 7). This is observed by comparing the performance of,
e.g., model 15 with model 14, model 18 with model 17, and model 27 with model 26.
• Models with fixed covariate effects are favored over their random-effect counterparts. This is seen by comparing, e.g., model 25 with model 24 and model 33 with model 34.
• Models with fixed time effects (e.g., model 23) performed better than those with AR(1) time effects (e.g., model 22).
• Models with separate district and time effects (α[j] and δ[t]) are preferred over those with joint district time effects (α[jt]). For example, we can compare the performance of model 27 with
model 42 and model 40 with model 24.
• For comparable model structures, adding a spatial component decreases the DIC value quite significantly, which indicates the importance of the spatial component in the model. As an example, we
can compare model 17 with model 20. Except for the spatial component, these two models have the same structures (in intercept terms, covariate effects, and the error component). Model 17 does not
have any spatial component, while model 20 includes a normal CAR model. The DIC value drops from 3,287 for model 17 to 2,755 for model 20, a very significant reduction when compared with the
differences in DIC values for various models presented in table 1. Other comparisons that would give the same conclusion include model 19 vs. model 22 or model 38 with models 40 and 42.
• No particular spatial CAR models considered by this study, i.e., CAR.N1, CAR.L1, CAR.N2, or CAR.L2, were clearly favored over other CAR models.
• Despite the empirical nature of the two goodness-of-fit measures
Most of the methodologies developed in disease mapping were intended for area-based data, e.g., number of cancer cases in a county or census tract during a study period. While we demonstrate the use
of some of these methodologies for roadway traffic crashes at the county level, we recognize that, fundamentally, traffic crashes are network-based data, whether they are intersection,
intersection-related, driveway access-related, or nonintersection crashes. Figure 11 gives an example of the locations of KAB crashes on the state-maintained highway network of a Texas county in
Thus, an obvious extension of the current study is to develop risk maps for traffic crashes on road networks. The problem is essentially one of developing hierarchical models for Poisson events on a
network (or a graph). We expect that, in different applications, these maps may need to be developed by roadway functional classes, vehicle configurations, types of crashes (e.g., those involving
drunk drivers), and crash severity types (e.g., fatal, injury, and noninjury crashes). We also expect these network-based maps to be useful for roadway safety planners and engineers to 1) estimate
the cost and benefit of improving or upgrading various design and operational features of the roadway, 2) identify and rank potential problem roadway locations (or hotspots) that require immediate
inspection and remedial action, and 3) monitor and evaluate the safety performance of improvement projects after the construction is completed. Such maps need to be constructed from quality
accident-, traffic-, and roadway-related databases and with scientifically grounded data visualization and modeling tools.
Modeling and mapping of traffic crash risk need to face all the challenges just as in the field of disease mapping, i.e., multilevel data and functional structures, small areas of occurrence of
studied events at each analysis unit, and strong unobserved heterogeneity. The hierarchical nature of the data can be described as follows: In a typical roadway network, other than the fact that
roadway networks are connected or configured in specific ways, individual road entities are classified by key geometric characteristics (e.g., segments, intersections, and ramps), nested within
roadway functional or design classifications, further nested within operational and geographical units, and subsequently nested within various administrative and planning organizations. Strong
unobserved heterogeneity is expected because of the unobserved driver behaviors at individual roadway entities that are responsible for a large percentage of crash events.
Every state maintains databases on vehicle crash records and roadway inventory data. We hope that the results of our study using Texas data will motivate the development of similar studies in other
states. We also envision that the network-based hierarchical models we propose can potentially be utilized in other transportation modes and in computer and communication network studies to further
the exploration and interpretation of incidence data. Furthermore, the hierarchical Bayes models with spatial random effects described in this paper can be used to develop more efficient sampling
surveys in transportation that alleviate multilevel and small-area problems. Finally, the models have been shown to have the ability to account for the high variance of estimates in low-population
areas and at the same time clarify overall geographic trends and patterns, which make them good tools for addressing some of the equity issues required by the Transportation Equity Act for the 21st
This research was supported in part by the Bureau of Transportation Statistics, U.S. Department of Transportation, via transportation statistics research grant number DTTS-00-G-B005-TX. The authors
are, however, solely responsible for the contents and views expressed in this paper.
Besag, J. 1974. Spatial Interaction and the Statistical Analysis of Lattice Systems (with Discussion). Journal of the Royal Statistical Society. Series B 36:192 236.
Besag, J. 1975. Statistical Analysis of Non-Lattice Data. Statistician 24(3):179 195.
Besag, J., P. Green, D. Higdon, and K. Mengersen. 1995. Bayesian Computation and Stochastic Systems (with Discussion). Statistical Science 10:3 66.
Carlin, B.P. and T.A. Louis. 1996. Bayes and Empirical Bayes Methods for Data Analysis. London, England: Chapman and Hall/CRC.
Coster, J.A. 1987. Literature Survey of Investigations Performed to Determine the Skid Resistance/Accident Relationship, Technical Report RP/37. National Institute for Transport and Road Research,
South Africa.
Dey, D., S. Ghosh, and B.K. Mallick. 2000. Bayesian Generalized Linear Model. New York, NY: Marcel Dekker.
Evans, L. 1991. Traffic Safety and the Driver. New York, NY: Van Nostrand Reinhold.
Fitzpatrick, K., A.H., Parham, M.A. Brewer, and S.P. Miaou. 2001. Characteristics of and Potential Treatments for Crashes on Low-Volume, Rural Two-Lane Highways in Texas, Report Number 4048-1. Texas
Transportation Institute, College Station, TX.
Gelman, A., J.B. Carlin, H.S. Stern, and D.B. Rubin. 1995. Bayesian Data Analysis, 1st ed. Boca Raton, FL: CRC Press.
Ghosh, M., and J.N.K. Rao. 1994. Small Area Estimation: An Appraisal. Statistical Science 9(1):55 76.
Ghosh, M., K. Natarajan, L.A. Waller, and D. Kim. 1999. Hierarchical Bayes for the Analysis of Spatial Data: An Application to Disease Mapping. Journal of Statistical Planning and Inference 75:305
Gilks, W.R., S. Richardson, and D.J. Spiegelhalter. 1996. Markov Chain Monte Carlo in Practice. London, England: Chapman and Hall.
Goldstein, H. 1999. Multilevel Statistical Models, 1st Internet edition. Available at http://www.ioe.ac.uk/multilevel/.
Green, P.J. and S. Richardson. 2001. Hidden Markov Models and Disease Mapping, Working Paper. Department of Mathematics, University of Bristol, United Kingdom.
Griffin, L.I., O. Pendleton, and D.E. Morris. 1998. An Evaluation of the Safety Consequences of Raising the Speed Limit on Texas Highways to 70 Miles per Hour, Technical Report. Texas Transportation
Institute, College Station, TX.
Hauer, E. 1997. Observational Before-After Studies in Road Safety. New York, NY: Pergamon Press.
Henry, J.J. 2000. Design and Testing of Pavement Friction Characteristics, NCHRP Project 20-5, Synthesis of Highway Practice, Topic 30-11. Washington, DC: Transportation Research Board.
Ivey, D.L. and L.I. Griffin, III. 1990. Proposed Program to Reduce Skid Initiated Accidents in Texas. College Station, TX: Texas Transportation Institute.
Knorr-Held, L. and J. Besag. 1997. Modelling Risk from a Disease in Time and Space, Technical Report. Department of Statistics, University of Washington.
Lawson, A.B., A. Biggeri, D. Bohning, E. Lessafre, J.F. Viel, and R. Bertollini, eds. 1999. Disease Mapping and Risk Assessment for Public Health. Chichester, UK: Wiley.
Lawson, A.B. 2000. Cluster Modelling of Disease Incidence via RJMCMC Methods: A Comparative Evaluation. Statistics in Medicine 19:2361 2375.
____. 2001. Tutorial in Biostatistics: Disease Map Reconstruction. Statistics in Medicine 20:2183 2204.
Maher, M.J. and L. Summersgill. 1996. A Comprehensive Methodology for the Fitting of Predictive Accident Models. Accident Analysis & Prevention 28(3):281 296.
Miaou, S.P. 1996. Measuring the Goodness-of-Fit of Accident Prediction Models, FHWA-RD-96-040. Prepared for the Federal Highway Administration, U.S. Department of Transportation.
National Safety Council (NSC). 2002. Report on Injuries in America, 2001. Available at http://www.nsc.org/library/rept2000.htm, as of July 2003.
Robert, C.P. and G. Casella. 1999. Monte Carlo Statistical Methods. New York, NY: Springer-Verlag.
Roberts, G.O. and J.S. Rosenthal. 1998. Markov Chain Monte Carlo: Some Practical Implications of Theoretical Results. Canadian Journal of Statistics 26:5 31.
Smith, R.C., D.L. Harkey, and B. Harris. 2001. Implementation of GIS-Based Highway Safety Analyses: Bridging the Gap, FHWA-RD-01-039. Prepared for the Federal Highway Administration, U.S. Department
of Transportation.
Spiegelhalter, D.J., N. Best, and B.P. Carlin. 1998. Bayesian Deviance, the Effective Number of Parameters, and the Comparison of Arbitrarily Complex Models, Research Report 98-009. Division of
Biostatistics, University of Minnesota.
Spiegelhalter, D.J., A. Thomas, and N.G. Best. 2000. WinBUGS Version 1.3 User Manual. Cambridge, UK: MRC Biostatistics Unit. Available at http://www.mrc-cam.ac.uk/bugs.
Spiegelhalter, D.J., N. Best, B.P. Carlin, and A. Linde. 2002. Bayesian Measure of Model Complexity and Fit. Journal of Royal Statistical Society 64(3):1 34.
Sun, D., R.K. Tsutakawa, H. Kim, and Z. He. 2000. Spatio-Temporal Interaction with Disease Mapping. Statistics in Medicine 19:2015 2035.
Transportation Research Board (TRB). 1987. Designing Safer Roads: Practices for Resurfacing, Restoration, and Rehabilitation, Special Report 214. Washington, DC: National Research Council.
U.S. Department of Transportation (USDOT), Bureau of Transportation Statistics (BTS). 1996 1999. Transportation Statistics Annual Report. Washington, DC.
____. 2002. Safety in Numbers Conferences. Available at http://www.bts.gov/sdi/conferences, as of August 2003.
Xia, H., B.P. Carlin, and L.A. Waller. 1997. Hierarchical Models for Mapping Ohio Lung Cancer Rates. Environmetrics 8:107 120.
Yang, M., J. Rasbash, H. Goldstein, and M. Barbosa. 1999. MLwiN Macros for Advanced Multilevel Modelling, Version 2.0: Multilevel Models Project. Institute of Education, University of London.
Available at http://www.ioe.ac.uk/multilevel/, as of December 1999.
Zhu, L. and B.P. Carlin. 1999. Comparing Hierarchical Models for Spatio-Temporally Misaligned Data Using the DIC Criterion, Research Report 99-006. Division of Biostatistics, University of Minnesota.
Address for Correspondence and End Notes
Author Addresses: Corresponding author: Shaw-Pin Miaou, Research Scientist, Texas Transportation Institute, Texas A&M University System, 3135 TAMU, College Station, TX 77843-3135. Email:
Joon Jin Song, Research Assistant, Department of Statistics, Texas A&M University, 3143 TAMU, College Station, TX 77843-3143. Email: j-song@ttimail.tamu.edu.
Bani K. Mallick, Professor, Department of Statistics, Texas A&M University, 3143 TAMU, College Station, TX 77843-3143. Email: bmallick@stat.tamu.edu.
KEYWORDS: Bayes models, risk, space-time models, traffic safety.
^1.Bayesian Data Analysis (Gelman et al. 1995) provides a Bayesian interpretation of fixed and random effects.
|
{"url":"http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/journal_of_transportation_and_statistics/volume_06_number_01/html/paper_03/index.html","timestamp":"2014-04-20T06:08:49Z","content_type":null,"content_length":"105165","record_id":"<urn:uuid:93ff3f3d-dc58-4cc0-8d5c-dc3ea363fcdc>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Air Resistance & Terminal Velocity
Air Resistance & Terminal Velocity
(AP Level)
The purpose of this page is to take a more mathematical look at air (fluid) resistance (also called drag or the drag force) and terminal velocity. Previously, we saw that the air resistance force on
an object depends primarily on
• the relative velocity of the object and the fluid
• the shape of the object
• the density of the fluid
• other properties of the object, such as surface texture, as well as other properties of the fluid, such as viscosity
The Air Resistance (Drag) Force
To keep our discussion under control, we will restrict our discussion to air resistance forces proportional to v and v^2.
Many sources attempt to treat all of the non-velocity influences on the drag force separately. So, we have a constant/variable that represents the influence of the shape of the object on the air
resistance (drag) force, a constant/variable that represents effect of the density of the fluid on the air resistance force, etc. This results in a very large, very complicated looking expression for
the air resistance force. If your life isn't complicated enough, I recommend that you switch to one of these treatments. My choice (and the choice of many others, too - it's not my idea) is to lump
all of these other factors into one constant - let's call it "b". So, the shape of the object influences the value of "b", the density of the fluid influences "b", and so on.
This means that we can write the air resistance force (or drag force) as f[drag] = bv for very small, slow objects, or f[drag] = bv^2 for "human-size objects, depending on the situation.
Terminal Velocity
bv[term] = mg
where v[term] is the terminal velocity. This means that:
In practice, it is easier (and more precise) to measure or estimate the terminal velocity of an object than to calculate the coefficient b. So this expression may be more useful in practice written
If f[drag] = bv^2, the expression for terminal velocity becomes:
Example 1:
What are the dimensions of "b" in each expression for f[drag]?
(a) If f[drag] = bv[term], then
(b) If f[drag] = bv^2, then
Example 2:
A tiny particle of mass 4 x 10^-4 kg (so f[drag] = bv) has a drag coefficient, b = 3.3 x 10^-2 kg/s. What is this particle's terminal velocity?
At terminal velocity,
bv[term] = mg
Example 3:
A skydiver of mass 50 kg (f[drag] = bv^2) has a terminal velocity of 60 m/s. What is the drag coefficient, b, for this skydiver?
At terminal velocity,
last update January 25, 2008 by JL Stanbrough
|
{"url":"http://www.batesville.k12.in.us/Physics/APPhyNet/Dynamics/Newton's%20Laws/air_resistance/air_resistance_ap.htm","timestamp":"2014-04-18T23:16:06Z","content_type":null,"content_length":"7982","record_id":"<urn:uuid:ca19f1ba-96e7-4d40-acc7-b9887d47cf50>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A PAC-Bayesian Margin Bound for Linear Classifiers
A PAC-Bayesian Margin Bound for Linear Classifiers (2002)
Download Links
Other Repositories/Bibliography
by Ralf Herbrich , Thore Graepel
author = {Ralf Herbrich and Thore Graepel},
title = {A PAC-Bayesian Margin Bound for Linear Classifiers},
year = {2002}
We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training sample. The result is obtained in a PAC-Bayesian framework and is based on
geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the so far tightest margin bound, which was developed in the luckiness framework, and
scales logarithmically in the inverse margin. Even in the case of less training examples than input dimensions sufficiently large margins lead to non-trivial bound values and---for maximum
margins---to a vanishing complexity term. In contrast to previous results, however, the new bound does depend on the dimensionality of feature space. The analysis shows that the classical margin is
too coarse a measure for the essential quantity that controls the generalisation error: the fraction of hypothesis space consistent with the training sample. The practical relevance of the result
lies in the fact that the well-known support vector machine is optimal with respect to the new bound only if the feature vectors in the training sample are all of the same length. As a consequence we
recommend to use SVMs on normalised feature vectors only. Numerical simulations support this recommendation and demonstrate that the new error bound can be used for the purpose of model selection.
8946 Statistical Learning Theory - Vapnik - 1998
2027 Learning with Kernels - Schölkopf, Smola - 2001
1694 A Theory of the Learnable - Valiant - 1984
991 A Probabilistic Theory of Pattern Recognition - Devroye, Gyorfi, et al. - 1996
946 On the uniform convergence of relative frequencies of events to their probabilities - Vapnik, Chervonenkis - 1968
803 Estimation of Dependencies Based on Empirical Data - Vapnik - 1979
721 Boosting the margin: A new explanation for the effectiveness of voting methods - Schapire, Freund, et al.
258 M.: Structural risk minimization over data-dependent hierarchies - Shawe-Taylor, Bartlett, et al. - 1926
238 On the density of families of sets - Sauer - 1972
208 Scalesensitive dimensions, uniform convergence, and learnability - Alon, Ben-David, et al. - 1997
197 Efficient distribution-free learning of probabilistic concepts - Kearns, Schapire - 1994
177 The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network - Bartlett - 1998
106 A theory of learning and generalization - Vidyasagar - 1997
102 Some PAC-Bayesian theorems - McAllester - 1998
78 Learning kernel classifiers: theory and algorithms - Herbrich - 2002
75 Boosting the margin: A new explanation for the eectiveness of voting methods - Schapire, Freund, et al. - 1998
69 C.: Bayes point machines - Herbrich, Graepel, et al. - 2001
28 A PAC analysis of a Bayesian estimator - Shawe-Taylor, Williamson - 1997
25 Robust bounds on the generalization from the margin distribution - Shawe-Taylor, Cristianini - 1998
23 Generalization bounds via eigenvalues of the gram matrix - Schölkopf, Shawe-Taylor, et al. - 1999
17 Kernel-dependent support vector error bounds - Scholkopf, Shawe-Taylor, et al. - 1999
9 The kernel gibbs sampler - Graepel, Herbrich - 2000
7 The sample complexity of pattern classi�cation with neural networks: The size of the weights is more important than the size of the network - Bartlett - 1998
6 Cristianini N. Robust Bounds on Generalization from the Margin Distribution - Shawe-Taylor - 1998
6 A Theory of Learning in Artificial Neural Networks - Anthony, Bartlett - 1999
5 G•• Tables of Indefinite Integrals - Bois - 1961
1 Lszl Gyr, and Gbor Lugosi, A Probabilistic Theory of Pattern Recognition - Devroye - 1996
1 Tables of Indenite Integrals - Bois - 1961
1 A PAC analysis of a Bayesian estimator,” tech - Shawe-Taylor, Williamson - 1997
1 Gibbs estimators,” Tech. Rep. LMENS-98-21, École normale supérieure, Département de mathématiques et applications (DMA - Catoni - 1998
|
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.5046","timestamp":"2014-04-18T19:09:04Z","content_type":null,"content_length":"32079","record_id":"<urn:uuid:4806b2a9-9487-42f8-b7e6-3abfbe69ea08>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Continuous automorphism groups of normed vector spaces?
up vote 11 down vote favorite
Consider the metric space on, say, ℝ^2 induced by the various $L^p$ norms, and the group of isometries from that space into itself that preserve the origin. When $p=2$ I get the continuous group of
rotations, but when $p\in\{1,3,4,5,...\infty\}$ it looks like I just get $D_8$, the symmetry group of the square. Question: what's going on here? Why is 2 so special? Are there other natural norms on
ℝ^2 (or on ℝ^n) besides the euclidean one that give interesting isometry groups?
fa.functional-analysis banach-spaces convex-geometry lie-groups
Fair enough. I was wondering about whether there was something more exotic than pasting together norms I already knew about. – Jason Reed Feb 1 '10 at 18:28
add comment
3 Answers
active oldest votes
The following answer gives a partial description of the isometry groups of finite-dimensional normed spaces.
I assume that an isometry is a bijection preserving the distance function. By the Mazur-Ulam theorem it then follows that an isometry is a linear transformation composed with a translation.
Thus we may assume without loss of generality that an isometry fixes the origin, so the isometry group is a subgroup of $GL(n)$.
Then the isometry group of any (real) finite-dimensional normed space is conjugate in $GL(n)$ to a closed subgroup of $O(n)$ that contain $-id$. This is seen as follows.
Consider the John ellipsoid $E$ of the unit ball $B$ of some $n$-dimensional normed space. This is the ellipsoid of largest volume contained in $B$ and, crucially, it is the unique such
After some choice of basis we may assume that $E$ is the Euclidean ball. An isometry maps $B$ onto $B$, so it must map the John ellipsoid to the John ellipsoid. It follows that the isometry
group is a subgroup of $O(n)$ containing $-id$. This subgroup is clearly closed, hence compact.
up vote 9
down vote The converse is surely false. The following is an attempt at constructing a norm from such a subgroup. Fix a Euclidean unit vector $v$. Then its $Gv$ is a compact set of Euclidean unit
vectors, symmetric with respect to the origin. Its convex hull $Gv$ is still compact and symmetric, so gives a unit ball $B_0$ of some norm on the linear span of $Gv$. If this linear span
is not all of $\mathbb{R}^n$, then the unit ball has to be made full-dimensional in a sufficiently rough way, so as not to add any more isometries.
However, as pointed out by Leonid Kovalev in the comments, there are closed subgroups of $O(n)$, such as $U(n)$, where this construction gives a norm with a strictly larger isometry group
(in the case of $U(n)$, the Euclidean norm).
As pointed out by Bill Johnson in a comment to his answer, it was shown by Gordon and Loewy that any $finite$ subgroup of $O(n)$ that contains $-id$ is the isometry group of some norm on $\
mathbb{R}^n$. It's still my guess that the only way you can get infinite isometry groups (in the finite-dimensional case) is by having Euclidean subspaces, and for the norm to be so
symmetric that it shares all the symmetries of this subspace.
There is no problem making the unit ball full dimensional, since you can include the orbit under $G$ of the unit vector basis. This is no loss of generality by Auerbach's lemma. Also,
this construction, if it works would give a unit ball that is inside the Euclidean ball. The Euclidean ball would be the ellipsoid of minimal volume containing the unit ball (i.e., the
polar of the John ellipsoid). – Bill Johnson Feb 1 '10 at 15:54
Why was this answer accepted? Konrad suggested an approach but did not give an answer. – Bill Johnson Feb 1 '10 at 16:53
Sorry, I may have misunderstood the norms of what "accepting" an answer is supposed to mean. – Jason Reed Feb 1 '10 at 18:29
Could somebody give a good description of the closed subgroups of $O(n)$? Perhaps this deserves to be a new question. In the two-dimensional case, the group is either finite (and
dihedral) or the whole $O(2)$. For general $n$, perhaps there is an orthogonal decomposition of the space such that the orbit of a unit vector in each component is either finite or the
whole unit sphere. – Konrad Swanepoel Feb 1 '10 at 19:00
add comment
I think what groups can be the isometry group of a finite dimensional normed space are classified, maybe by Y. Gordon and/or D.R. Lewis. I don't have access to emath from home but will
check the reference tomorrow if no one has answered by then.
up vote 8
down vote BTW: Banach-spaces would be a more appropriate tag IMO.
1 I see Leonid added tags, which is something I don't know how to do. – Bill Johnson Jan 31 '10 at 23:29
2 It seems to me that someone like Bill Johnson should be given the 500 reputation points automatically. – Deane Yang Jan 31 '10 at 23:46
3 Thanks, Leonid (and thanks for the vote of confidence, Deane). One other comment: Any group which is the group of isometries for some n dimensional normed space $X$ must be a
(necessarily compact) subgroup of the orthogonal group because isometries of it preserve the ellipsoid of maximal volume inside the unit ball of $X$. – Bill Johnson Feb 1 '10 at 0:22
@Bill: I missed your comment, which is essentially my answer. – Konrad Swanepoel Feb 1 '10 at 0:33
Gordon and Loewy in Math. Annalen 241, 159-180 (1979) consider the question: If $G$ is a group of linear operators on $R^n$ which contains $I$ and $-I$, is it the group of isometries
of some norm on $R^n$? Among other results, they prove that the answer is yes if $G$ is finite. – Bill Johnson Feb 2 '10 at 22:17
show 1 more comment
Consider the following norm on $\mathbb{R}^{2}$: $||(x,y)||$ := $|x|+|y|$ if $xy\leq0$; $||(x,y)||$ := $|y|$ if $xy\geq0$ and $|y|$ $\geq3|x|$; $||(x,y)||$ := $|x|+\frac{2}{3}|y|$ if
up vote 3 down $xy$ $>0$ and $|y|$ $\leq3|x|$. Then the group of isometries is { $\pm I\ $}.
Bill Davis proved in the 1970s that any (I think separable) Banach space can be equivalently renormed so that the only isometries are $\pm I$. – Bill Johnson Feb 5 '10 at 3:21
Obviously I should not have relied on my memory. Thanks for the correction, Leonid. – Bill Johnson Feb 9 '10 at 21:35
1 This is true for all [real] Banach spaces (separable or not), due to K.Jarosz siue.edu/MATH/kj_papers/AnyBanach.pdf . – Ady Feb 10 '10 at 22:09
add comment
Not the answer you're looking for? Browse other questions tagged fa.functional-analysis banach-spaces convex-geometry lie-groups or ask your own question.
|
{"url":"http://mathoverflow.net/questions/13596/continuous-automorphism-groups-of-normed-vector-spaces?sort=newest","timestamp":"2014-04-21T09:59:42Z","content_type":null,"content_length":"75471","record_id":"<urn:uuid:7dcb82cf-9519-43dd-b176-d5fd801a9f53>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A question about RPIM shapefunction
Submitted by 11hours on Thu, 2013-02-28 22:06.
I am solving a simple solid mechanics problem by meshless method. use RPIM to calculate shape function.
when i calculate the derivatives of shape function, i found the derivative is not close to zero at the compute point, when the point is on the boundry of problem domain.
My question is, should the derivative be zero if the node is on the boundry?
Submitted by
on Thu, 2013-02-28 22:51.
fig.1 derivatives of shape function respect to x when the point for computing is in domain.
the derivative at the comput point is very close to zero.
Fig.2 derivatives of shape function respect to Y when the point for computing is on the lower boundry of domain.
the derivative at the comput point is not close to zero.
also, the derivative at the comput point is much bigger than zero.
You are right, if the node is inside the domain then the shape function
derivative is zero at the node but if the node is on the boundary then
the derivative of the shape function is not zero at the node. If you
plot the shape function and look at its slope you will also see this and
the reason is that the shape function is symmetric about a node when it
is inside the domain but this is not true for a node on the boundary.
Submitted by
on Fri, 2013-03-01 22:29.
Thank you zahoorswati!
I see your point. The shape function derivative is discontinuous on the normal direction of boundary.
My compute result shows that, I can get the right displacement and stress inside the domain, but can't get them on the boundary. So I thought may be the shape function has some problem.
I will check on my code.
Submitted by
on Fri, 2013-03-01 22:29.
why can't I submit reply.
Recent comments
|
{"url":"http://imechanica.org/node/14271","timestamp":"2014-04-24T12:26:43Z","content_type":null,"content_length":"25924","record_id":"<urn:uuid:e5479604-256f-44d2-a60d-9aa9dbd7d0c9>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(Download) ICSE: Class XII Syllabus - 2013 "Physics"
ICSE (Class XII)
Syllabus (2013)
Subject: Physics
There will be two papers in the subject.
Paper I: Theory - 3 hour ... 70 marks
Paper II: Practical - 3 hours ... 20 marks
Project Work ... 7 marks
Practical File ... 3 marks
PAPER I -THEORY- 70 Marks
Paper I shall be of 3 hours duration and be divided into two parts.
Part I (20 marks): This part will consist of compulsory short answer questions, testing knowledge, application and skills relating to elementary/fundamental aspects of the entire syllabus.
Part II (50 marks): This part will be divided into three Sections A, B and C. There shall be three questions in Section A (each carrying 9 marks) and candidates are required to answer two questions
from this Section.
There shall be three questions in Section B (each carrying 8 marks) and candidates are required to answer two questions from this Section. There shall be three questions in Section C (each carrying 8
marks) and candidates are required to answer two questions from this Section. Therefore, candidates are expected to answer six questions in Part II.
Note: Unless otherwise specified, only S. I. units are to be used while teaching and learning, as well as for answering questions.
1. Electrostatics
(i) Coulomb's law, S.I. unit of charge; permittivity of free space.
(ii) Concept of electric field E = F/qo; Gauss' theorem and its applications.
(iii) Electric dipole; electric field at a point on the axis and perpendicular bisector of a dipole; electric dipole moment; torque on a dipole in a uniform electric field.
(iv) Electric lines of force.
(v) Electric potential and potential energy; potential due to a point charge and due to a dipole; potential energy of an electric dipole in an electric field. Van de Graff generator.
(vi) Capacitance of a conductor C = Q/V, the farad; capacitance of a parallel-plate capacitor;
(vii) Dielectrics (elementary ideas only); permittivity and relative permittivity of a dielectric (∈r = ∈/∈o). Effects on pd, charge and capacitance
2. Current Electricity.
(i) Steady currents; sources of current, simple cells, secondary cells.
(ii) Potential difference as the power supplied divided by the current; Ohm's law and its limitations; Combinations of resistors in series and parallel; Electric energy and power.
(iii) Mechanism of flow of current in metals, drift velocity of charges. Resistance and resistivity and their relation to drift velocity of electrons; description of resistivity and conductivity
based on electron theory; effect of temperature on resistance, colour coding of resistance.
(iv) Electromotive force in a cell; internal resistance and back emf. Combination of cells in series and parallel.
(v) Kirchoff's laws and their simple applications to circuits with resistors and sources of emf; Wheatstone bridge, metre -bridge and potentiometer; use for comparison of emf and determination of
internal resistance of sources of current; use of resistors (shunts and multipliers) in ammeters and voltmeters.
(vi) Electrical Power
(vii) Thermoelectricity; Seebeck effect; measurement of thermo emf; its variation with temperature. Peltier effect.
3. Magnetism
(i) Magnetic field B, definition from magnetic force on a moving charge; magnetic field lines. Superposition of magnetic fields; magnetic field and magnetic flux density; the earth's magnetic field;
Magnetic field of a magnetic dipole; tangent law.
(ii) Properties of dia, para and ferromagnetic substances; susceptibility and relative permeability
4. Electromagnetism
(i) Oersted's experiment; Biot-Savart law, the tesla; magnetic field near a long straight wire, at the centre of a circular loop, and at a point on the axis of a circular coil carrying current and a
solenoid. Amperes circuital law and its application to obtain magnetic field due to a long straight wire; tangent galvanometer.
(ii) Force on a moving charge in a magnetic field; force on a current carrying conductor kept in a magnetic field; force between two parallel current carrying wires; definition of the ampere based on
the force between two current carrying wires. Cyclotron (simple idea).
(iii) A current loop as a magnetic dipole; magnetic dipole moment; torque on a current loop; moving coil galvanometer.
(iv) Electromagnetic induction, magnetic flux and induced emf; Faraday's law and Lenz's law; transformers; eddy currents.
(v) Mutual and self inductance: the henry. Growth and decay of current in LR circuit (dc) (graphical approach), time constant.
(vi) Simple a.c. generators. Principle, description, theory and use
(v) Comparison of a.c. with d.c. Variation in current and voltage with time for a.c. and d.c.
5. Alternating Current Circuits
(i) Change of voltage and current with time, the phase difference; peak and rms values of voltage and current; their relation in sinusoidal case.
(ii) Variation of voltage and current in a.c. circuits consisting of only resistors, only inductors and only capacitors (phasor representation), phase lag and phase lead.
(iii) The LCR series circuit: phasor diagram, expression for V or I; phase lag/lead; impedance of a series LCR circuit (arrived at by phasor diagram); Special cases for RL and RC circuits.
6. Wave Optics
(i) Complete electromagnetic spectrum from radio waves to gamma rays; transverse nature of electromagnetic waves, Huygen's principle; laws of reflection and refraction from Huygen's principle. Speed
of light.
(ii) Conditions for interference of light, interference of monochromatic light by double slit; measurement of wave length. Fresnel’s biprism.
(iii) Single slit Fraunhofer diffraction (elementary explanation).
(iv) Plane polarised electromagnetic wave (elementary idea), polarisation of light by reflection. Brewster's law; polaroids.
7. Ray Optics and Optical Instruments
(i) Refraction of light at a plane interface (Snell's law); total internal reflection and critical angle; total reflecting prisms and optical fibres.
(ii) Refraction through a prism, minimum deviation and derivation of relation between n, A and δmin.
(iii) Refraction at a single spherical surface (relation between n1, n2, u, v and R); refraction through thin lens (lens maker's formula and formula relating u, v, f, n, R1 and R2); combined focal
length of two thin lenses in contact. Combination of lenses and mirrors [Silvering of lens excluded].
(vi) Simple astronomical telescope (refracting and reflecting), magnifying power and resolving power of a simple astronomical telescope.
(vii) Human Eye, Defects of vision and their correction.
8. Electrons and Photons
(i) Cathode rays: measurement of e/m for electrons. Millikan’s oil drop experiment.
(ii) Photo electric effect, quantization of radiation; Einstein's equation; threshold frequency; work function; energy and momentum of photon. Determination of Planck’s Constant.
(iii) Wave particle duality, De Broglie equation, phenomenon of electron diffraction (informative only).
9. Atoms
(i) Charge and size of nuclei (α-particle scattering); atomic structure; Bohr's postulates, Bohr's quantization condition; radii of Bohr orbits for hydrogen atom; energy of the hydrogen atom in the
nth state; line spectra of hydrogen and calculation of E and f for different lines.
(ii) Production of X-rays; maximum frequency for a given tube potential. Characteristic and continuous X -rays. Moseley’s law.
10. Nuclei
(i) Atomic masses; unified atomic mass unit u and its value in MeV; the neutron; composition and size of nucleus; mass defect and binding energy.
(ii) Radioactivity: nature and radioactive decay law, half-life, mean life and decay constant. Nuclear reactions.
11. Nuclear Energy
(i) Energy - mass equivalence.
(ii) Nuclear fission; chain reaction; principle of operation of a nuclear reactor.
(iii) Nuclear fusion; thermonuclear fusion as the source of the sun's energy.
12. Semiconductor Devices
(i) Energy bands in solids; energy band diagrams for distinction between conductors, insulators and semi-conductors - intrinsic and extrinsic; electrons and holes in semiconductors.
(ii) Junction diode; depletion region; forward and reverse biasing current - voltage characteristics; pn diode as a half wave and a full wave rectifier; solar cell, LED and photodiode. Zener diode
and voltage regulation.
(iii) The junction transistor; npn and pnp transistors; current gain in a transistor; transistor (common emitter) amplifier (only circuit diagram and qualitative treatment) and oscillator.
(iv) Elementary idea of discreet and integrated circuits, analogue and digital circuits. Logic gates (symbols; working with truth tables; applications and uses) - NOT, OR, AND, NOR, NAND.
PRACTICAL WORK- 20 Marks
The experiments for laboratory work and practical examinations are mostly from two groups;
(i) experiments based on ray optics and
(ii) experiments based on current electricity. The main skill required in group
(i) is to remove parallax between a needle and the real image of another needle. In group
Courtesy: cisce.org
Work With CBSEPORTAL : K-12 Content Developers, Freelancers Required
|
{"url":"http://cbseportal.com/exam/cisce/class-12/syllabus/physics","timestamp":"2014-04-17T06:41:08Z","content_type":null,"content_length":"33701","record_id":"<urn:uuid:b485fdaf-d9a9-4dcb-9611-fb7bee492699>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Merchantville Prealgebra Tutor
...I studied French in high school and I am now studying both Greek and Latin. In addition, I've spent time as an SAT tutor where I taught students vocabulary and tips for memorizing vocabulary.
While studying Latin, I've developed a greater understanding of English grammar.
10 Subjects: including prealgebra, algebra 1, vocabulary, grammar
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because
this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including prealgebra, physics, calculus, geometry
...Student success determined how many sessions were needed, and student feedback was an integral part of the program. An important part of the development and implementation of the program was
not only ensuring that students' needs were met, but that relationships were built between the students a...
19 Subjects: including prealgebra, reading, algebra 1, geometry
I have over 20 years of experience teaching a wide variety of subjects to all age groups. I started teaching when I was a freshman in college to graduate students, then I went on to teaching
Algebra to middle school kids. But one of my favorite subjects to tutor are 4th and 5th grade Elementary.
19 Subjects: including prealgebra, reading, Spanish, geometry
...I am a patient and knowledgeable tutor for high-school and college level students in math and science. Each student starts out in a different place and has unique needs and strengths. Science
and math courses can be overwhelming and I will help you build the knowledge base to make them manageable.
10 Subjects: including prealgebra, chemistry, physics, algebra 1
|
{"url":"http://www.purplemath.com/Merchantville_prealgebra_tutors.php","timestamp":"2014-04-19T20:17:09Z","content_type":null,"content_length":"24357","record_id":"<urn:uuid:8db4e3fb-fcf2-4218-a9a6-072e133188ee>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
reactive-0.11: Push-pull functional reactive programming Source code Contents Index
FRP.Reactive.Reactive Stability experimental
Maintainer conal@conal.net
Simple reactive values. Adds some extra functionality on top of FRP.Reactive.PrimReactive
module FRP.Reactive.PrimReactive
type ImpBounds t = Improving (AddBounds t)
exactNB :: ImpBounds t -> t
type TimeT = Double
type ITime = ImpBounds TimeT
type Future = FutureG ITime
traceF :: Functor f => (a -> String) -> f a -> f a
type Event = EventG ITime
withTimeE :: Ord t => EventG (ImpBounds t) d -> EventG (ImpBounds t) (d, t)
withTimeE_ :: Ord t => EventG (ImpBounds t) d -> EventG (ImpBounds t) t
atTime :: TimeT -> Event ()
atTimes :: [TimeT] -> Event ()
listE :: [(TimeT, a)] -> Event a
zipE :: (Ord t, Bounded t) => (c, d) -> (EventG t c, EventG t d) -> EventG t (c, d)
scanlE :: (Ord t, Bounded t) => (a -> b -> a) -> a -> EventG t b -> EventG t a
monoidE :: (Ord t, Bounded t, Monoid o) => EventG t o -> EventG t o
firstRestE :: (Ord t, Bounded t) => EventG t a -> (a, EventG t a)
firstE :: (Ord t, Bounded t) => EventG t a -> a
restE :: (Ord t, Bounded t) => EventG t a -> EventG t a
remainderR :: (Ord t, Bounded t) => EventG t a -> ReactiveG t (EventG t a)
snapRemainderE :: (Ord t, Bounded t) => EventG t b -> EventG t a -> EventG t (a, EventG t b)
onceRestE :: (Ord t, Bounded t) => EventG t a -> EventG t (a, EventG t a)
withPrevE :: (Ord t, Bounded t) => EventG t a -> EventG t (a, a)
withPrevEWith :: (Ord t, Bounded t) => (a -> a -> b) -> EventG t a -> EventG t b
withNextE :: (Ord t, Bounded t) => EventG t a -> EventG t (a, a)
withNextEWith :: (Ord t, Bounded t) => (a -> a -> b) -> EventG t a -> EventG t b
mealy :: (Ord t, Bounded t) => s -> (s -> s) -> EventG t b -> EventG t (b, s)
mealy_ :: (Ord t, Bounded t) => s -> (s -> s) -> EventG t b -> EventG t s
countE :: (Ord t, Bounded t, Num n) => EventG t b -> EventG t (b, n)
countE_ :: (Ord t, Bounded t, Num n) => EventG t b -> EventG t n
diffE :: (Ord t, Bounded t, AffineSpace a) => EventG t a -> EventG t (Diff a)
type Reactive = ReactiveG ITime
snapshot_ :: (Ord t, Bounded t) => ReactiveG t b -> EventG t a -> EventG t b
snapshot :: (Ord t, Bounded t) => ReactiveG t b -> EventG t a -> EventG t (a, b)
whenE :: (Ord t, Bounded t) => EventG t a -> ReactiveG t Bool -> EventG t a
scanlR :: (Ord t, Bounded t) => (a -> b -> a) -> a -> EventG t b -> ReactiveG t a
monoidR :: (Ord t, Bounded t, Monoid a) => EventG t a -> ReactiveG t a
eitherE :: (Ord t, Bounded t) => EventG t a -> EventG t b -> EventG t (Either a b)
maybeR :: (Ord t, Bounded t) => EventG t a -> EventG t b -> ReactiveG t (Maybe a)
flipFlop :: (Ord t, Bounded t) => EventG t a -> EventG t b -> ReactiveG t Bool
countR :: (Ord t, Bounded t, Num n) => EventG t a -> ReactiveG t n
splitE :: (Ord t, Bounded t) => EventG t b -> EventG t a -> EventG t (a, EventG t b)
switchE :: (Ord t, Bounded t) => EventG t (EventG t a) -> EventG t a
integral :: forall v t. (VectorSpace v, AffineSpace t, Scalar v ~ Diff t) => t -> Event t -> Reactive v -> Reactive v
sumR :: (Ord t, Bounded t) => AdditiveGroup v => EventG t v -> ReactiveG t v
exact :: Improving a -> a
batch :: TestBatch
module FRP.Reactive.PrimReactive
type ImpBounds t = Improving (AddBounds t) Source
exactNB :: ImpBounds t -> t Source
Exact & finite content of an ImpBounds
type TimeT = Double Source
The type of time values with additional min & max elements.
type ITime = ImpBounds TimeT Source
Improving times, as used for time values in Event, Reactive, and ReactiveB.
type Future = FutureG ITime Source
Type of future values. Specializes FutureG.
traceF :: Functor f => (a -> String) -> f a -> f a Source
Trace the elements of a functor type.
type Event = EventG ITime Source
Events, specialized to improving doubles for time
withTimeE :: Ord t => EventG (ImpBounds t) d -> EventG (ImpBounds t) (d, t) Source
Access occurrence times in an event. See withTimeGE for more general notions of time.
withTimeE :: Event a -> Event (a, TimeT)
withTimeE_ :: Ord t => EventG (ImpBounds t) d -> EventG (ImpBounds t) t Source
Access occurrence times in an event. Discard the rest. See also withTimeE.
withTimeE_ :: Event a -> Event TimeT
atTime :: TimeT -> Event () Source
Single-occurrence event at given time. See atTimes and atTimeG.
atTimes :: [TimeT] -> Event () Source
Event occuring at given times. See also atTime and atTimeG.
listE :: [(TimeT, a)] -> Event a Source
Convert a temporally monotonic list of timed values to an event. See also the generalization listEG
zipE :: (Ord t, Bounded t) => (c, d) -> (EventG t c, EventG t d) -> EventG t (c, d) Source
Generate a pair-valued event, given a pair of initial values and a pair of events. See also pair on Reactive. Not quite a zip, because of the initial pair required.
scanlE :: (Ord t, Bounded t) => (a -> b -> a) -> a -> EventG t b -> EventG t a Source
Like scanl for events.
monoidE :: (Ord t, Bounded t, Monoid o) => EventG t o -> EventG t o Source
Accumulate values from a monoid-typed event. Specialization of scanlE, using mappend and mempty.
firstRestE :: (Ord t, Bounded t) => EventG t a -> (a, EventG t a) Source
Decompose an event into its first occurrence value and a remainder event. See also firstE and restE.
firstE :: (Ord t, Bounded t) => EventG t a -> a Source
Extract the first occurrence value of an event. See also firstRestE and restE.
restE :: (Ord t, Bounded t) => EventG t a -> EventG t a Source
Extract the remainder an event, after its first occurrence. See also firstRestE and firstE.
remainderR :: (Ord t, Bounded t) => EventG t a -> ReactiveG t (EventG t a) Source
Remaining part of an event. See also withRestE.
snapRemainderE :: (Ord t, Bounded t) => EventG t b -> EventG t a -> EventG t (a, EventG t b) Source
Tack remainders a second event onto values of a first event. Occurs when the first event occurs.
onceRestE :: (Ord t, Bounded t) => EventG t a -> EventG t (a, EventG t a) Source
Convert an event into a single-occurrence event, whose occurrence contains the remainder.
withPrevE :: (Ord t, Bounded t) => EventG t a -> EventG t (a, a) Source
Pair each event value with the previous one. The second result is the old one. Nothing will come out for the first occurrence of e, but if you have an initial value a, you can do withPrevE (pure a
mappend e).
withPrevEWith :: (Ord t, Bounded t) => (a -> a -> b) -> EventG t a -> EventG t b Source
Same as withPrevE, but allow a function to combine the values. Provided for convenience.
withNextE :: (Ord t, Bounded t) => EventG t a -> EventG t (a, a) Source
Pair each event value with the next one one. The second result is the next one.
withNextEWith :: (Ord t, Bounded t) => (a -> a -> b) -> EventG t a -> EventG t b Source
Same as withNextE, but allow a function to combine the values. Provided for convenience.
mealy :: (Ord t, Bounded t) => s -> (s -> s) -> EventG t b -> EventG t (b, s) Source
Mealy-style state machine, given initial value and transition function. Carries along event data. See also mealy_.
mealy_ :: (Ord t, Bounded t) => s -> (s -> s) -> EventG t b -> EventG t s Source
Mealy-style state machine, given initial value and transition function. Forgetful version of mealy.
countE :: (Ord t, Bounded t, Num n) => EventG t b -> EventG t (b, n) Source
Count occurrences of an event, remembering the occurrence values. See also countE_.
countE_ :: (Ord t, Bounded t, Num n) => EventG t b -> EventG t n Source
Count occurrences of an event, forgetting the occurrence values. See also countE.
diffE :: (Ord t, Bounded t, AffineSpace a) => EventG t a -> EventG t (Diff a) Source
Difference of successive event occurrences. See withPrevE for a trick to supply an initial previous value.
Reactive values
type Reactive = ReactiveG ITime Source
Reactive values, specialized to improving doubles for time
snapshot_ :: (Ord t, Bounded t) => ReactiveG t b -> EventG t a -> EventG t b Source
Like snapshot but discarding event data (often a is '()').
snapshot :: (Ord t, Bounded t) => ReactiveG t b -> EventG t a -> EventG t (a, b) Source
Snapshot a reactive value whenever an event occurs.
whenE :: (Ord t, Bounded t) => EventG t a -> ReactiveG t Bool -> EventG t a Source
Filter an event according to whether a reactive boolean is true.
scanlR :: (Ord t, Bounded t) => (a -> b -> a) -> a -> EventG t b -> ReactiveG t a Source
Like scanl for reactive values. See also scanlE.
monoidR :: (Ord t, Bounded t, Monoid a) => EventG t a -> ReactiveG t a Source
Accumulate values from a monoid-valued event. Specialization of scanlE, using mappend and mempty. See also monoidE.
eitherE :: (Ord t, Bounded t) => EventG t a -> EventG t b -> EventG t (Either a b) Source
Combine two events into one.
maybeR :: (Ord t, Bounded t) => EventG t a -> EventG t b -> ReactiveG t (Maybe a) Source
Start out blank (Nothing), latching onto each new a, and blanking on each b. If you just want to latch and not blank, then use mempty for lose.
flipFlop :: (Ord t, Bounded t) => EventG t a -> EventG t b -> ReactiveG t Bool Source
Flip-flopping reactive value. Turns true when ea occurs and false when eb occurs.
countR :: (Ord t, Bounded t, Num n) => EventG t a -> ReactiveG t n Source
Count occurrences of an event. See also countE.
splitE :: (Ord t, Bounded t) => EventG t b -> EventG t a -> EventG t (a, EventG t b) Source
Partition an event into segments.
switchE :: (Ord t, Bounded t) => EventG t (EventG t a) -> EventG t a Source
Switch from one event to another, as they occur. (Doesn't merge, as join does.)
integral :: forall v t. (VectorSpace v, AffineSpace t, Scalar v ~ Diff t) => t -> Event t -> Reactive v -> Reactive v Source
Euler integral.
sumR :: (Ord t, Bounded t) => AdditiveGroup v => EventG t v -> ReactiveG t v Source
exact :: Improving a -> a Source
Produced by Haddock version 2.4.2
|
{"url":"http://hackage.haskell.org/package/reactive-0.11/docs/FRP-Reactive-Reactive.html","timestamp":"2014-04-16T10:30:57Z","content_type":null,"content_length":"67527","record_id":"<urn:uuid:187be77b-4bcb-4579-8692-7587d2c5a9ec>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ecological Archives
Ecological Archives A022-023-A2
Richard Bischof and Jon E. Swenson. 2012. Linking noninvasive genetic sampling and traditional monitoring to aid management of a transborder carnivore population. Ecological Applications 22:361–373.
Appendix B. Cross-validation.
With the validation data (see main text), we conducted tests to answer the following questions:
1. How does the distribution of the 95% kernel home range size estimate for the cross-validation data set (including only females that produced cubs) compare with that of the set of home ranges used
during simulations in the model?
Result: The distributions of the 95% kernel home range sizes of the cross-validation set (females with cubs, N = 13) and the simulation set (N = 35) show substantial overlap. No significant
difference between their means was detected (t = 1.49, df = 16.62, P = 0.155, Fig. B1).
2. How does the actual proportion of utilization distributions (UD) attributable to the southern bear region in Sweden (for reproducing females) compare with the model-simulated proportion for the
same region?
Result: The predicted proportion of annual UDs for cross-validation females that falls within the southern bear region in Sweden (81.3%) corresponded closely with the proportion predicted
through using the home range simulation submodel of the model (84.8%, Fig. B1).
3. How does the actual number of reproductions in the cross-validation dataset compare with the predicted number of reproductions using the logistic regression model described in the main document?
Result: The number of observed reproductions (11.6 reproductions for 91 bear-years) and the number of reproductions (13) predicted using the logistic regression model (applied to the known
age structure) were comparable (Fig. B1).
FIG. B1. Cross-validation results: left panel - a comparison of the distribution of log-transformed 95% kernel home range sizes of the cross-validation data set (red line, mean: red triangle) and the
home range sample used during simulations (black line, mean: black triangle); middle panel - the distribution of the model-predicted proportion of utilization distributions within a focal area
(southern bear region in Sweden) and the value of the actual proportion of utilization by cross-validation bears-years within the same area (red triangle); right panel – distribution of the
GLM-predicted number of reproductions (black line, mean: black triangle) and the number of actual reproductions during cross-validation bear-years (red triangle). 95% confidence limits for the
distributions are marked by dashed vertical lines.
[Back to A022-023]
|
{"url":"http://esapubs.org/archive/appl/A022/023/appendix-B.htm","timestamp":"2014-04-18T06:25:43Z","content_type":null,"content_length":"3832","record_id":"<urn:uuid:1b21e223-9f1f-4d58-8697-bbc7e0217ecf>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A property of periodic words
up vote 3 down vote favorite
Question is edited Perhaps this formulation is clearer.
It is well known that if a power of a primitive (i.e. not a proper power) word $u$ contains two different occurrences of a word $v$, $|v|>|u|$, then the occurrences are shifts of each other by a
multiple of $|u|$ (formally: $u^n\equiv pvq\equiv p'vq', |p|-|p'|\in |u|\mathbb{Z}$). What is the simplest proof of that fact? Is there a simple proof without using Fine-Wilf?
Added. A sketch of a proof. We can assume that $v$ starts with $u$, $v\equiv u^sw$ for $s\ge 1$. Then $v\equiv u^sw\equiv u_1u^sw_1$ such that $|u_1|>0$, $|u_1|+|w_1|=|w|$. Therefore $w\equiv w_2w_1$
for some $w_2$ whence $u^sw_2\equiv u_1u^s$. Then a standard fact from "combinatorics on words" gives that $u_1\equiv xy, w_2\equiv yx, u^s\equiv (xy)^mx$ for some words $x,y$ and some $m\ge 1$ (the
fact is: if $pq=qr$, then $p\equiv p_1p_2, q\equiv (p_1p_2)^zp_1, q\equiv p_2p_1$ for some $p_1,p_2,z$). Therefore $u_1u^s\equiv xy(xy)^mx$ is periodic with periods $u$ and $xy$. Its length is at
least $|xy|+|u|$. Therefore by Fine-Wilf it is periodic with period $t$ such that $|t|$ divides $|xy|=|u_1|$ and $|u|$. Hence $u$ is a power of a proper subword of $u$, a contradiction.
Is this really the right condition? 2 things seem wrong: (1) Do you mean $|p|-|p'|\not\in|u|\mathbb Z$? (2) If $v$ has the property in the question, so does any subword of $v$ that is longer than
$u$. Can't you just build a prime length $v$ like this, which then can't be a power of any word. – Anthony Quas Mar 22 '13 at 4:17
@Antony: (1) Yes, it is $|p|-|p'|$. Thanks! (2) I will add a sketch of a proof. – Mark Sapir Mar 22 '13 at 4:26
Sorry, but what is that $\equiv$ sign? Equality? – darij grinberg Mar 22 '13 at 4:47
@Darij: ≡ - letter-by-letter equality. In group theory, $"="$ usually means "freely equal", "$\equiv$" means "graphicaly equal". – Mark Sapir Mar 22 '13 at 4:57
add comment
1 Answer
active oldest votes
As you observe, we may assume that $p'$ is empty, i.e., that $v$ starts with a power of $u$. If $p$ is not a multiple of $u$, then writing $u$ as a circular word, it means you can find
two district places in the circle where you can read $u$. So $u=xy=yx$ for some $x,y$ (namely if $|p|$ is $r$ mod $|u|$ take $x$ to be the suffix of $p$ of length $r$ and $y$ the
up vote 4 down prefix of $v$ of length $|u|-r$). But then x,y are powers of a common element and hence u is not primitive.
vote accepted
@Ben: Thank you! – Mark Sapir Mar 22 '13 at 13:54
You're welcome. – Benjamin Steinberg Mar 22 '13 at 14:15
add comment
Not the answer you're looking for? Browse other questions tagged combinatorics-on-words or ask your own question.
|
{"url":"http://mathoverflow.net/questions/125231/a-property-of-periodic-words","timestamp":"2014-04-16T16:20:42Z","content_type":null,"content_length":"56962","record_id":"<urn:uuid:aa347f50-6476-46b3-9728-47380b35716a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - Predicate logic quesion
Predicate logic quesion
1. The problem statement, all variables and given/known data
Let p(n) and q(n) be predicates. For each pair of statements below, determine
whether the two statments are logically equivalent. Justify your answers.
(i) ∀n (p(n) ∧ q(n))
(ii) (∀n p(n)) ∧ (∀n q(n))
(i) ∃n st (p(n) ∧ q(n))
(ii) (∃n st p(n)) ∧ (∃n st q(n))
There are several questions in the same vein but these two are examples
2. Relevant equations
3. The attempt at a solution
I'm having a hard time wrapping my head around this problem. All the problems that I've worked on before are for individual values of n, I don't know how to go about proving or disproving for
questions like this. I know I can prove they are equivalent by showing i) <-> ii) but I can't even tell whether the statements are equivalent or not let alone writing the proof.
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
Re: Predicate logic quesion
Quote by GTL (Post 2945889)
1. The problem statement, all variables and given/known data
Let p(n) and q(n) be predicates. For each pair of statements below, determine
whether the two statments are logically equivalent. Justify your answers.
(i) ∀n (p(n) ∧ q(n))
(ii) (∀n p(n)) ∧ (∀n q(n))
(i) ∃n st (p(n) ∧ q(n))
(ii) (∃n st p(n)) ∧ (∃n st q(n))
There are several questions in the same vein but these two are examples
2. Relevant equations
What are the symbols that don't render correctly (the ones showing as boxes)?
Quote by GTL (Post 2945889)
3. The attempt at a solution
I'm having a hard time wrapping my head around this problem. All the problems that I've worked on before are for individual values of n, I don't know how to go about proving or disproving for
questions like this. I know I can prove they are equivalent by showing i) <-> ii) but I can't even tell whether the statements are equivalent or not let alone writing the proof.
Re: Predicate logic quesion
1 Attachment(s)
Quote by Mark44 (Post 2946121)
What are the symbols that don't render correctly (the ones showing as boxes)?
Symbols are not rendering properly? hmm they seem to be working on my system. All the symbols in part a are Universal Quantifiers (upside down 'A') and all the symbols in part b are Existential
Operators (backwards E). I have attached the screencap of the question if that makes it any clearer
Re: Predicate logic quesion
What methods can you use for justifications? Can you use informal reasoning, or should you use some kind of axiomatic proof?
In any case, do you have any intuition as which pairs are/aren't equivalent?
If you think a pair is NOT equivalent, then you can give a counter example.
If you think a pair IS equivalent, then you should be able to prove an implication in both directions.
But tell me your intuition first.
Cheers -- sylas
PS. Most computers should display the characters okay. They are Unicode characters 0x2200 (∀) and 0x2203 (∃). You need a font that includes them, and the capacity to manage unicode.
Re: Predicate logic quesion
Quote by sylas (Post 2946166)
What methods can you use for justifications? Can you use informal reasoning, or should you use some kind of axiomatic proof?
In any case, do you have any intuition as which pairs are/aren't equivalent?
If you think a pair is NOT equivalent, then you can give a counter example.
If you think a pair IS equivalent, then you should be able to prove an implication in both directions.
But tell me your intuition first.
Cheers -- sylas
PS. Most computers should display the characters okay. They are Unicode characters 0x2200 (∀) and 0x2203 (∃). You need a font that includes them, and the capacity to manage unicode.
I am reasonably confident that pair a) is equivalent. My reasoning is that if i) ∀n (p(n) ∧ q(n)) is true for certain values of n then it means that that same value of n makes p(n) true and q(n)
true, which would make the ii) true as well. If i) is false then it means that either p(n) or q(n) computed false for a certain value of n, this would make ii) false as well. Again, I don't know if
this reasoning is sound or not. This question had 2 more parts one involving
(universal quantifier used here for those who can't see the symbol)
(i) ∀n (p(n) V q(n))
(ii) (∀n p(n)) V (∀n q(n))
These two statements, I think are also equivalent based on similar reasoning to that of a). The questions with the existential operators also *seem* to be equivalent to me, for example for b) the
existential operator question if there is a number n such that i) is true then that would make ii) true as well and vice-verse as both p(n) and q(n) need to be true for the whole thing to be true
(similar logic can to be used if one of the statements computes false implieing either p(n) or q(n) is false and hence making both statements false) therefore logically equivalent. But I don't think
I'm right as I find it hard to believe that they'll give four examples of a certain kind of question in the problem set and have the answer to each one of them be: "they are logically equivalent"
without a single disproof. (part c is the exact same question as part b except with '^' replaced with 'V').
Just in the process of writing this post some ideas are starting to become clearer in my head and I can even see an outline of a proof in there but I'm just not confident in my reasoning to be sure
that my thought process is sound, esp since I think that each pair is logically equivalent which I find hard to believe in the context of the assignment; it leads me to conclude that there is some
grave error in my reasoning which could mean that everything I did with this question could be potentially wrong and I'm totally off.
(as for methods of justification, I don't even know any details about axioms beyond what they mean in the broadest sense lol. This is for an introductory Discrete Math course so I don't think the
proof needs to be too hardcore as long as it makes sense)
Re: Predicate logic quesion
Quote by GTL (Post 2946227)
I am reasonably confident that pair a) is equivalent. My reasoning is that if i) ∀n (p(n) ∧ q(n)) is true for certain values of n then it means that that same value of n makes p(n) true and q(n)
true, which would make the ii) true as well. If i) is false then it means that either p(n) or q(n) computed false for a certain value of n, this would make ii) false as well.
That would seem to be a good enough justification to me. The reasoning IS sound, even if not formalized.
However, your intuitions are leading astray in the other questions. For example, in this one, try to spell it out again, from scratch, and in BOTH directions.
(i) ∀n (p(n) V q(n))
(ii) (∀n p(n)) V (∀n q(n))
These two statements, I think are also equivalent based on similar reasoning to that of a).
Similar reasoning won't cut it. You need to give your reasons, standing by themselves and independently of what you said in (a). They are different formulae; the reasoning does not transpose that
The questions with the existential operators also *seem* to be equivalent to me, for example for b) the existential operator question if there is a number n such that i) is true then that would make
ii) true as well and vice-verse as both p(n) and q(n) need to be true for the whole thing to be true (similar logic can to be used if one of the statements computes false implieing either p(n) or q
(n) is false and hence making both statements false) therefore logically equivalent. But I don't think I'm right as I find it hard to believe that they'll give four examples of a certain kind of
question in the problem set and have the answer to each one of them be: "they are logically equivalent" without a single disproof. (part c is the exact same question as part b except with '^'
replaced with 'V').
Careful there. I'll won't tell you what's wrong here, but think it through carefully, first for going from (b i) to (b ii), and then think it through all over again for going from (b ii) to (b i).
(as for methods of justification, I don't even know any details about axioms beyond what they mean in the broadest sense lol. This is for an introductory Discrete Math course so I don't think the
proof needs to be too hardcore as long as it makes sense)
That seems fine. It's a really good idea to have the intuitions clear in your head, and then later on that can let you find axioms if you need to be more formal. For now... informal justification is
Try again... :-) sylas
All times are GMT -5. The time now is 12:52 AM.
Powered by vBulletin Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
© 2014 Physics Forums
|
{"url":"http://www.physicsforums.com/printthread.php?t=440436","timestamp":"2014-04-20T05:52:49Z","content_type":null,"content_length":"17820","record_id":"<urn:uuid:56f8aeae-0c00-4628-80d8-4b45d5464e6b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Change of Base Formula - Problem 1
The change of base formula for logarithms is an easy to us formula which allows us to evaluate logs other than base 10 or base e. So what it allows us to do is take a log of any base and convert it
into a quotient where the bases can be whatever we want them to be. Typically we choose base 10 or base e because that is what our calculator is but we could choose any base that we wanted to.
For this particular example we’re going to use the change of base formula in a slightly different way. Normally when we have logarithms we try to figure out what each term is. Try to figure out what
log base 3 of 64 is. And we do that by saying what power of 3 will give us 64? The problem is that we don’t know anything. The same thing with the log base 3 of 8 we don’t know what power of 3 is,
sorry we don’t know what power, 3 to what power gives us 8, get that clarified. What we can do is actually change our base formula backwards. So instead of going from a single logarithm to 2, we can
go to 2 back to 1.
Our logs are the same base, which tells that we are in this form and so all we need to do is write it as a single log. The log of the denominator, the 8 comes up to the base and the log of the
numerator stays as it is. What we’ve done is we’ve taken this quotient and rewritten it using the change of base formula as a single log. We now know what this is, log base 8 of 64, is saying 8 to
what power is 64, that’s just going to be equal to 2. The change of base formula doesn’t always have to be taken a single log and writing it as 2 we could also use it going to the other direction
taking two logarithms of the same base and condensing it down to one.
change of base log logarithm
|
{"url":"https://www.brightstorm.com/math/precalculus/exponential-and-logarithmic-functions/change-of-base-formula-problem-1/","timestamp":"2014-04-19T19:35:31Z","content_type":null,"content_length":"73532","record_id":"<urn:uuid:0747373d-de01-4fb6-8cc7-878190965a9b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
|