Summary: Fermat's Last Theorem: From Integers to Elliptic Curves
Manindra Agarwal
IIT Kanpur
December 2005
Manindra Agarwal (IIT Kanpur) Fermat's Last Theorem December 2005 1 / 30
Fermat's Last Theorem
There are no non-zero integer solutions of the equation x^n + y^n = z^n
when n > 2.
Towards the end of his life, Pierre de Fermat (1601-1665) wrote in the
margin of a book:
I have discovered a truly remarkable proof of this theorem, but this
margin is too small to write it down.
After more than 300 years, when the proof was finally written, it did take a
little more than a margin to write.
Math Forum Discussions
Topic: Godels theorems end in paradox
Replies: 4 Last Post: Feb 3, 2013 11:41 PM
fom — Re: Godels theorems end in paradox
Posted: Feb 3, 2013 11:09 PM

On 2/3/2013 11:31 AM, christian.bau wrote:
> On Feb 3, 3:09 pm, Aatu Koskensilta <aatu.koskensi...@uta.fi> wrote:
>> As an account of the first incompleteness theorem this is of course a
>> huge improvement over Australia's leading erotic poet's attempt, but
>> taken literally -- and when it comes to these matters we should strive
>> to say things that are, literally speaking, true and accurate -- it is
>> more or less nonsense nevertheless.
> Please explain. What I wrote is of course the starting point only, but
> it is completely accurate.
Make a simple generalization.... get criticized
Make every effort to be rigorous and precise.... get criticized
Aatu is usually forthright. Hope he gives you some
I don't know about "completely accurate," however. Even
the Wikipedia page is very careful about how they explain
the theorems.
A Beginner's Guide to Exchange Rates and the Foreign Exchange Market
[Part 1: Exchange Rates - What are they and how are they calculated?]
by Mike Moffatt
Like most other rates in economics, the exchange rate is essentially a price and can be analyzed in the same way we would a price. Take a typical supermarket price, say lemons are selling at the
price of 3 for a dollar or 33 cents each. Then we can think of the dollar-to-lemon exchange rate as being 3 lemons because if we give up one dollar, we can get three lemons in return. Similarly, the
lemon-to-dollar exchange rate is 1/3 of a dollar or 33 cents, because if you sell a lemon, you will get 33 cents in return.
So when we speak of an X-to-Y exchange rate of Z, this means that if we give up 1 unit of X, we get Z units of Y in return. If we want to know the Y-to-X exchange rate, we calculate it using the
simple exchange rate formula:
Y-to-X exchange rate = 1 / X-to-Y exchange rate
Of course, the exchange rates we read in the paper or hear on radio or TV are not prices for X and Y or for oranges and lemons. Instead they're relative prices for different currencies, but they work
in the same fashion. On February 26, 2003 the U.S.-to-Japan exchange rate was 117 yen, so this means that you can purchase 117 Japanese yen in exchange for 1 U.S. dollar. To figure out how many U.S.
dollars you can get for 1 Japanese yen, we can just use the formula:
Japan-to-U.S. exchange rate = 1 / U.S.-to-Japan exchange rate
Japan-to-U.S. exchange rate = 1 / 117 = .00854
So this tells us that one Japanese yen is worth .00854 U.S. dollars, which is less than a penny.
Similarly, if the Canadian dollar is worth .67 U.S. dollars, we have a Canada-to-U.S. exchange rate of .67. If we want to know how many Canadian dollars we can buy with 1 U.S. dollar, we use the formula:
U.S.-to-Canada exchange rate = 1/Canada-to-U.S. Exchange rate
U.S.-to-Canada exchange rate = 1/0.67 = 1.4925
So one U.S. dollar can get us $1.49 in Canadian funds.
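The reciprocal relationship can be checked with a short script (a minimal sketch in Python; the function name is ours, and the rates are the February 2003 figures quoted above):

```python
def inverse_rate(x_to_y: float) -> float:
    """Return the Y-to-X exchange rate given the X-to-Y rate."""
    return 1.0 / x_to_y

# February 2003 figures from the text:
usd_to_jpy = 117.0
jpy_to_usd = inverse_rate(usd_to_jpy)   # about 0.00854 dollars per yen

cad_to_usd = 0.67
usd_to_cad = inverse_rate(cad_to_usd)   # about 1.4925 Canadian dollars
```

Applying the function twice returns the original rate, which is exactly the reciprocal property described above.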
To see why these relationships must hold, we'll look at the wonderful world of arbitrage.
Hartsdale Math Tutor
Hi. I am a certified math teacher with 12 years teaching experience. Prior to teaching I was a financial analyst with a major corporation.
8 Subjects: including algebra 2, SAT math, algebra 1, prealgebra
...I worked for the tutoring center at SUNY Oswego while I was an undergraduate. I work with students of all ages and abilities. I prefer one-on-one tutoring and enjoy engaging my students with
fun and challenging math problems.
5 Subjects: including algebra 1, geometry, prealgebra, SAT math
...Finally, I believe relating the subject matter being taught to their everyday experience helps to keep students interested. I graduated from Columbia University in 1986, and earned a PhD. in
Sociology from Binghamton University in 1999. I have five years of teaching experience at the college level.
19 Subjects: including prealgebra, algebra 1, algebra 2, SAT math
While studying at LaGuardia Community College, I worked with the mathematics department, tutoring pre-algebra, algebra, and pre-calculus. My engineering degree has trained me to cover most National Curriculum subjects, and my subject specialisms are Design and Technology and English. I have been able to extend both my practical and theoretical knowledge.
11 Subjects: including geometry, prealgebra, precalculus, reading
I have a Ph.D. in Developmental and Cellular Biology with over 10 years of Postdoctoral Teaching and Research experience at top notch educational institutions. I am very hard-working, personable,
intelligent, and committed to your success. I am currently teaching science to elementary through coll...
12 Subjects: including algebra 1, precalculus, biology, SAT math
Scranton Preparatory School - The Jesuit Prep School of Northeast Pennsylvania
The primary objective of the mathematics program is to contribute to the total education of the student with course offerings that present a strong preparation, both in concepts and skills, for his
or her future needs. The following programs are subject to constant study and review and revisions are made when necessary.
This course introduces the student to the basic structure of mathematics through a thorough study of the real number system. An understanding of the concepts and mastery of necessary skills is
emphasized throughout. The need for precision and exactness in expression and thought is constantly stressed. Other topics covered are equations, inequalities, rational and irrational expressions.
In this course the aims begun in Algebra I are continued and carried out to a greater degree. This is accomplished through the study of triangles, quadrilaterals, polygons, circles, prisms, pyramids,
cylinders, cones and spheres. The students' power of spatial visualization is developed through the integration of space geometry with plane geometry throughout the course.
In the Analytical Geometry course, material is presented from the Vector and Cartesian viewpoints. This course includes a thorough treatment of vectors, lines and conic sections in a plane.
In the Elementary Functions course, the following functions are covered in detail: polynomial, logarithmic, and exponential. The basic concepts of calculus are presented and used in the study of
these functions.
This one semester course is primarily for students whose college courses will not be in math-oriented fields. Therefore, its goal is to give these students a basic understanding of probability and
statistics to prepare them for college courses such as economics, business, education and sociology.
These are college level courses which stress theory, mechanics and applications in differential and integral calculus. They prepare the student for future college math courses and applications in
related fields. They cover the material which satisfies the agreement with the University of
MATH III AND IV - AN ALTERNATE PROGRAM
This program is designed primarily for the student who, at the end of second year, is not strong in mathematics. The material of the regular Math III program - Algebra II and Trigonometry - is
extended over a three-semester period. A course in Probability and Statistics in the second semester of fourth year completes the program. This program gives the student the necessary college
preparatory mathematics should his future interest be in some mathematically oriented field.
There are two ways for students to qualify for the accelerated math program. The first is for students who have taken a course in Algebra I in eighth grade. Based on the results of a qualifying
examination administered at Prep in May, students are given the opportunity to begin their math program with Algebra II in the first year, thus enabling them to complete the pre-calculus program at
the end of their junior year.
Another group of students will qualify for this program based on their school record at the end of freshman year.
All students in the accelerated program will take Geometry in sophomore year followed by an integrated course in Algebra II, Trigonometry, and Analytic Geometry in junior year. These students will
then choose A.P. Calculus or Honors Calculus for senior year.
To remain in the accelerated program, students must maintain a sound academic record.
Spatial and Temporal Transferability of Trip Generation Demand
Models in Israel
ADRIAN V. COTRUS ^1
JOSEPH N. PRASHKER ^2
YORAM SHIFTAN ^3,*
This research investigates the transferability of person-level disaggregate trip generation models (TGMs) in time and space using two model specifications: multinomial linear regression and Tobit.
The models are estimated for the Tel Aviv and Haifa metropolitan areas based on data from the 1984 and 1996/97 Israeli National Travel Habits Surveys. The paper emphasizes that Tobit models perform
better than regression or discrete choice models in estimating nontravelers. Furthermore, the paper notes that variables and file structures in household surveys need to be consistent. Results of the
study show that the estimated regression and Tobit disaggregate person-level TGMs are statistically different in space and in time. Nevertheless, the aggregate forecasts produced by the transferred models were similar to those of the local models.
KEYWORDS: Trip generation, transferability, multinomial linear regression, Tobit model.
Trip generation models (TGMs) are used as a first step in classical four-step travel demand modeling and, therefore, any over- or underprediction of trip generation rates can cause errors throughout
the entire transportation planning process. Inappropriate decisionmaking due to these types of errors can account for premature investments in infrastructure in the case of overprediction and loss of
labor hours, pollution, and low levels of service in the case of underprediction.
TGMs are usually estimated based on periodic surveys of the travel habits of individuals or households. They are expensive and difficult to perform and are not conducted often. For our research, we
used the last Israeli National Travel Habits Surveys collected in 1984 and 1996/97. Transportation planners use models previously estimated and sometimes in different contexts. The planners can
perform forecasts for the same areas and, if justifiable, transfer the models to other areas. Hence, it is important to know whether these models can be transferred in time and in space.
Recently, researchers and planning agencies began to implement tour-based activity modeling systems rather than trip-based modeling systems.^1 The advantage in using an activity modeling approach is
the ability to model each individual's tours. However, in this paper, we were only able to investigate the stability of individual predictions of trips. This research presents the characteristics of
trip generation in Israel and tries to answer the question of whether linear regression and Tobit TGMs can be transferred in time and space, given the dynamic changes in metropolitan areas and
socioeconomic characteristics. The TGMs estimated and analyzed in this research include only vehicular trips.
We estimated the models for the geographically diverse metropolitan areas of Haifa and Tel Aviv in Israel and tested them for transferability in space and in time. The topography of Haifa is hilly,
with the core of the city poorly connected to the rest of the metropolitan area. Tel Aviv lies on level topography, with a well connected road network. The metropolitan areas also differ
structurally: Tel Aviv is interconnected like a spider web, including several minor cores with high-density population and employment concentrations. On the other hand, Haifa's less connected road
network and rolling terrain give it a lower level of accessibility. When comparing the areas by land use, the Tel Aviv metropolitan area consists of neighborhoods that combine residential shopping
and personal business areas. Haifa, on the other hand, contains highly separated areas with each area consisting of a uniform land use.
Given the difference in accessibility and land use, the calibrated models were restricted to demographic and socioeconomic variables. The average number of daily trips per person was higher in the
Haifa metropolitan area (2.14 in 1984, and 2.03 in 1996/97) than in the Tel Aviv metropolitan area (1.83 in 1984, and 1.91 in 1996/97). The difference in the average trip rates may be explained based
on the variation in land uses. The lack of mixed land uses in Haifa may encourage the generation of more trips. Furthermore, Haifa's hilly topography may encourage more vehicular trips than in Tel
Aviv, where shorter trips are probably done on foot. The comparison between these metropolitan areas and the calibrated models is possible due to the similarity of the distribution of most
demographic and socioeconomic variables. (See table 1 for a partial presentation of the comparison.) A more detailed comparison of Haifa and Tel Aviv characteristics is presented in Cotrus (2001).
Over the last few decades, several papers have discussed the transferability of trip generation models. The debate among researchers, in general, focused not only on the transferability of models in
space and time but also on the model specification and level of aggregation. The aggregation levels are usually defined as area (zonal), household, and person. Estimating the models (see, e.g.,
Ortuzar and Willumsen 1994) at more disaggregate levels improves the transferability of TGM.
Atherton and Ben-Akiva (1976) emphasized that disaggregated models tend to maintain the variance and behavioral context of the response variable and, therefore, are expected to give better estimates
when transferred. Downes and Gyenes (1976) pointed out that when the explanatory power of the model is of interest rather than the aggregate forecasts, the disaggregate level should be selected.
Wilmot (1995) indicated that disaggregate models are preferred because of their independence from zonal definitions. In Supernak et al. (1983) and Supernak (1987), the person level was preferred for
TGM because of the identity of the response factor (trip) and the generative (the person). One advantage of disaggregate person-level models is the reduced amount of data required for model
estimation. (For more details, see Fleet and Robertson 1968; and Ortuzar and Willumsen 1994.) Other types of model specification techniques include cross-classification, regression, logit-based
models, artificial neural networks, fuzzy logic, and simulations.
A number of studies found spatial transferability of models satisfactory (Wilmot 1995; Atherton and Ben-Akiva 1976; Supernak 1982, 1984; Duah and Hall 1997; Walker and Olanipekun 1989; Rose and
Koppelman 1984; Caldwell and Demetsky 1980; and Kannel and Heathington 1973). On the other hand, Smith and Cleveland (1976) and Daor (1981) found spatial transferability unsatisfactory. We should
emphasize that Smith and Cleveland pointed out that although the explanatory variables are distinctive, their effects vary in space. A number of researchers found the transferability of models in
time (i.e., their temporal stability) satisfactory (Downes and Gyenes 1976; Yunker 1976; Walker and Peng 1991; Kannel and Heathington 1973; and Karasmaa and Pursula 1997). Unsatisfactory results,
however, were obtained in other studies (Doubleday 1977; Smith and Cleveland 1976; and Copley and Lowe 1981).
While several international studies explored model transferability in time and space, in Israel the transferability of discrete mode choice models has been the main focus (Prashker 1982; Silman
1981). This study deals with the investigation of trip generation characteristics but also provides local estimates of TGMs and their validation for transferability in time and space. The study also
explores the implementation of Tobit models in TGM.
We often approach trip generation from an economic viewpoint, where trips are defined as the product and the person/household as the customer. The strongest argument to model trips on a disaggregate
level is that any zonal outcome is based on the aggregation of several customers, ignoring the heterogeneity among them. The explanatory variables for the power of consumption of each person/
household can be found in several categories including demographic, geographic, and economic.
As discussed above, several approaches exist to model trip generation, including regression-based models such as multiple linear regression (Wilmot 1995) and cross-classification (Walker and
Olanipekun 1989); discrete choice models such as probit, logit, and ordered probit (Zhao 2000); simulations such as Smash, Amos, and the Starchild System; fuzzy logic models; and artificial neural
networks (Huisken 2000). Clearly, the issue of trip generation can be approached from several directions and tested for transferability in time and space. Therefore, researchers will choose the
modeling approach based on the size of the database at hand, the nature and structure of the variables, the aggregation level desired, as well as other considerations. The main problem with using a
regression model is the treatment of trip rates as continuous rather than discrete variables. Discrete choice models and spatially ordered response models may better account for the behavioral
process of trip generation. However, for practical reasons, in most models to date the dependent variable is treated as a continuous variable. For this reason, we perform our analysis on such models.
In this research, we first estimated regression models for each metropolitan area for each year, taking into account the inconsistency of the household surveys (table 2). We then estimated Tobit TGMs
based on the same variables and tested whether these models are suitable for trip generation estimation and for transferability. The regression model form is presented in equation 1.
y_i = α + β_1 x_{1,i} + β_2 x_{2,i} + … + β_k x_{k,i} + ξ_i
∀ i = 1, …, n    (1)
y_i = trip rate generated by individual i,
x_{k,i} = explanatory variable k for individual i,
n = the number of observations,
k = number of explanatory variables, and
ξ_i = error term of the ith observation.
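As an illustration only (not the authors' estimation code), equation (1) can be fit by ordinary least squares. The variable names and synthetic data below are hypothetical stand-ins for the survey fields:

```python
import numpy as np

# Illustrative only: fitting a linear trip generation model like
# equation (1) by ordinary least squares on synthetic person-level
# data. Variable names are hypothetical, not actual survey fields.
rng = np.random.default_rng(0)
n = 500
has_license = rng.integers(0, 2, size=n)   # dummy: holds a driver's license
employed = rng.integers(0, 2, size=n)      # dummy: employed
X = np.column_stack([np.ones(n), has_license, employed])

true_beta = np.array([0.5, 0.9, 0.6])      # alpha, beta_1, beta_2
y = X @ true_beta + rng.normal(0.0, 0.3, size=n)  # daily trip rate + noise

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimate of (1)
```

With enough observations the least-squares coefficients recover the generating parameters, which is what the person-level estimation in the paper relies on.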
Hald (1949) first presented the model that, in its final form, became known as the Tobit model (Tobin 1958). Tobit models differ from regression models by incorporating truncated or censored dependent variables. Tobit analysis assumes that the dependent variable has a number of its values clustered at a limiting value, usually zero. The Tobit model can be presented as a discrete/
continuous model that first makes a discrete choice of passing the threshold and second, if passed, a continuous choice regarding the value above the threshold. This approach is appropriate for trip
generation, as an individual must decide whether to make any trips and, if so, how many trips to make.
Tobit analysis uses all observations when estimating the regression line, including those at the limit (no trips) and those above the limit (those who chose to travel). As shown by McDonald and
Moffitt (1980), Tobit analysis can be used to determine the changes in the value of the dependent variable if it is above the limit, as well as changes in the probability of being above the limit.
Since the surveys include observations at the limit (i.e., persons that are not traveling), it was also interesting to find out how well the Tobit model can predict persons doing no travel at all.
The Tobit model form is presented in equation 2:
y_i = X_i β + ξ_i   if X_i β + ξ_i > 0
y_i = 0             if X_i β + ξ_i ≤ 0
∀ i = 1, 2, 3, …, (N − 1), N    (2)
N = number of observations,
y_i = trip rate generated by observation i,
X_i = vector of independent variables,
β = vector of coefficients, and
ξ_i = independently distributed error term ~ N(0, σ²).
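A minimal sketch of the censored log-likelihood implied by equation (2). This is an illustration under the stated normality assumption, not the estimation code used in the study; the function and parameter layout are ours:

```python
import numpy as np
from scipy.stats import norm

def tobit_loglik(params, X, y):
    """Log-likelihood of a Tobit model censored at zero, as in eq. (2).

    `params` stacks the coefficient vector beta with log(sigma) as the
    last element (the log keeps sigma positive during optimization).
    Censored observations (y == 0) contribute Phi(-X_i beta / sigma);
    uncensored ones contribute the scaled normal density.
    """
    beta, sigma = params[:-1], np.exp(params[-1])
    xb = X @ beta
    censored = y <= 0
    ll = np.where(
        censored,
        norm.logcdf(-xb / sigma),
        norm.logpdf((y - xb) / sigma) - np.log(sigma),
    )
    return float(ll.sum())
```

The coefficients would then be estimated by maximizing this function, e.g. by passing its negative to a numerical optimizer such as `scipy.optimize.minimize`.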
Because Tobit models have not been used previously in the context of trip generation, this research investigates their suitability for that purpose. We also compared the predictions obtained using
regression models with those produced using the analogous Tobit model. The best specification of the regression model is not necessarily the best specification of the Tobit model. However, in order
to allow for basic comparisons of the model parameters, we estimated the regression models first; then, after the determination of the final variables in the model, we estimated Tobit models with the
same variables.
All models were estimated at the disaggregate person level. At the person level of modeling, we maintained the heterogeneity among observations and kept a good identity between the consumer of the
product (the person) and the outcome (number of daily trips taken by the person). As discussed above, disaggregate models tend to show better transfer results than aggregate models and also
incorporate the power to understand and control the production of trips. The models were estimated for a 24-hour period^2 and tested for transferability in space and in time. Figure 1 presents the
sequence of the analysis.
Statistical tests were conducted to determine the spatial and temporal stability of the estimated models by assessing the transferability of the coefficients from one area to another, and for each
metropolitan area between the two survey years. Transferability was also tested by comparing the overall aggregate prediction obtained by the transferred model with the local model. Furthermore, we
analyzed the ability of Tobit models to represent and evaluate nontravelers, that is, people who do not generate trips based on the given survey data.
Data Sources and Descriptions
The Israeli Central Bureau of Statistics (CBS) conducted some limited scope^3 Traveling Habits Surveys in the 1960s. Comprehensive National Traveling Habits Surveys have been conducted by CBS every
12 years since 1972. Because the 1972 survey is not available on magnetic media, it was not possible to do a computer-based statistical analysis. Therefore, we based this research on the 1984 and
1996/97 household surveys.
The main problems we encountered in doing this research were related to the inconsistency in the investigated variables, the structure of the surveys, the definition of variables, the period of
investigation, the geographic deployment, and the database structure (see table 2 again). The 1984 and 1996/97 household surveys differ in several ways: the geographic deployment (number and size of
jurisdictions in the survey), the size of the survey (number of households), the definitions of the investigation period, and the variables that were excluded from the surveys. For example, income is
included in the 1984 survey but is omitted from the 1996/97 survey.
Despite definition and database differences in the two surveys (1984 was an activity survey and 1996/97 was a trip survey), we were able to bring the variables in the models to a common basis. In
particular, the 1984 survey included bicycle and walking trips among the means to accomplish the activities, while the later survey excluded them. To resolve this difference, we excluded walking and
biking trips from the 1984 database; only motorized trips were considered for each person.
The 1984 survey files included data for 5,420 persons in the Tel Aviv metropolitan area and 4,056 persons in the Haifa metropolitan area. The final files used for model calibration after sieving
incomplete and anomalous observations totaled 4,385 and 3,258 persons, respectively. The 1996/97 files included data for 20,436 persons in the Tel Aviv metropolitan area and 6,417 persons in the
Haifa metropolitan area. The final files used for model calibration totaled 15,729 and 5,041 persons, respectively.
The selected trip generation models included six categorical variables: age, car availability, possession of a driver's license, employment, education, and status in the household. Data fell into
five age categories: 8-13, 14-17, 18-29, 30-64, and 65 and over. Ortuzar and Willumsen (1994) found that life cycle variables were an important factor for explaining trip generation. Different trip
rates can be expected for households and people at various stages of life. Furthermore, age should correlate with employment, having a driver's license, and marital status. Car availability included
three categories: 0, 1, and ≥ 2 cars in the household. Clearly, households with more cars available will generate more trips. The driver's license category has only one variable: whether the person
has a license (including motorcycle) or not.
The employment variable indicates whether the person was employed or not. Employed persons were expected to generate more trips, because they usually make at least two trips: to and from work.
Household status refers to whether the person defines himself or herself as the head of the household. This variable indicates the responsibility and availability of household resources as an
incentive for consumption of trips.
Finally, four education categories were defined based on the number of years of study (0, 1-8, 9-12, and 13 or more). The literature shows good correlation between education and income. In the
absence of a pure economic indicator, education is used also as a proxy for income. Respondents with higher education (hence higher income) were expected to generate more trips. All variables were
found significant and the coefficients corresponded with our expectations.
Table 3 shows the estimation results for the regression models for 1984 for both metropolitan areas. As can be seen from the table, all coefficients were found to be significant at the 95% level. The
number of observations remaining in the estimation process resulted from the limited scope of this survey and the elimination of incomplete observations in the original database.
Estimation results for these models show that all variables affected trip generation as expected. The education coefficients show that people with higher education generated more trips. This can be
explained not only by the assumption of the relationship between education and income but also by assuming that a person with higher education is more likely to pursue culture and perhaps leisure
activities. Also, as expected, persons with driver's licenses and employed persons tended to generate more trips than the equivalent nonworking and/or nondriving persons. Heads of household tended to
generate more trips, as assumed, because of the responsibility and availability of resources. The coefficients of the age categories indicate that persons aged 14 to 17 travel more than people with
similar characteristics of other age groups, probably because they are young and active and have less household or work responsibilities.
The overall R^2 of the 1984 models was 0.33 for the Tel Aviv model and 0.34 for the equivalent Haifa model. These R^2 values are modest but not anomalous for trip generation modeling. They indicate
that a substantial portion of trip generation can be explained by nonhousehold factors, such as relative location of residence, employment, and other parameters. Statistical Z tests (assuming known
variances, normal distributions, and independence of populations) for the transferability of the coefficients (without updating) show that, at a 95% level of confidence, the coefficients differ,
except for the coefficient defining the head of household. To verify the results we also conducted Chow tests for the transferability of the models. The calculated statistic was 7.86 in comparison
with the critical 1.72 F-value (at a 95% level of confidence), yielding the same conclusion: the 1984 models are not transferable in space.
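The Chow test of model transferability can be sketched as follows; the function is a generic illustration of the statistic, and the inputs in the example are made up rather than the study's actual sums of squares:

```python
def chow_statistic(ssr_pooled: float, ssr_a: float, ssr_b: float,
                   k: int, n_a: int, n_b: int) -> float:
    """Chow F-statistic for testing whether one coefficient vector fits
    two samples (here, two metropolitan areas) as well as two separately
    estimated vectors.

    ssr_pooled: residual sum of squares of the pooled model;
    ssr_a, ssr_b: residual sums of squares of the separate models;
    k: number of estimated coefficients (including the intercept);
    n_a, n_b: sample sizes. The statistic is F(k, n_a + n_b - 2k).
    """
    numerator = (ssr_pooled - (ssr_a + ssr_b)) / k
    denominator = (ssr_a + ssr_b) / (n_a + n_b - 2 * k)
    return numerator / denominator
```

A computed value above the critical F-value (here, 7.86 versus 1.72) rejects the hypothesis that a single coefficient vector serves both areas.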
Table 4 shows the results of transferring the models in space by showing the predictions from the estimated models and the 1984 database, each applied for both metropolitan areas. The table shows
that the Haifa model overpredicts the actual trip rate when applied to the Tel Aviv database, in comparison with the Tel Aviv model applied to the Haifa database. This was expected, as the Haifa trip
rate is higher than that for Tel Aviv.
Table 5 presents the estimation results for the 1984 Tobit models using the 1984 household survey data. When we transferred the estimated Tobit models in space and used them to predict the average
trip rate in the other city, we found that the Haifa Tobit model overestimated the trips in Tel Aviv by 21.9% (table 6). When we used the estimated Tel Aviv Tobit model to predict the average trip
rate for Haifa, we found that it underestimated the total trips by 27.5%. However, transferability t-tests at a 95% level of confidence showed that most of the coefficients are not significantly
different in space, except for the license variable and the 8-13 and 30-64 age category variables. On the other hand, χ² tests at the 95% level of confidence support the alternative hypothesis
that the models vary in space (χ²(13, 0.95) = 22.36 < 91.198).
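Both kinds of transferability tests used here are straightforward to compute. A sketch with illustrative placeholder values (not the estimates from the tables): a per-coefficient statistic compares the same coefficient across the two independently estimated models, and the χ² (likelihood-ratio) statistic compares the separate fits against a pooled fit, using the quoted critical value χ²(13, 0.95) = 22.36.

```python
import math

def coefficient_z(b1, se1, b2, se2):
    """Z (large-sample t) statistic for the hypothesis that a coefficient
    is equal in two models estimated on independent samples."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

def lr_statistic(ll_a, ll_b, ll_pooled):
    """Likelihood-ratio transferability statistic: twice the log-likelihood
    gain of the two separate models over the pooled model."""
    return 2.0 * ((ll_a + ll_b) - ll_pooled)

# Illustrative placeholder values:
z = coefficient_z(b1=0.42, se1=0.05, b2=0.20, se2=0.06)
print(abs(z) > 1.96)                                    # True: coefficients differ at 95%
print(lr_statistic(-1000.0, -1100.0, -2145.6) > 22.36)  # True: models differ jointly
```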
An important issue was to find out whether the Tobit model could explain and capture the nontravelers in the population. As can be seen in table 7, the results are not consistent for the two models.
The Haifa 1984 model correctly estimated only 13.5% of the observed nontravelers in the Haifa data and 18.5% in the Tel Aviv data. The Tel Aviv model obtained better results, correctly estimating
41.9% of the observed nontravelers in the Tel Aviv data and 34.7% in the Haifa data. These results encourage further research.
Table 8 presents the estimation results of the 1996/97 regression models. As can be seen, the 1996/97 coefficients differ substantially from those of 1984, and most of the coefficients are
significant at the 95% confidence level. The best model specification for 1984 was found to be also the best specification for the 1996/97 model, indicating that the most important variables
affecting trip generation are similar in both models. The main problem raised during the basic comparison was the difference in the geographic scope (definition of the metropolitan survey area) for
the two.
The overall R^2 of the 1996/97 models, 0.21 for the Tel Aviv model and 0.23 for the equivalent Haifa model, are even smaller than the values achieved for the 1984 models. But they are still not
anomalous in the field of trip generation modeling. Statistical Z-tests conducted at the 95% level of confidence show that none of the coefficients are the same for the two metropolitan areas; that
is, the coefficients differ in space. To verify the results, we conducted Chow tests for the transferability of the models. The calculated statistic was 6.91 in comparison with the 1.72 tabular F-value (at a 95% level of confidence), thus yielding the same conclusion: the 1996/97 models are not transferable in space.
Table 9 presents the predicted average daily trips using the 1996/97 data for each model and each metropolitan area. As in 1984, the 1996/97 Haifa model overpredicts trips compared with the Tel Aviv
model; however, the differences are smaller. One should remember that in regression models, the regression line always passes through the average (center of gravity of the observations). Since the
observed average number of trips (in Tel Aviv and Haifa) was equal in the 1996/97 metropolitan files, the estimation difference was expected to be smaller.
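The through-the-mean property invoked here (with an intercept in the model, the fitted line passes through the point of means, so the average fitted value equals the average observation) is easy to verify on any toy data set; the numbers below are a generic illustration, not the survey data.

```python
# With an intercept in the model, ordinary least squares forces the fitted
# line through the point of means, so mean(fitted) == mean(observed).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 2.9, 3.2, 4.8, 5.0]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar                        # intercept from the normal equations

fitted = [a + b * x for x in xs]
print(abs(sum(fitted) / n - ybar) < 1e-9)  # True
```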
However, the similar predicted aggregate trip rates indicate overrepresentation of particular sections of the population. For example, the calculated average car availability per household in
metropolitan Tel Aviv was higher in the 1996/97 surveys than in metropolitan Haifa (0.60 > 0.55), but in 1984 the average car availability per household was almost the same (0.489 ≈ 0.484).
Statistically, different definitions of the sampling areas could affect the transferability of the estimated models.
Table 10 shows the Tobit model results for 1996/97. A comparison with table 8 shows the resemblance in the effect of the explanatory variables and the difference in the magnitude of the coefficients
between the Tobit and the regression models. Transferring the models in space and evaluating the estimated average trip rate from the models, for 1996/97, we found that the Haifa Tobit model
overestimated the trips in the Tel Aviv file by 7.2% and the Tel Aviv Tobit model underestimated the trips in the Haifa file by 7.1% (table 6). These values are quite similar to the over- and
underprediction of the equivalent regression models shown in table 9. Spatial transferability t-tests held at the 95% level of confidence show that most of the coefficients are not significantly
different between the metropolitan areas, except for the age categories and the "non-educated" education category. χ² tests for the spatial transferability of the models at the same level of confidence reach the same conclusion (χ²(13, 0.95) = 22.36 << 82.01).
In table 11, the 1996/97 Tobit model prediction of nontravelers is even worse than in the analogous 1984 models. When trying to represent nontravelers, the Haifa 1996/97 Tobit model captured only
6.8% of the observed nontravelers in the Tel Aviv file and 11.8% in the Haifa file. The Tel Aviv 1996/97 Tobit model captured only 3.8% of the nontravelers in the Haifa file and 9.8% in the Tel Aviv
file. A point worth mentioning is the resemblance in the proportion of nontravelers in the two surveys (about 35% of the persons represented in the sample files did not generate trips). Finally,
about 70% to 75% of the estimated nontravelers are observed nontravelers.
Table 12 shows the estimation results for temporal transferability of the regression and Tobit models. When we tested for temporal transferability using the 1984 models to predict 1996/97 trip rates,
we observed that the 1984 Tel Aviv regression model underestimated the observed total number of trips in Tel Aviv in 1996/97 by 7%. The Haifa 1984 regression model overestimated the observed total
number of trips in 1996/97 Haifa data by only 2.8%. Taking into account that the average number of daily trips in the Haifa 1984 survey was 2.17 and in the 1996/97 survey it was 2.07, the difference
is not surprising. However, it may also be affected by the different definition of the geographic scope of the two household surveys.
Chow tests of the temporal stability of the 1984 models compared with the 1996/97 show that the statistic for the 1984 Tel Aviv model was 4.53, bigger than the 1.72 tabular F-statistic at the 95%
level of confidence, meaning that the 1984 coefficients differ from the 1996/97 coefficients. The statistic for the temporal stability of the 1984 Haifa model was 8.39 compared with the 1.72 tabular
F, reaching the same conclusion. Transferability χ² tests of the Tobit models in time show that, at the 95% level of confidence, we can reject the null hypothesis; that is, the models for the two
time points are different (χ²(13, 0.95) = 22.36 < 128.72).
In our research, statistical tests indicated that the regression and Tobit models estimated for two metropolitan areas and two time periods differ statistically in time and in space. One exception
was the Tobit transferability in space, where the coefficients from the two models for the same year were not significantly different. The distinction cannot be well explained, but it might be due
partially to geographic, demographic, socioeconomic, and spatial structure differences between the two metropolitan areas. The smaller sample size and scope of the 1984 household survey compared with
the 1996/97 household survey (as shown in table 2) did not allow us to represent the ethnicity of the survey participants, a variable believed to be related to trip generation. Also, the
incorporation in the models of a pure economic variable such as income was not possible, because it was not included in the 1996/97 survey.
We ascribed the temporal instability of the estimated models to changes in the structure and development of the metropolitan areas of Tel Aviv and Haifa, changes in lifestyle and socioeconomic
variables that are not all accounted for in the model, as well as the inconsistency of the two surveys. A partial explanation may be that 1984 was an economically unstable year, featuring high
inflation rates and uncertainty, while 1996/97 was considered to be economically stable.
An important conclusion based on our results is that in order for trip generation models to be transferable they need to account for variables not included in the current models: income, land use and
spatial structure, the economy, the transportation system and accessibility, and more detailed socioeconomic and life style variables. If we could estimate a perfect disaggregate model accounting for
all factors that affect trip generation and with appropriate segmentation, it would likely be transferable. With this data lacking, models are not transferable, because unobserved variables affect
coefficients of observed variables with which they are correlated.
Another conclusion is that household surveys conducted on a regular basis will be more useful if the design stays constant. Differences in the structure, variables, range, investigation period,
definition of the variables, and database structure affect the transferability of the estimated models.
We also would emphasize the need for further research on the implementation of Tobit models in the context of trip generation. Tobit models tend to represent the mechanism of trip generation more
realistically, capturing and estimating (partially) nontravelers. As a combination of regression and discrete choice models, the Tobit model may be more suitable for implementation in TGM than
discrete choice or regression models, particularly because Tobit is better formulated to differentiate nontravelers from travelers. The underestimation of nontravelers may be partly due to the fact
that we did not necessarily estimate the best Tobit model.
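To make the Tobit mechanism concrete: trip making is modeled as a latent normal variable censored at zero, so positive trip counts contribute a density term and nontravelers a probability-mass term. Below is a minimal sketch of the log-likelihood, with made-up data; it is our own illustration, not the specification estimated in the paper.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tobit_loglik(b0, b1, sigma, xs, ys):
    """Log-likelihood of a Tobit model y = max(0, b0 + b1*x + eps),
    eps ~ N(0, sigma^2). Zeros (nontravelers) enter through the cdf term."""
    ll = 0.0
    for x, y in zip(xs, ys):
        mu = b0 + b1 * x
        if y > 0:     # observed trips: density of the latent normal
            ll += math.log(norm_pdf((y - mu) / sigma) / sigma)
        else:         # censored at zero: P(latent <= 0)
            ll += math.log(norm_cdf(-mu / sigma))
    return ll

# Tiny synthetic example: latent y* = -1 + 0.8*x with small disturbances;
# the first two observations are censored nontravelers.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
eps = [0.3, -0.3, 0.3, -0.3, 0.3, -0.3, 0.3, -0.3]
ys = [max(0.0, -1 + 0.8 * x + e) for x, e in zip(xs, eps)]
print(tobit_loglik(-1.0, 0.8, 0.4, xs, ys) > tobit_loglik(0.0, 0.0, 1.0, xs, ys))  # True
```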
For the linear regression models, almost all variables were significant at the 95% confidence level, but the coefficients were shown to vary in time and space. For the Tobit model, while almost all
variables were significant at the 95% confidence level, the coefficients of the models of the two metropolitan areas were statistically similar but they differed in time for each city.
The nature of the local household surveys raises a need to validate the results of this study in future research. In particular, further research can identify what makes two study areas "similar
enough" to justify transferring a model from one to the other. We also suggest further research incorporating Tobit models in TGM and for investigating the characteristics of nontravelers.
1. Tours refer to a sequence of trips usually starting and ending at home. Trips refer to just the movement between an origin and destination.
2. The 1984 household survey contained 1.5 days of data for each person; the 1996/97 survey contained 3 to 4 days of data for each person.
3. These surveys were restricted to work-related activities only.
^1 A. Cotrus, Department of Civil Engineering, Technion, Israel Institute of Technology, Haifa 32000, Israel. E-mail: cotrus@walla.co.il
^2 J. Prashker, Transportation Research Institute,Technion, Israel Institute of Technology, Haifa 32000, Israel. E-mail: prashker@netvision.net.il
^3 Corresponding Author: Y. Shiftan, Transportation Research Institute,Technion, Israel Institute of Technology, Haifa 32000, Israel. E-mail: shiftan@tx.technion.ac.il | {"url":"http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/journal_of_transportation_and_statistics/volume_08_number_01/html/paper_04/index.html","timestamp":"2014-04-18T03:32:07Z","content_type":null,"content_length":"82107","record_id":"<urn:uuid:a0f27f11-3ba0-49dc-96b9-e208b5f31932>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimate a 15% tip for a $26.80 meal
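One way to estimate it mentally: 10% of $26.80 is $2.68; half of that is $1.34; their sum, about $4.02, is the 15% tip. As a quick check:

```python
bill = 26.80
tip = 0.15 * bill                   # exact 15%
estimate = 2.68 + 1.34              # 10% of the bill plus half of that
print(round(tip, 2))                # 4.02
print(abs(tip - estimate) < 0.01)   # True: the mental estimate matches
```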
β’ one year ago
β’ one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
β’ Teamwork 19 Teammate
β’ Problem Solving 19 Hero
β’ Engagement 19 Mad Hatter
β’ You have blocked this person.
β’ β You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50eca22fe4b07cd2b649101f","timestamp":"2014-04-16T19:54:14Z","content_type":null,"content_length":"39517","record_id":"<urn:uuid:99f21397-8483-4650-8346-f1d4453f67b2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
Damped Oscillator equation - Energy
You can't just differentiate E the way you did and prove the theorem. You need to incorporate the basic diff eq representing a damped spring-mass system. The expression for E represents ANY
spring-mass system, damped or not, linear or not, etc.
The only thing I can think of is to solve the diff eq (it's a simple 2nd order one with constant coeff). Apply an initial condition to x = x0. Derive the solution x(t) and then substitute in E, take
dE/dt and there you are.
(BTW why is y used in the diff eq and x in E?)
Dick's comment is well taken! Not only his, but I noticed the dimensions don't make sense. dE/dt has dimensions of FLT^-1 whereas -mvx' has dimensions of MF where
M = mass
F = force = MLT^-2
L = length
T = time.
Thus -mvx' has the wrong dimensions to be dE/dt. | {"url":"http://www.physicsforums.com/showthread.php?p=4190717","timestamp":"2014-04-21T04:37:20Z","content_type":null,"content_length":"48965","record_id":"<urn:uuid:d82b7700-75b3-4d46-9427-6e9830d55d13>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
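For reference, the calculation both replies are pointing at goes through once the equation of motion is substituted; a sketch assuming the standard linear damped oscillator $m\ddot{x} + b\dot{x} + kx = 0$ (no need to solve the ODE first):

```latex
E = \tfrac{1}{2} m \dot{x}^2 + \tfrac{1}{2} k x^2
\quad\Longrightarrow\quad
\frac{dE}{dt} = \dot{x}\,(m\ddot{x} + kx)
              = \dot{x}\,(-b\dot{x})
              = -b\,\dot{x}^2 \;\le\; 0 .
```

So for linear damping the energy is nonincreasing, with $dE/dt = -b\dot{x}^2$; this also makes the dimensions come out right, since $b\dot{x}^2$ has dimensions of force times velocity, i.e. power.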
Finite measure on the power set
Let $X$ be an uncountable set, and let $\Omega$ be the power set of $X$, viewed as a $\sigma$-algebra. Does there exist a positive $\sigma$-additive measure of finite total mass on $(X, \Omega)$ such
that each point of $X$ has measure zero?
set-theory measure-theory real-analysis
Maybe I'm missing something, but couldn't you just take a free ultrafilter on $X$ and let the measure of $S\subseteq X$ equal $1$ if $S$ belongs to the ultrafilter, and $0$ otherwise? – Philip Brooker Jul 31 '12 at 5:55
@Philip: a measure on a $\sigma$-algebra is usually required to be countably additive, not just finitely additive. – Trevor Wilson Jul 31 '12 at 6:03
@Trevor Wilson: ah, yes, of course. That makes the problem much more interesting! Thanks. – Philip Brooker Jul 31 '12 at 9:15
2 Answers
I assume you mean a $\sigma$-additive measure. This is Ulam's measure problem. A positive answer is closely tied up to the existence of real-valued measurable cardinals, so it is
equiconsistent with the existence of a measurable cardinal, which is a large cardinal assumption significantly beyond the usual axioms of set theory.
You can see a quick write up of the argument here. A good reference is the beginning of David Fremlin, "Real-valued measurable cardinals", in Set Theory of the reals, Haim Judah, ed.,
Israel Mathematical Conference Proceedings 6, Bar-Ilan University (1993), 151β304, that I also mention in the notes linked to above.
In short (this is expanded in the notes): If $(X,\mathcal P(X),\lambda)$ is such a measure space, we may as well assume (by concentrating on an appropriate subset $Y$, which may be of smaller size than $X$, and renormalizing) that $\lambda$ is a probability measure. Its additivity is the smallest cardinal $\kappa$ such that the measure of the disjoint union of some collection of $\kappa$ many disjoint subsets of $Y$ is not the sum of the measures of the sets in the union. (So the additivity is at least $\aleph_1$, and it is well-defined, since we are assuming that $\lambda(X)>0$.)
Then we can in fact assume $X=\kappa$ (identifying cardinals with sets of ordinals). If $\lambda$ is non-atomic (meaning, for any $E\subseteq\kappa$, if $\lambda(E)>0$ then there is $F\subset E$ with $0<\lambda(F)<\lambda(E)$), then $\lambda$ is (atomlessly) real-valued measurable. On the one hand, these cardinals are not too large: $\kappa\le|\mathbb R|$. On the other, $\kappa$ must be weakly inaccessible, and in fact a limit of weakly inaccessibles that are themselves limits of weakly inaccessibles, etc. This is very, very large.
The other possibility is that $\lambda$ is atomic. Then, after further renormalization, $\lambda$ can be identified with the characteristic function of a non-principal $\kappa$-complete ultrafilter; that is, $\kappa$ is measurable.
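For intuition on why the additivity must already exceed $\aleph_1$, here is a brief sketch of Ulam's classical argument (standard, and only outlined here) that no countably additive probability measure on all subsets of $\omega_1$ can vanish on every singleton:

```latex
% For each \beta < \omega_1 fix an injection f_\beta \colon \beta \to \omega, and set
A_{n,\alpha} \;=\; \{\, \beta < \omega_1 \;:\; \alpha < \beta \ \text{and}\ f_\beta(\alpha) = n \,\}.
% Rows: for fixed \alpha,
\bigcup_{n<\omega} A_{n,\alpha} \;=\; \omega_1 \setminus (\alpha+1),
% which is co-countable, hence of measure 1 (countable additivity plus
% vanishing on singletons kills countable sets); so some A_{n(\alpha),\alpha}
% has positive measure. Columns: for fixed n the sets A_{n,\alpha} are
% pairwise disjoint (injectivity of f_\beta), so at most countably many of
% them can have positive measure. But uncountably many \alpha share a single
% value n(\alpha): contradiction.
```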
Ack! This is the second time in two years that I've accidentally bumped into some serious set theory / foundations in my research. @Andres - Thanks for putting a name to this question and pointing me to some literature. And yes, I did mean $\sigma$-additive. Is $W = E$ in your third paragraph, line 6? – Xander Faber Jul 31 '12 at 7:00
I don't have any sense of whether or not a given set $X$ can be put in bijection with a measurable cardinal (again identifying cardinals with sets of ordinals). So for example, is it known when $X = \mathbb{R}$? I assume this case was the original motivation for the question. – Xander Faber Jul 31 '12 at 7:17
Xander, measurable cardinals are much bigger than $\mathbb R$. It is the so-called real valued measurables that could possibly be $\leq|\mathbb R|$ and that are connected to measures on $\mathcal P(\mathbb R)$. – Stefan Geschke Jul 31 '12 at 8:02
I should add to my comment that the reason Andres mentions measurable cardinals is that their equiconsistency with real valued measurables shows that you cannot construct a $\sigma$-additive measure on $\mathcal P(\mathbb R)$ without the help of some strong additional axioms. – Stefan Geschke Jul 31 '12 at 8:08
Xander: Yes, $W$ was a typo for $E$. Fixed now. Thanks. It may perhaps be worth pointing out two remarks: 1. If there is a real-valued measurable $\kappa$, then any $X$ with $|X|\ge\kappa$ admits such a measure: Simply concentrate it on subsets of $Y$, where $Y$ is a subset of $X$ of size $\kappa$; and on subsets of $Y$ simply assign a measure via a bijection with $\kappa$. 2. In fact, if there is an atomless real-valued measurable, and $X=\mathbb R$, we can find a measure on all subsets of $X$ that extends Lebesgue measure. – Andres Caicedo Jul 31 '12 at 14:50
Just to complement Andres's excellent answer with another reference, you can find a nice summary of the status of this question, as well as further references, in chapter 1.12(x) of
Bogachev's monograph "Measure Theory I".
The (very) short summary is that in all concrete cases the answer is no.
Regarding your summary, I would dispute your statement, since as Andres explains, it is known to be (relatively) consistent with the axioms of set theory that the reals $\mathbb{R}$
support such a measure, and this would seem to count as a concrete case. So we don't really seem to know that the answer is no in all concrete cases. – Joel David Hamkins Nov 2 '13 at
Maximum Likelihood vs. Sequential Normalized Maximum Likelihood in On-line Density Estimation
Wojciech Kotlowski and Peter Grunwald
In: COLT 2011, 9-11 Jul 2011, Budapest, Hungary.
The paper considers sequential prediction of individual sequences with log loss (online density estimation) using an exponential family of distributions. We first analyze the regret of the maximum
likelihood ("follow the leader") strategy. We find that this strategy is (1) suboptimal and (2) requires an additional assumption about boundedness of the data sequence. We then show that both
problems can be addressed by adding the currently predicted outcome to the calculation of the maximum likelihood, followed by normalization of the distribution. The strategy obtained in this way
is known in the literature as the sequential normalized maximum likelihood or last-step minimax strategy. We show for the first time that for general exponential families, the regret is bounded by
the familiar (k/2) log n and thus optimal up to O(1). We also show the relationship to the Bayes strategy with Jeffreys' prior.
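For a concrete feel for the strategy (our own toy illustration, not taken from the paper), here is the SNML predictor for the Bernoulli family: each candidate next outcome is scored by the maximized likelihood of the sequence with that outcome appended, and the two scores are normalized.

```python
def snml_prob_one(k, n):
    """SNML probability of observing 1 next, after k ones in n Bernoulli
    outcomes. Each candidate continuation is scored by the maximum
    likelihood of the extended sequence; the scores are then normalized.
    Convention: 0**0 == 1 (Python's default)."""
    def ml_seq(ones, length):
        p = ones / length
        return p ** ones * (1 - p) ** (length - ones)
    w1 = ml_seq(k + 1, n + 1)   # sequence extended by a 1
    w0 = ml_seq(k, n + 1)       # sequence extended by a 0
    return w1 / (w0 + w1)

print(snml_prob_one(0, 0))   # 0.5 on the empty sequence
print(snml_prob_one(1, 1))   # 0.8 after a single observed 1
```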
Periodic homogenization with an interface : the multi-dimensional case
Hairer, Martin and Manson, Charles. (2011) Periodic homogenization with an interface : the multi-dimensional case. Annals of Probability, Volume 39 (Number 2). pp. 648-682. ISSN 0091-1798
Full text not available from this repository.
We consider a diffusion process with coefficients that are periodic outside of an "interface region" of finite thickness. The question investigated in this article is the limiting long time/large
scale behavior of such a process under diffusive rescaling. It is clear that outside of the interface, the limiting process must behave like Brownian motion, with diffusion matrices given by the
standard theory of homogenization. The interesting behavior therefore occurs on the interface. Our main result is that the limiting process is a semimartingale whose bounded variation part is
proportional to the local time spent on the interface. The proportionality vector can have nonzero components parallel to the interface, so that the limiting diffusion is not necessarily reversible.
We also exhibit an explicit way of identifying its parameters in terms of the coefficients of the original diffusion. Similarly to the one-dimensional case, our method of proof relies on the
framework provided by Freidlin and Wentzell [Ann. Probab. 21 (1993) 2215-2245] for diffusion processes on a graph in order to identify the generator of the limiting process.
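Schematically (our paraphrase of the statement above, not the paper's notation): writing $I$ for the interface, the limiting process takes the form

```latex
X_t \;=\; X_0 \;+\; M_t \;+\; \gamma \, L_t^{I}(X),
```

where $M$ is a martingale whose diffusion matrix is the homogenized one on each side of $I$, $L^{I}$ is the local time of $X$ on the interface, and $\gamma$ is the proportionality vector; since $\gamma$ may have nonzero components parallel to $I$, the limit need not be reversible.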
Item Type: Journal Article
Subjects: Q Science > QA Mathematics
Divisions: Faculty of Science > Mathematics
Library of Congress Subject Headings (LCSH): Homogenization (Differential equations), Diffusion processes
Journal or Publication Title: Annals of Probability
Publisher: Institute of Mathematical Statistics
ISSN: 0091-1798
Date: March 2011
Volume: Volume 39
Number: Number 2
Page Range: pp. 648-682
Identification Number: 10.1214/10-AOP564
Status: Peer Reviewed
Publication Status: Published
Access rights to Published version: Restricted or Subscription Access
Funder: Engineering and Physical Sciences Research Council (EPSRC)
Grant number: EP/D071593/1 (EPSRC)
References:
[1] ALLAIRE, G. and AMAR, M. (1999). Boundary layer tails in periodic homogenization. ESAIM Control Optim. Calc. Var. 4 209-243 (electronic). MR1696289
[2] BEN AROUS, G. and ČERNÝ, J. (2007). Scaling limit for trap models on Z^d. Ann. Probab. 35 2356-2384. MR2353391
[3] BAHLALI, K., ELOUAFLIN, A. and PARDOUX, E. (2009). Homogenization of semilinear PDEs with discontinuous averaged coefficients. Electron. J. Probab. 14 477-499. MR2480550
[4] BENSOUSSAN, A., LIONS, J.-L. and PAPANICOLAOU, G. (1978). Asymptotic Analysis for Periodic Structures. Studies in Mathematics and Its Applications 5. North-Holland, Amsterdam. MR503330
[5] BENCHÉRIF-MADANI, A. and PARDOUX, É. (2005). Homogenization of a diffusion with locally periodic coefficients. In Séminaire de Probabilités XXXVIII. Lecture Notes in Math. 1857 363-392. Springer, Berlin. MR2126985
[6] BOGACHEV, V. I. (2007). Measure Theory, Vol. I, II. Springer, Berlin. MR2267655
[7] BASS, R. F. and PARDOUX, É. (1987). Uniqueness for diffusions with piecewise constant coefficients. Probab. Theory Related Fields 76 557-572. MR917679
[8] BORODIN, A. N. and SALMINEN, P. (1996). Handbook of Brownian Motion - Facts and Formulae. Birkhäuser, Basel. MR1477407
[9] DELLACHERIE, C. and MEYER, P.-A. (1983). Probabilités et Potentiel. Chapitres IX à XI, Revised ed. Hermann, Paris. MR727641
[10] DA PRATO, G. and ZABCZYK, J. (1996). Ergodicity for Infinite-Dimensional Systems. London Mathematical Society Lecture Note Series 229. Cambridge Univ. Press, Cambridge. MR1417491
[11] ETHIER, S. N. and KURTZ, T. G. (1986). Markov Processes: Characterization and Convergence. Wiley, New York. MR838085
[12] FREIDLIN, M. I. and WENTZELL, A. D. (1993). Diffusion processes on graphs and the averaging principle. Ann. Probab. 21 2215-2245. MR1245308
[13] FREIDLIN, M. I. and WENTZELL, A. D. (2006). Long-time behavior of weakly coupled oscillators. J. Stat. Phys. 123 1311-1337. MR2253881
[14] GÉRARD-VARET, D. and MASMOUDI, N. (2008). Homogenization in polygonal domains. Preprint, Paris 7 and NYU.
[15] HAIRER, M. (2009). Ergodic properties for a class of non-Markovian processes. In Trends in Stochastic Analysis. London Math. Soc. Lecture Note Ser. 353 65-98. Cambridge Univ. Press, Cambridge. MR2562151
[16] HAS'MINSKII, R. Z. (1960). Ergodic properties of recurrent diffusion processes and stabilization of the solution of the Cauchy problem for parabolic equations. Teor. Verojatnost. i Primenen. 5 196-214. MR0133871
[17] HAIRER, M. and MANSON, C. (2010). Periodic homogenization with an interface: The one-dimensional case. Stochastic Process. Appl. 120 1589-1605.
[18] KHASMINSKII, R. and KRYLOV, N. (2001). On averaging principle for diffusion processes with null-recurrent fast component. Stochastic Process. Appl. 93 229-240. MR1828773
[19] LEJAY, A. (2006). On the constructions of the skew Brownian motion. Probab. Surv. 3 413-466 (electronic). MR2280299
[20] MEYN, S. P. and TWEEDIE, R. L. (1993). Markov Chains and Stochastic Stability. Springer, London. MR1287609
[21] OLLA, S. (1994). Lectures on Homogenization of Diffusion Processes in Random Fields. Publications de l'École Doctorale, École Polytechnique.
[22] OLLA, S. and SIRI, P. (2004). Homogenization of a bond diffusion in a locally ergodic random environment. Stochastic Process. Appl. 109 317-326. MR2031772
[23] PAVLIOTIS, G. A. and STUART, A. M. (2008). Multiscale Methods: Averaging and Homogenization. Texts in Applied Mathematics 53. Springer, New York. MR2382139
[24] PAPANICOLAOU, G. C. and VARADHAN, S. R. S. (1981). Boundary value problems with rapidly oscillating random coefficients. In Random Fields, Vol. I, II (Esztergom, 1979). Colloquia Mathematica Societatis János Bolyai 27 835-873. North-Holland, Amsterdam. MR712714
[25] RHODES, R. (2009). Diffusion in a locally stationary random environment. Probab. Theory Related Fields 143 545-568. MR2475672
[26] REVUZ, D. and YOR, M. (1991). Continuous Martingales and Brownian Motion. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 293. Springer, Berlin. MR1083357
[27] SEIDLER, J. (2001). A note on the strong Feller property. Unpublished lecture notes.
[28] STROOCK, D. W. and VARADHAN, S. R. S. (1979). Multidimensional Diffusion Processes. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 233. Springer, Berlin. MR532498
URI: http://wrap.warwick.ac.uk/id/eprint/41501
Data sourced from Thomson Reuters' Web of Knowledge
Actions (login required) | {"url":"http://wrap.warwick.ac.uk/41501/","timestamp":"2014-04-25T09:33:22Z","content_type":null,"content_length":"45696","record_id":"<urn:uuid:c8405e19-3556-43d5-8adc-a10d6cc1cf5e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
5dy/dx + 2x = 3.
April 6th 2011, 08:16 AM
5dy/dx + 2x = 3.
The problem is $5\frac{dy}{dx} + 2x = 3$
by re-arranging I get the following
y= $\frac{3}{5} -\frac{2x}{5}$ dx
but need help with finding the general solution
i integrated and came up with.
y = $\frac{3^2}{6} - \frac{2x^2}{6}$ +C
p.s. what is the source code for integral sign?
April 6th 2011, 08:19 AM
\int is $\int.$
You should write your rearrangement as
$dy=\left(\dfrac{3}{5}-\dfrac{2x}{5}\right)dx,$ and then integrate. I'm not sure I buy your integration. Try again.
April 6th 2011, 08:46 AM
okey i have tried again...
but cannot see where this 2nd part comes from.
My answer
is different from the solution in the book. The solution in the book is
can you show me where im goin wrong please. many thanks :)
April 6th 2011, 08:50 AM
Where did the 7 come from?
April 6th 2011, 09:14 AM
Why the question marks? You had before
$dy= \left(\dfrac{3}{5}- \dfrac{2x}{5}\right)dx$
(except that you had "y" instead of "dy")
Now integrate both sides: the integral of a constant is that constant times x and the integral of x is $(1/2)x^2$
but cannot see where this 2nd part comes from.
My answer
$\dfrac{3x}{5}$ is correct. The integral of $\frac{2}{5}x$ is $\left(\frac{2}{5}\right)\left(\frac{1}{2}x^2\right )= \frac{1}{5}x^2$
Perhaps you misread the $\frac{2}{5}\cdot\frac{1}{2}$ as $\frac{2}{5+ 2}$.
is different from the solution in the book. The solution in the book is
can you show me where im goin wrong please. many thanks :)
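For completeness (an addition of mine, not part of the original thread), the finished integration reads:

$$5\frac{dy}{dx} + 2x = 3 \;\Longrightarrow\; dy = \left(\frac{3}{5}-\frac{2x}{5}\right)dx \;\Longrightarrow\; y = \frac{3x}{5} - \frac{x^2}{5} + C.$$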
April 6th 2011, 09:20 AM
need to go back over integration. thanks guys
April 6th 2011, 11:15 AM
You're welcome for my contribution.
Paul S.
All of Paul's current tutoring subjects are listed at the left. You can read more about Paul's qualifications in specific subjects below.
Discrete Math
Discrete mathematics is an interesting, useful, and unique area of mathematics. Topics discussed in discrete math cover probability, set theory, logic, graph theory, combinatorics, and more.
Applications of discrete math are numerous: for example, graph theory can be used to arrive at possible DNA sequences based on the fragments obtained; combinatorics can be used to find the number of
isomers of organic compounds. Further examples undoubtedly exist--those that I have mentioned are from my own experience.
Equally important, however, is the analytical thinking that this area offers. While most may be quick to say that discrete math may appear simple, with a little practice and an open mind, it can
introduce a different way of looking at problems. In the grand scheme of analytical thinking, this fact alone makes it an important asset for any analytical discipline.
Linear Algebra
Linear algebra is a very interesting discipline in mathematics. It's not uncommon to use matrices to find quick solutions to large systems of equations. In fact, matrices can be very helpful in
various modeling problems, such as the amount of deflection experienced by a board of wood. Linear algebra is essentially the algebra of matrices and systems of equations, and with it comes a unique
set of mathematics. For large systems of equations, or systems with quite a few variables, linear algebra is a very powerful tool for finding the needed solutions quickly, without having to
necessarily isolate each variable individually, and resubstituting into other equations until each variable is solved for.
I have been using Macintosh computers for about 4 years now. While my knowledge of them is not the "be-all, end-all" of Macintosh computing, they are fairly easy to become acclimated to. In fact,
many operations that people are used to using in Windows can be done in Macintosh as well--the only difference is that some keys are named differently. Macintosh also does support Windows in various
ways: Microsoft Office programs are commonly tweaked to run nicely on a Macintosh, and for Windows-only programs that have no Macintosh equivalent, there's Boot Camp for running Windows. In this day
and age, Macintosh computers are becoming so flexible and versatile that in terms of function, the gap between Windows and Macintosh is narrowing quickly. If anything, the only gap that might remain
is the fact that Windows has a program for almost every idea under the sun, while Macintosh is still walking the path to get there.
Organic Chemistry
Organic chemistry is commonly summed up as the chemistry of compounds containing carbon and hydrogen. While it can be true, it's quite an oversimplification. Organic chemistry, in a roundabout way,
is what keeps us alive as living beings, gives us some of the most commonly-used materials, and increasingly important in this age, provides new ideas for technology and energy. From pharmaceuticals
to materials, organic chemistry is one of the central disciplines in the general field of chemistry; this much I know from my own experiences as a synthetic chemist. From carbon and hydrogen to
nitrogen and oxygen; amines and alcohols to ketones and aldehydes, organic chemistry is a detailed discipline in three dimensions. For most, practice makes perfect for this type of class, whether
it's about learning concepts or memorizing reactions.
Public Speaking
As an apprentice educator and scientist, public speaking is one thing I'm accustomed to from years of weekly presentations and seminars (and I'm an otherwise fairly quiet person myself). Whether shy,
nervous, or lacking confidence, public speaking is an important aspect to master in today's world, and should not be left unattended. If I can get used to it, so can you; the key is mostly practice,
with a watchful eye or two on the lookout for your best interest.
As a New York State high school student, I have taken and excelled in various Regents exams while simultaneously involved in an Advanced Placement program. While the structure of courses and exams
have changed over the years, the material covered remains the same. Exams that I have taken within my listed specialties include: Math I (now Integrated Algebra I), Math II (now Geometry), Math III
(now Algebra II and Trigonometry), Chemistry, Physics (2002, the year I took it, was the year of the Physics Exam Controversy, in which grades were curved because of suspected faulty question writing. I received a 91 before the curve.), French and Comprehensive English.
Study Skills
In the realm of teaching and tutoring, assisting students with study skills is more of a diagnostic process. In order to make improvements, students have to describe how they prepare for exams, take
notes, attempt solving problems, et cetera. The teacher or tutor, however, has to be able to figure out changes based on the student's approach to academics--there's usually more than one right way
to study any given subject. Although it may be more trial-and-error than a straightforward "here's what to do for this problem", this approach can expose students to varying views on studying. These
varying perspectives can provide a main plan and even a backup plan for students to follow, which can be very helpful in the long run. In that respect, utile study skills are more like a railway
system--a student may have one train of thought, but at a junction, other routes may be equally helpful; it's always good to have options.
finitist prejudices
Stephen G Simpson simpson at math.psu.edu
Mon Mar 8 14:35:39 EST 1999
Joe Shipman 05 Mar 1999 18:27:46
> The standard fundamental theories of physics deal freely with
> classes of operators on function spaces
What if we could develop the requisite functional analysis in a
subsystem of second order arithmetic that is conservative over PRA?
PRA seems crucial here, because PRA is finitistic: it may commit us to
potential infinity, but it does not commit us to actual infinity. See
also my paper on Hilbert's program
<http://www.math.psu.edu/simpson/papers/hilbert/> and my book on
subsystems of second order arithmetic
[ By the way, these web addresses as well as the FOM web address may
be out of order for the next few days, because of a computer system
upgrade. ]
> the case (argued by Quine, Putnam, and Maddy) for an ontological
> commitment to infinite (even uncountably infinite) sets.
Could you please state Quine's case briefly? A summary of Putnam's
and Maddy's case would also be welcome, though I think they have
changed their minds on this issue. (Maddy's first book was `Realism
in Mathematics' and her second was `Naturalism in Mathematics'.)
> Steve and Martin, are the grounds on which you think the universe is
> finite empirical or a priori?
I don't have a final, well-thought-out answer to this, and even my
tentative answer may appear somewhat strange to you. First, if I say
the universe is finite, that doesn't imply that the universe is
mathematically describable by a finite formula -- I have no evidence
for such a statement. What I mean by `finite' in this context is
something like `definite' or `limited' or `following definite laws'.
Now, with that proviso, my grounds for believing that the universe is
finite is, loosely speaking, `empirical' because the universe appears
to be orderly, to follow definite laws. My grounds are also, loosely
speaking, a priori, because the idea of an orderly universe is a
prerequisite for all thought about anything.
> A seriously finitist ontological position is in fact an atheistic
> position;
I'm an atheist, so that's not a problem for me. By the way, when
theists say that God is infinite, I think the main implication of that
is that, according to the theists, God is not limited by the laws of
> unity of knowledge and general intellectual integrity demand that
> we at least attempt to reconcile our schizophrenic attitudes.
> It matters professionally whether you "really believe" in finitism.
I thoroughly agree.
-- Steve
More information about the FOM mailing list
Java vs Javascript: Speed of Math
Last week I've created a ray marcher 3d engine which renders the Mandelbulb. And I've translated it into pure Javascript a couple of days later. After the translation I decided I should optimize the code a little for speed, so I made some speed improvements in the Javascript code. The main optimization was using an array for the vector3d instead of a class/function.
Rendering the Mandelbulb on a 400×400 canvas now took just 1850ms in Javascript (Chrome, V8). Which is very fast! Even faster than my Java implementation (running on Java 1.6.0.33 -server, which was faster than Java 7). But the Java code didn't have some of the speed optimizations. So I re-translated the Javascript code back to Java. It produced the following numbers (lower is better):
What has happened here? The output is the same, why is Java so much slower than Javascript? I would have suspected the opposite…
I fired up the profiler to see what was causing the Java code to be so slow, and it turned out the method it spent most time in was Math.pow(). Other slow methods were Math.acos(), cos(), sin() etc.
It turns out that the Math library isn't very fast, but there is an alternative, FastMath. Apache Commons has implemented a faster Math library for commons-math. Let's see what changing Math.* to FastMath.* does to the performance:
This is already much better. But still the method causing most delay is FastMath.pow(). Why is Javascript so much faster? The method is made so you can calculate the power of two doubles, not only integer values. But I'm only doing Integer powers (7 and 8 to be precise). So I decided to implement my own method:
private double fasterPow(double d, int exp) {
    double r = d;
    for(int i = 1; i < exp; i++) {
        r *= d;
    }
    return r;
}
Warning: This isnβt the same as Math.pow/FastMath.pow!
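To make that warning concrete (a demo of mine, not from the post): the shortcut only agrees with Math.pow for exponents of 1 or more, and silently returns the base itself for an exponent of 0.

```java
public class FasterPowDemo {
    // Copy of the post's fasterPow: only meaningful for exp >= 1.
    static double fasterPow(double d, int exp) {
        double r = d;
        for (int i = 1; i < exp; i++) {
            r *= d;
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(fasterPow(2.0, 8)); // 256.0, same as Math.pow
        System.out.println(Math.pow(2.0, 8));  // 256.0
        System.out.println(fasterPow(2.0, 0)); // 2.0 -- but Math.pow gives 1.0
        System.out.println(Math.pow(2.0, 0));  // 1.0
    }
}
```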
The speed with this new method is much better and seems comparable with Javascript. Maybe this is an optimization the V8 engine does by default? Who knows.
The slowest method in the program now is FastMath.acos. From highschool I know that acos(x) can also be calculated as atan(sqrt(1-x*x)/x). So I created a own version of acos. When benchmarked, the
different methods: Math.acos(), FastMath.acos() and FastMath.atan(FastMath.sqrt(1-x*x)/x), the result is again surprising:
The custom acos() function is a bit faster than FastMath.acos() and a lot faster than Math.acos(). Using this function in the Mandelbulb renderer gives us the following metric:
So it turns out that with a bit of tweaking we can get the Java version faster than Javascript, but I would have never imagined Java would be slower in the first place. The Chrome V8 guys really did
an amazing job improving the speed of their Javascript VM. Mozilla isn't far behind, they are getting +/- 2200 ms in the benchmark. Which is also faster than Java.Math and FastMath! It seems that V8's math implementation has some optimizations that Java could really use. The tricks used above don't make any difference with the Javascript version.
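As a side note on the acos trick above (my own sketch, not code from the post): the identity acos(x) = atan(sqrt(1 - x*x)/x) as written only holds for positive x; for negative x it needs a + pi correction, and x = 0 happens to work out because atan of infinity is pi/2.

```java
public class AcosViaAtan {
    // atan-based acos: valid directly for x > 0, corrected for x < 0.
    // At x == 0, sqrt(1)/0.0 is +Infinity and atan(+Infinity) == pi/2.
    static double acosViaAtan(double x) {
        double r = Math.atan(Math.sqrt(1.0 - x * x) / x);
        return (x >= 0) ? r : r + Math.PI;
    }

    public static void main(String[] args) {
        double[] xs = {-0.99, -0.5, 0.0, 0.5, 0.99};
        for (double x : xs) {
            System.out.printf("x=%5.2f  atan-based=%.12f  Math.acos=%.12f%n",
                    x, acosViaAtan(x), Math.acos(x));
        }
    }
}
```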
Edit 1: Is Javascript faster than Java?
Well surprisingly in this case it is. With the code 100% the same, using arrays as vector and Math.* the code actually runs faster in my browser!
Edit 2: People have been asking me: What could have been done to make it faster in Java? And, why is it slow?
Well the answer is twofold:
1) The Math libraries are made for "double" in Java. Having a power() method work with doubles is much harder than working with just integer numbers. The only way to optimize this would be to overload the methods with int-variants. This would allow much greater speeds and optimizations. I think Java should add Math.pow(float, int), Math.pow(int, int) etc.
2) All the Math libraries have to work in all situations, with negative numbers, small numbers, large numbers, zero, null etc. They tend to have a lot of checks to cope with all those scenarios. But most of the time you'll know more about the numbers you put in… For example, my fasterPow method will only work with positive integers larger than zero. Maybe you know that the power will always have even numbers…? This all means that the implementation can be improved. The problem is, this can't be easily achieved in a generic (math) library.
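If Java did grow the int-exponent overloads suggested above, the natural implementation would be exponentiation by squaring, which needs O(log exp) multiplications instead of a linear loop. A sketch of mine (the class name PowInt and the method are hypothetical, not an actual JDK API):

```java
public class PowInt {
    // Hypothetical Math.pow(double, int) overload: exponentiation by
    // squaring, O(log exp) multiplications.
    // (Ignores the Integer.MIN_VALUE corner case for brevity.)
    static double pow(double base, int exp) {
        if (exp < 0) {
            return 1.0 / pow(base, -exp);
        }
        double result = 1.0;
        double v = base;
        while (exp > 0) {
            if ((exp & 1) == 1) {
                result *= v; // multiply in the current power-of-two factor
            }
            v *= v;      // square for the next exponent bit
            exp >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(pow(2.0, 10)); // 1024.0
        System.out.println(pow(3.0, 7));  // 2187.0
        System.out.println(pow(2.0, -2)); // 0.25
    }
}
```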
33 Responses to Java vs Javascript: Speed of Math
@Andrey: I've tested it, but on small numbers (let's say 8, which I need…?) it isn't faster on my laptop, it is actually 100 times slower (?!):
int times = 1000000000;
long t = System.currentTimeMillis();
for (int i = 0; i < times; i++) {
    rayMarching.fasterPow(1.0, 8);
}
t = System.currentTimeMillis();
for (int i = 0; i < times; i++) {
    rayMarching.binPow(1.0, 8);
}
//One more time just to make sure we are warmed up:
t = System.currentTimeMillis();
for (int i = 0; i < times; i++) {
    rayMarching.fasterPow(1.0, 8);
}
t = System.currentTimeMillis();
for (int i = 0; i < times; i++) {
    rayMarching.binPow(1.0, 8);
}
This results in:
Can you verify this?
2. Thanks but I may have been mistaken =).
If you look at the algorithmic side, your code is indeed more optimized for your needs so it will be faster than Math.pow(). But my point is that the JVM optimizes "hot" code (branches or methods).
For instance, using the JVM option "-server" and the default parameters, a method will be compiled into native code by the JVM after 10k invocations.
So in your case, you may see that during the first 9.999 times, the Java implementation takes ~5.2 seconds. This behavior is normal since it is warmup time. But after the 10'000th time, it may take
less than 2 seconds once the JVM has optimized it.
You can use the flag -XX:+PrintCompilation if you want the JVM to print a message when it compiles a code block. Until you see no more compilation message, you cannot measure precisely the
duration of your code. See this page for a complete listing of JVM options : http://jvm-options.tech.xebia.fr
Using atan in JS, compared to acos: http://jsperf.com/acos-vs-atan-flavour – in Chrome, the difference is massively in favour of atan (acos 13.3m op/sec vs. atan 24m op/sec), but Chrome has interesting behaviour for trigonometric functions. In Firefox, the difference is marginal, borderline non-existent (11.8 vs. 11). In Opera, there is a big difference in favour of acos (7.5 vs. 6) and in IE9 there is a big difference in favour of atan (8.6 vs 14.2).
So it would speed up things significantly, not really, or to a negative degree, depending on the browser =)
It's no secret that Java's trig functions are slow. The biggest reason for this is that Java favors cross platform compatibility over performance. It would be really really nice if Oracle would
add a JVM flag indicating whether or not performance was desired over strict compatibility for all floating point operations.
5. Andrey says:
Yeah it sucks on small powers, but is much faster with higher ones.
Try this one, it is 10-20% faster (3 multiplications instead of 7):
private static double casePow(double d, int exp) {
    double d2, d4, d8;
    switch (exp) {
        case 0 : return 1;
        case 1 : return d;
        case 2 : return d * d;
        case 3 : return d * d * d;
        case 4 : d2 = d * d; return d2 * d2;
        case 5 : d2 = d * d; return d2 * d2 * d;
        case 6 : d2 = d * d; return d2 * d2 * d * d;
        case 7 : d2 = d * d; d4 = d2 * d2; return d4 * d2 * d;
        case 8 : d2 = d * d; d4 = d2 * d2; return d4 * d4;
        case 9 : d2 = d * d; d4 = d2 * d2; return d4 * d4 * d;
        default: return 1; //use different one
    }
}
6. Olivier says:
For performance testing, use Caliper: http://code.google.com/p/caliper/
Regarding the implementation itself, switch+case is slow. Use if/else instead. See implementation here: http://jafama.svn.sourceforge.net/viewvc/jafama/src/odk/lang/FastMath.java?view=markup
Look for the method named powFast.
7. someone says:
Just for your information, there is a MUCH better way to implement exponentiation on integers, commonly called fast exponentiation. I'm curious how much your code would improve if you substituted
it in. I've written the code below (making some assumptions about java behaving like C, I'm not a java guy). This is a logarithmic time algorithm as opposed to linear; although it only works for
integer powers.
private double fastIntExp(double d, int exp) {
    //if the power is even
    if(exp > 1 && exp % 2 == 0){
        //take the base to half the power and multiply it with itself
        double result = fastIntExp(d,exp/2);
        return result*result;
    //if the power is odd
    }else if(exp>1){
        //same thing but take one out and multiply with the base again at the end
        double result = fastIntExp(d,(exp-1)/2);
        return result*result*d;
    //base cases to stop recursion
    }else if(exp==1){
        return d;
    }else if(exp==0){
        return 1;
    }else{
        //invalid data given, can't do a thing with that
        return -1;
    }
}
8. Michael Tiller says:
One point that wasn't addressed. Did you actually make sure that these different approaches got the same answer? I didn't see this discussed but perhaps I missed it or it was otherwise implied.
If you don't get the same answers, it's hard to make such comparisons.
9. imma says:
Liked the article, thank you. It inspired me to tinker a bit with making a fast-pow in javascript (for integer exponents) & it seems *nearly* as fast as Math.pow on my browsers :-)
In case it's vaguely helpful, relevant or of interest in terms of algorithm :
function fpow3(x, exp) {
    if(exp == 0) return 1;
    if(exp == 1) return x;
    var r = 1, v = x, e = exp;
    while(1) {
        if(e & 1) r = r * v;
        e = e >> 1;
        if(!e) return r; // return asap
        v = v * v; // next exp bit is double the multiplier : x, x*x, (x*x)*(x*x), ((x*x)*(x*x))*((x*x)*(x*x)), etc
    }
}
10. Alexander Ewering says:
You should try actually replacing your Math.pow() stuff directly inline with x*x*x*x*x*x*x if you say that the exponents are only 7 or 8… I bet that will again double the performance without all the function call and loop overhead…
11. Friso says:
Not sure if youβre reading comments on this old article but anyway, here goes.
Just happened to come across this blog, just like I just happened to come across this one which states a possible cause for the difference in speed:
And there is additional restrictions about when numbers can be considered to be integers. V8 has a faster version of Math.pow because the specification that it is implementing allows for a faster
12. I do read comments to old posts; and yes Iβve also come across that point! They are allowed to take some shortcuts in the math libraries that Java cannot do. This could very well explain the
speed difference, but even so it is still impressive!
13. Friso says:
On the speed of java vs javascript: http://developer-blog.cloudbees.com/2013/12/about-paypal-node-vs-java-fight.html
notice that the JavaScript specification allows for an "implementation-dependent approximation" (of unspecified accuracy) while the JVM version has the addition that
"The computed result must be within 1 ulp of the exact result. Results must be semi-monotonic."
And there is additional restrictions about when numbers can be considered to be integers. V8 has a faster version of Math.pow because the specification that it is implementing allows for a faster version.
Wolfram Demonstrations Project
Quantum Mechanics of a Bouncing Ball
The Schrödinger equation can be written , where is the mass of the ball (idealized as a point mass), is the acceleration of gravity, and is the vertical height (with ground level taken as ). For
perfectly elastic collisions, the potential energy at can be assumed infinite: , leading to the boundary condition . Also, we should have as .
The problem, as stated, is not physically realistic on a quantum level, given Earth's value of , because would have to be much too small. But an analogous experiment with a charge in an electric
field is possibly more accessible. We will continue to refer to the gravitational parameters, however.
Redefining the independent variable as , the equation reduces to the simpler form . (The form of the variable is suggested by running
on the original equation). The solution that remains finite as is found to be . (A second solution, , diverges as .)
The eigenvalues can be found from the zeros of the Airy function: , using
. The roots lie on the negative real axis, the first few being approximately , , , , , , …. Defining the constant , the lowest eigenvalues are thus given by , , , and so on. The corresponding
(unnormalized) eigenfunctions are . These are plotted on the graphic.
The semiclassical phase integral gives quite accurate values of the energies. Evaluate these using (the added fraction is , rather than the more common , because one turning point is impenetrable).
The integral is explicitly given by , leading to . The first six numerical values are , compared with the corresponding exact results from the Schrödinger equation .
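The inline formulas on this page were lost in extraction (the blank slots above). For orientation, the standard textbook forms being described, reconstructed by me rather than recovered from the page, are:

$$-\frac{\hbar^2}{2m}\,\psi''(z) + mgz\,\psi(z) = E\,\psi(z), \qquad \psi(0) = 0,$$

and with $z_0 = \left(\hbar^2/2m^2g\right)^{1/3}$ and $u = (z - E/mg)/z_0$ the equation reduces to $\psi''(u) = u\,\psi(u)$, solved by the Airy function $\operatorname{Ai}(u)$. The boundary condition $\psi(0) = 0$ then gives

$$E_n = -a_n \left(\frac{mg^2\hbar^2}{2}\right)^{1/3},$$

where $a_n$ is the $n$-th (negative) zero of $\operatorname{Ai}$.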
D. ter Haar, ed., Problems in Quantum Mechanics, 3rd ed., London: Pion Ltd., 1975, pp. 6, 98-105.
Theory Old_Recdef
(*  Title:      HOL/Library/Old_Recdef.thy
    Author:     Konrad Slind and Markus Wenzel, TU Muenchen
*)
header {* TFL: recursive function definitions *}
theory Old_Recdef
imports Wfrec
keywords
  "recdef" "defer_recdef" :: thy_decl and
  "recdef_tc" :: thy_goal and
  "permissive" "congs" "hints"
begin
subsection {* Lemmas for TFL *}
lemma tfl_wf_induct: "ALL R. wf R -->
(ALL P. (ALL x. (ALL y. (y,x):R --> P y) --> P x) --> (ALL x. P x))"
apply clarify
apply (rule_tac r = R and P = P and a = x in wf_induct, assumption, blast)
done
lemma tfl_cut_apply: "ALL f R. (x,a):R --> (cut f R a)(x) = f(x)"
apply clarify
apply (rule cut_apply, assumption)
done
lemma tfl_wfrec:
"ALL M R f. (f=wfrec R M) --> wf R --> (ALL x. f x = M (cut f R x) x)"
apply clarify
apply (erule wfrec)
done
lemma tfl_eq_True: "(x = True) --> x"
by blast
lemma tfl_rev_eq_mp: "(x = y) --> y --> x"
by blast
lemma tfl_simp_thm: "(x --> y) --> (x = x') --> (x' --> y)"
by blast
lemma tfl_P_imp_P_iff_True: "P ==> P = True"
by blast
lemma tfl_imp_trans: "(A --> B) ==> (B --> C) ==> (A --> C)"
by blast
lemma tfl_disj_assoc: "(a ∨ b) ∨ c == a ∨ (b ∨ c)"
by simp
lemma tfl_disjE: "P ∨ Q ==> P --> R ==> Q --> R ==> R"
by blast
lemma tfl_exE: "∃x. P x ==> ∀x. P x --> Q ==> Q"
by blast
ML_file "~~/src/HOL/Tools/TFL/casesplit.ML"
ML_file "~~/src/HOL/Tools/TFL/utils.ML"
ML_file "~~/src/HOL/Tools/TFL/usyntax.ML"
ML_file "~~/src/HOL/Tools/TFL/dcterm.ML"
ML_file "~~/src/HOL/Tools/TFL/thms.ML"
ML_file "~~/src/HOL/Tools/TFL/rules.ML"
ML_file "~~/src/HOL/Tools/TFL/thry.ML"
ML_file "~~/src/HOL/Tools/TFL/tfl.ML"
ML_file "~~/src/HOL/Tools/TFL/post.ML"
ML_file "~~/src/HOL/Tools/recdef.ML"
setup Recdef.setup
subsection {* Rule setup *}
lemmas [recdef_simp] =
less_Suc_eq [THEN iffD2]
lemmas [recdef_cong] =
if_cong let_cong image_cong INT_cong UN_cong bex_cong ball_cong imp_cong
map_cong filter_cong takeWhile_cong dropWhile_cong foldl_cong foldr_cong
lemmas [recdef_wf] =
[FOM] The Gold Standard
Eray Ozkural examachine at gmail.com
Thu Feb 23 05:42:11 EST 2006
Dear Professor Friedman,
Please consider my following comments.
On 2/23/06, Harvey Friedman <friedman at math.ohio-state.edu> wrote:
> Already it can be perfectly well argued that the series of natural
> numbers is already Platonistic. In fact, one can already argue, if one
> wants, that the number 0 is Platonistic.
I do not see how that follows at all. The series of natural numbers
may be explained perfectly well by at least two other schools
of mathematics: formalism and intuitionism. I prefer psychologism
(cognitivism): The natural number is a psychological abstraction,
it is something that exists purely in your brain. Although in the
real world there are no such things as numbers, there are numbers
in one's brain, and they are quite useful, including the number 0 which
usually means the lack of objects in a discourse. (Also helpful
is the discussion about grammatical mirages, we often
mistake our abstract thoughts for things that do not exist in the
physical world.)
In many of the recent discussions, I have seen platonism
assumed and then platonism concluded. This is an expected
outcome. The main thrust of Weaver's philosophical analysis
cannot be dismissed by mere assumption of platonism. More
would have to be said. One such approach would be an
indispensability argument, e.g., we cannot even formulate physics
without being a real number realist. Unfortunately, that argument
has weak foundations, because for instance we already
know that all of quantum physics can be formulated in computable
mathematics. Likewise for several other physical theories.
In the failure of indispensability, one might try the approach of
uniqueness / independence, which is something that Godel
tried. However, that approach is also flawed, at least for the fact
that there are now many mathematical formulations of the concept
of "set", and unfortunately none of these formulations can be accepted
as a golden standard, or the "true" set theory from a philosophical
standpoint. They are merely competing theories, and there is no
winner. The formalist point of view is indifferent as to the reality of
each of these competing theories, and perhaps it is healthier this
way, since nobody has yet observed a set or a number in the
physical world, let alone an infinite set.
It may be also possible to work out Godel's explanation of
mathematical reality. According to him, mathematical facts
resided in a second order of reality. If I understood him correctly
he thought that the "first order", or sensory reality includes or
respects this second order of reality. He thought that mathematical
intuition can reach out to this second order of reality directly. He
implied also that this could be done by analysis of first order reality
(Godel scholars: correct me if I am wrong in interpretation of this
implication). This second way would be philosophically plausible. A
short explanation is order. Suppose that something like string theory
is a correct theory of everything. Such a theory may prescribe our
"universe bubble" as a solution to some uncanny equation. If that is
exactly true, then the mathematical facts inherent in the solution are
no less physical than a physical law (e.g., gravity). Then, "real"
mathematics is physics, and it is discovered. This point of view might
directly explain the "reality" of mathematics without succumbing to an
indispensability argument. The problem with this view is it does not seem
trivial to conclude that the sequence of natural numbers or the number zero
Finally, there is one other alternative to make sense of the "reality"
of set theory. Godel dismissed a view he termed "Aristotelian
Realism". According to this view, mathematical properties are
etched onto physical events. A simple example would be "the number
of electrons in X atom".
Eray Ozkural (exa), PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
http://www.cs.bilkent.edu.tr/~erayo Malfunct: http://www.malfunct.com
ai-philosophy: http://groups.yahoo.com/group/ai-philosophy
Pardus: www.uludag.org.tr KDE Project: http://www.kde.org
Mplus Discussion >> Mediated Moderation with Count NBI Outcome
David Lovis-McMahon posted on Friday, May 06, 2011 - 1:59 pm
I'm currently working on a mediated moderation model with the following structure:
Sentence on Responsibility(CntR1);
Sentence#1 on Responsibility(InfR1);
Responsibility on NN(R2);
NN on Defect (b1)
Prime (b2)
I used Model Constraint to calculate the simple slopes.
lSS = (b1 + b3*(-.5));!FW
hSS = (b1 + b3*(.5));!SD
Question 1. Is it reasonable to compute the indirect effect of the simple slopes on Sentence?
!Sentence Count
cLIND = (lSS*R2*CntR1);!FW
cHIND = (hSS*R2*CntR1);!SD
!Zero Inflation
zLIND = (lSS*R2*InfR1);!FW
zHIND = (hSS*R2*InfR1);!SD
I've seen elsewhere on the discussion boards that indirect effects for these kinds of count variables can be calculated using Model Constraint, with the S.E. and tests for the product of coefficients
all handled by Mplus. What I haven't seen is this treatment of the simple slopes within Mplus as an indirect effect. That is, can I exponentiate cLIND through zHIND and meaningfully interpret them?
Thank you.
Bengt O. Muthen posted on Saturday, May 07, 2011 - 8:38 am
Sorry, you lost me with the b3*(-.5) expression.
David Lovis-McMahon posted on Saturday, May 07, 2011 - 10:48 am
Sorry for being unclear. Defect (B1), Prime(B2), and PxD(B3) are two experimentally manipulated variables and their interaction: 2(Neurological Defect: Drug Induced, Congenital) x 2(Prime: Free Will,
Scientific Determinism).
-.5 and .5 represent the effect coding that I used for the variables. The two simple slopes represent the effect of the Defect manipulation on NN for the Free Will (-.5) and Scientific Determinism
(.5) Prime manipulations respectively.
Bengt O. Muthen posted on Sunday, May 08, 2011 - 8:27 pm
In general in count models, an indirect effect is an effect on the log rate, which means that the indirect effect can be exponentiated into a rate. The same would hold for the inflation part, I would think.
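As a numeric illustration of this answer, here is a small Python sketch (the coefficient values are made up for illustration, not estimates from the model above) showing how a product-of-coefficients indirect effect on the log rate exponentiates into a rate ratio:

```python
import math

# Hypothetical coefficient values (for illustration only):
# b1 = effect of Defect on NN, b3 = Prime x Defect interaction,
# R2 = Responsibility on NN, CntR1 = Sentence (count part) on Responsibility.
b1, b3, R2, CntR1 = 0.40, 0.30, 0.50, 0.25

# Simple slopes of Defect at the two effect-coded Prime levels (-.5 and +.5)
lSS = b1 + b3 * (-0.5)
hSS = b1 + b3 * (+0.5)

# Indirect effects on the log rate (products of coefficients)
cLIND = lSS * R2 * CntR1
cHIND = hSS * R2 * CntR1

# Exponentiating a log-rate effect gives a multiplicative rate ratio
rate_ratio_low = math.exp(cLIND)
rate_ratio_high = math.exp(cHIND)
```

In Mplus itself, the exponentiation could likewise be done inside MODEL CONSTRAINT with additional NEW parameters, with the delta-method standard errors handled automatically.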
Summary:
This lesson plan focuses on place value concepts.

Main Curriculum Tie: Mathematics Grade 2, Understand place value.
1. Understand that the three digits of a three-digit number represent amounts of hundreds, tens, and ones; e.g., 706 equals 7 hundreds, 0 tens, and 6 ones. Understand the following as special cases:

Curriculum Ties:
• Language Arts, 3rd Grade, Standard 1, Objective 1
• Language Arts, 3rd Grade, Standard 7, Objective 2
• Mathematics Grade 2 [2011]: Understand place value.

Group Size: Large Groups

Materials:
Invitation to Learn
• Place Value Stamps
• Ink Pad
• Numeral cards
• Place value block cards
Places, Everyone
• Place Value Houses
• Single digit card
• Place Value Chart
• White paper
• Numeral Strips
• Overhead markers
Additional Resources
The Mailbox the Idea Magazine for Teachers, The Education Center; August/September
1997. Volume 19, Number 4 (Intermediate)
Place Value (Kid Friendly Computation), by Sarah Morgan
Place Value Quizmo
β‘ Place_Value_Houses.pdf
β‘ Numeral_Strips.pdf
Web Sites
Background For Teachers:
Students should know and understand what basic whole numbers are and what
they look like. They should have some understanding of place value and what it
represents in a whole number. They should be taught specific vocabulary relating to
the lesson before you begin. This should include: numeral, digit, standard form,
expanded form, ones, tens, hundreds, thousands, ten thousands, and horizontal and
vertical lines. It would be very helpful if you could show them pictures or examples
of each vocabulary word listed above. They should be taught and understand how
numbers are used in the world and how learning to read and write numbers
benefits their daily life.
Intended Learning Outcomes:
1. Develop a positive learning attitude toward mathematics.
4. Communicate mathematical ideas and arguments coherently to peers, teachers, and
others using the precise language and notation of mathematics.
6. Represent mathematical ideas in a variety of ways.
Instructional Procedures:
Invitation to Learn
This activity is called "Match Game". Each student will receive a card. On the card
there will be a numeral or place value blocks. Students will walk around and find
their match. Those students with numeral cards will be looking for the person that
has the same value on their card that is represented by place value blocks. Those
students with place value blocks will be looking for the person that has the same
value on their card but is represented by numerals. Once they have found their match
they say the number with their partner. They then find another set of partners and
they both share their numbers with each other. They return to their seats and write
their number in their journal in standard form, expanded form and word form. They can
then use their stamps to put the place value blocks for that number in their journal.
Instructional Procedures Places, Everyone
1. Each student should receive a copy of the Place Value Houses.
2. The teacher should have a copy of the Place Value Houses on an overhead.
3. Have students cut out their Place Value Houses and glue them in their journal.
4. Teach students what each house represents. The first house on the right is called
Units that have the values of ones, tens and hundreds. The second house is called
Thousands with the values of ones, tens and hundreds and the third house is
called Millions with the values of ones, tens and hundreds. Each house will have
a group of three digits in a number. Each group is called a period. Explain to
students that within each period the names are the same: hundreds, tens, and ones.
5. Write a four or five digit number on the overhead or chalkboard. (e.g. 6, 348 or
45, 823). Model how to say this number by pointing to where each number would be
represented on the houses. Explain to students that when reading or writing a
large numeral, it is helpful to break it down into periods and read each period
as a simple one, two or three digit numeral. Also help students see that the
commas between each house represent pauses when reading a numeral, just as they
do in reading text. Whenever a student comes to a comma in reading or writing a
large numeral, he knows to pause and say or write a period name. It is very
important when you are modeling that you do not say "and" when reading the
number. "And" represents a decimal, so when reading 6,348 you would not say six
thousand three hundred and forty eight; you would say six thousand three hundred
forty eight. Model a few numbers to show students how to read large numbers.
After you have modeled it a few times have students begin to say and point to the
numbers that would be represented on their place value house chart.
6. Write a number on the overhead or chalkboard that has a 0 (e.g. 35, 207). Explain
to students that the value of the first digit's place determines how large the
numeral will be and that any empty place to the right of the digit must have a
zero place holder. Read this number to the students and point to where each digit
would be represented on the place value house chart. Explain that even though you
didn't say anything for the zero in the tens place it is very important that they
don't forget to put it in when writing the number. Each place value on any digit
has to be represented by a numeral.
7. Divide the class into two groups.
8. Give each student in each group a single digit card. (0-9)
9. Teacher reads a number (e.g. 12, 543) and the students arrange themselves in the
proper order. Each student in the group will help each other to form the number.
Once they have formed the number they raise their hand to show they have
completed the number. The teacher then asks them to say the number out loud. You
can continue this activity having them create many different numbers with their
cards. (See extensions for more ideas to use with this activity.)
10. After each number they create they can write that number in their journal in
standard form, expanded form and word form. They can also use the place value
stamps to create the number.
11. Next, you will need a Place Value Chart; a blackline master is provided, or your
students can make their own by following these simple steps.
a. Lay a sheet of paper horizontally, fold one side in thirds and crease it and
fold the other side in thirds and crease it.
b. Open up your sheet. Draw lines along the two vertical creases.
c. Measure and draw a horizontal line one inch from the top edge of your sheet.
d. Beginning on the left side, label the three resulting boxes: Millions,
Thousands, and Units.
e. Measure and draw another horizontal line 1/2 inch below the first one.
f. Beginning on the right side of the paper, measure and draw a vertical line
1 1/4 inches from the edge. Extend this line from the first horizontal line down to
the bottom edge of the paper.
g. Measure and draw another vertical line 1 1/4 inches from the first one. Extend
this line from the first horizontal line down to the bottom edge of the paper.
h. From left to right, label the three resulting small boxes "H" (hundreds), "T"
(tens), and "O" (ones).
i. Continue measuring and drawing vertical lines (1 1/4 inches apart) across the
paper so that the thousands and millions sections are exactly like the units
section.
j. Label the three column headings ("H", "T", and "O") in each section.
k. If you want a pocket at the bottom to hold number strips, just fold the bottom
up 1 1/2 inches and tape or glue on each end.
12. Once they have their place value chart made you can laminate it and use overhead
markers and/or use the Place Value Strips.
13. Read a number to them and have them place their Place Value Strips in the correct
order to create the number provided.
14. Next have students go to a journal and write the number in standard form,
expanded form, and word form. They can also use their place value stamps and
stamp them in their journal to create the number given.
15. Students can work with partners and they can create numbers together, or one
partner can say a number and the other would create it on their place value
chart.
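The period idea in steps 4-5, and the standard/expanded-form journal work, can be sketched in code. This is a hypothetical illustration for the teacher's benefit, not part of the lesson materials:

```python
def periods(n):
    """Split a whole number into three-digit periods, as marked by the commas."""
    digits = str(n)
    out = []
    while digits:
        out.insert(0, digits[-3:])  # peel off the rightmost period
        digits = digits[:-3]
    return out

def expanded_form(n):
    """e.g. 6348 -> '6000 + 300 + 40 + 8' (zero place holders are skipped)."""
    s = str(n)
    parts = [d + "0" * (len(s) - 1 - i) for i, d in enumerate(s) if d != "0"]
    return " + ".join(parts)
```

For example, periods(45823) gives ["45", "823"], which mirrors reading "forty-five thousand, eight hundred twenty-three" with a pause at the comma; expanded_form(35207) skips the zero in the tens place, just as step 6 describes.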
Curriculum Extensions/Adaptations/ Integration
β’ For advanced learners extend the place value house activity by using larger
numbers and have students practice saying and writing numbers to the millions.
β’ Some extensions you could use with the single digit cards would be to have each
group make the smallest number with their cards and then have them make the
largest numbers with their cards. Next have them make a number with the value of
8 in the 10,000 place or a number with a value of 3 in the hundreds place. Have
them say and write the numbers that they create.
β’ For advanced learners make another place value chart with four periods which
include units, thousands, millions and billions. They can work with partners and
create different numbers on their own.
β’ For students with special needs have them pair up with a partner and work
together on each of the activities.
β’ You can extend these activities by taking two numbers and comparing the numbers.
Use the symbols <, >, =, and ≠. Teach the vocabulary greater than, less than,
equal to and not equal to.
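The comparison extension amounts to choosing one of the ordering symbols; a throwaway sketch (again not part of the lesson materials, and covering only the three ordering symbols):

```python
def compare_symbol(a, b):
    """Return the symbol that makes 'a ? b' true, as in the extension activity."""
    if a < b:
        return "<"
    if a > b:
        return ">"
    return "="
```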
Family Connections
β’ Students can work with their parents at home by having a parent say a number and
the child writes it down in standard form, expanded form and word form.
β’ Students can take home a copy of the Place Value Houses and the parent can write
down a number and the child would say the number and point to the value of each
numeral on the house.
β’ Students could take home their journal and share their place value activities
with their parents.
β’ Parents can work with students on comparing numbers by writing two different
numerals down and having the child pick the correct symbol that would go between
each numeral.
Assessment Plan:
β’ Teachers should walk around and assess the students to see if they are creating
the numbers she has given them correctly.
β’ Students can say and point to the place value of each numeral, to the teacher, so
she can see if they understand.
β’ Another way to assess would be to check the studentβs journal to see if they
understand the concepts taught.
β’ Have students work together and assess each otherβs journals.
Research Basis
Ball Loewenberg, D., Research on Teaching Mathematics: Making Subject Matter
Knowledge Part of the Equation. Greenwich, CT: JAI Press.
In order to teach mathematics effectively, teachers must understand mathematics
themselves. This article's research shows that past efforts to link teachers'
mathematical knowledge to their mathematics teaching have been largely
unsuccessful. The author researches what it means to understand mathematics and the
role played by such understanding in teaching.
Baxter, J. A., Woodward, J., (2005). Writing in Mathematics: An Alternative Form of
Communication for Academically Low-Achieving Students. Learning Disabilities Research
and Practice. 20(2), 119-135.
In this study they analyze how one teacher used writing to support communication in a
seventh-grade, low-track mathematics class. For one school year, they studied four
low achieving students in the class. Students wrote in journals on a weekly basis.
Using classroom observations and interviews with the teacher, they developed profiles
of the four students, capturing their participation in class discussions. The
profiles highlighted an important similarity among the four students: marginal
participation in both small-group and whole-class discussions. However, their
analysis of the students' journals identified multiple instances where the students
were able to explain their mathematical reasoning, revealing their conceptual
understanding, ability to explain, and skill at representing a problem.
Utah LessonPlans
Created Date :
Jul 08 2008 21:50 PM
help :) multiply expression
β’ one year ago
β’ one year ago
\[2\sqrt{6}\times \sqrt{10}\]

Multiply them separately... \[2\sqrt6 \times \sqrt{10} \implies 2\sqrt{6 \times 10} \implies 2 \sqrt{2 \times 3 \times 2 \times 5} \implies~?\]

Can you finish from there? Can you simplify the radical?

|dw:1344751807375:dw| here is an example. :)

@merp it is better if you do it here by your own..

@LaurenAshley1201 \[\implies 2 \sqrt{\color{blue}{\underline{2 \times 2}} \times 3 \times 5} = ??\]

2 sqrt 60? i dont think im doing it right

You can solve further for \(\sqrt{60}\)..

2 sqrt 15

\[\sqrt{60} = \sqrt{\color{blue}{\underline{2 \times 2}} \times 3 \times 5}\] Can you go further..??

think of a perfect square number that can go into 60.

There is one 2 outside too..

We have broken 60 down into prime factors for you. Remember this? \[\sqrt{x \times x} = x\]Use the same rule...

\[2 \times 2 \sqrt{15} = ??\]

do you mean sqrt (4 * 15) ?

i dont know how to go any further

\(\sqrt{60}\) is how much ??

you have solved it above..

\[\sqrt{60} = \sqrt{\color{blue}{\underline{2 \times 2}} \times 3 \times 5} = ??\]

its like sqrt( 5 x 5) = sqrt (25) = 5.

Where are you having problem @LaurenAshley1201

let me give you an example: \[\sqrt{45}=\sqrt{3*3*5}=\sqrt{9} * \sqrt{5} = 3\sqrt{5}\]

sqrt of 60 is 2 sqrt 15

you got it right now! :)

In square root you can pull out one from a pair like this: \[\sqrt{4 \times 4 \times 5}\] Here you can see there are two 4's. SO you can take one 4 outside and there will remain no 4 in the square root brackets: So it becomes; \[\sqrt{\color{green}{\underline{ 4 \times 4} \times 5}} \implies 4 \sqrt{5}\]

Ok. Do you understand this at the very least? \[\sqrt{3 \times 3} = \sqrt{3^2} = \sqrt{9} = 3\color{red}{\huge??????}\]

yes i do

That's basically what you're doing here, it's just that you can make sense of it without showing all of the steps because you know the end result which is why we know that\[\sqrt{60} = \sqrt{2 \times 2 \times 3 \times 5} = 2\sqrt{15}\]

\[2 \sqrt{60} = 2 \times (2 \sqrt{15}) = ??\]

@Calcmathlete would that be my final answer?

Not quite...you forgot the 2 that was already out there... \[2 \times 2\sqrt{15} = ?\]

not yet, example: \[3*(4\sqrt{5}) = 3*4\sqrt{5} = 12\sqrt{5}\]

i figured it out, with everyones help ! thanks everyone

Glad to help :)

my pleasure :)
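The simplification worked out in this thread can be checked mechanically. A small sketch (mine, not posted in the thread) that pulls the largest square factor out of a radical:

```python
def simplify_sqrt(n):
    """Write sqrt(n) as outside*sqrt(inside) with inside square-free."""
    outside, inside = 1, n
    f = 2
    while f * f <= inside:
        # pull out each repeated square factor f*f
        while inside % (f * f) == 0:
            inside //= f * f
            outside *= f
        f += 1
    return outside, inside
```

simplify_sqrt(60) returns (2, 15), i.e. sqrt(60) = 2*sqrt(15), so 2*sqrt(6)*sqrt(10) = 2*sqrt(60) = 4*sqrt(15), matching the thread's answer.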
Bridge, Probability & Information Robert F. Mackinnon
This is an excellent book. I've been reading Mackinnon's blog for a long time now and decided to check Bridge Probability and Information out as I enjoy his perspective coming from such a strong
mathematics background. He's taken a subject which is incredibly dry and made it "readable" for those who are non-mathematicians. This is a very technical book that uses stories to make it more
What I really liked is that it helped me in an area where I needed improvement -- deducing a picture of the unseen hands using a posteriori probability and good counting methods. Most experts I talk to say
this skill is at the top of their list in terms of importance. The author introduces a different method of counting that the reader is probably not familiar with. It took me quite a bit of practice
to get used to, but once it became a habit I was able to picture the unseen hands much better. This book requires a lot of work, but well worth it.
Probability (a priori and a posteriori) is a very difficult/complex subject for a book to cover in depth, and someone with a combination of Mackinnon's mathematics and bridge background was needed in
my opinion. | {"url":"http://www.bridgebase.com/forums/topic/46637-bridge-probability-information-robert-f-mackinnon/page__pid__577170","timestamp":"2014-04-16T22:01:28Z","content_type":null,"content_length":"149700","record_id":"<urn:uuid:2b2e2ff8-3f69-4f65-9713-9b61cfac5d9e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section: C Library Functions (3)
vpWindowPHIGS - multiply the projection matrix by a PHIGS viewing matrix
#include <volpack.h>
vpWindowPHIGS(vpc, vrp, vpn, vup, prp, umin, umax, vmin, vmax,
front, back, projection_type)
vpContext *vpc;
vpVector3 vrp, vpn, vup;
vpVector3 prp;
double umin, umax, vmin, vmax, front, back;
int projection_type;
vpc VolPack context from vpCreateContext.
vrp Point specifying the view reference point.
vpn Vector specifying the view plane normal.
vup Vector specifying the view up vector.
prp Point specifying the projection reference point (in view reference coordinates).
umin Left coordinate of clipping window (in view reference coordinates).
umax Right coordinate of clipping window (in view reference coordinates).
vmin Bottom coordinate of clipping window (in view reference coordinates).
vmax Top coordinate of clipping window (in view reference coordinates).
Coordinate of the near depth clipping plane (in view reference coordinates).
back Coordinate of the far depth clipping plane (in view reference coordinates).
Projection type code. Currently, must be VP_PARALLEL.
vpWindowPHIGS is used to multiply the current projection matrix by a viewing and projection matrix specified by means of the PHIGS viewing model. This model combines specification of the viewpoint,
projection and clipping parameters. The resulting matrix is stored in the projection transformation matrix. Since both the view and the projection are specified in this one matrix, normally the view
transformation matrix is not used in conjunction with vpWindowPHIGS (it should be set to the identity). Currently, only parallel projections may be specified. For an alternative view specification
model, see vpWindow(3).
Assuming that the view transformation matrix is the identity, the matrix produced by vpWindowPHIGS should transform world coordinates into clip coordinates. This transformation is specified as
follows. First, the projection plane (called the view plane) is defined by a point on the plane (the view reference point, vrp) and a vector normal to the plane (the view plane normal, vpn). Next, a
coordinate system called the view reference coordinate (VRC) system is specified by means of the view plane normal and the view up vector, vup. The origin of VRC coordinates is the view reference
point. The basis vectors of VRC coordinates are:
u = v cross n
v = the projection of vup parallel to vpn onto the view plane
n = vpn
This coordinate system is used to specify the direction of projection and the clipping window. The clipping window bounds in the projection plane are given by umin, umax, vmin and vmax. The direction
of projection is the vector from the center of the clipping window to the projection reference point (prp), which is also specified in VRC coordinates. Finally, the front and back clipping planes are
given by n=front and n=back in VRC coordinates.
For a more detailed explanation of this view specification model, see Computer Graphics: Principles and Practice by Foley, vanDam, Feiner and Hughes.
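The view reference coordinate construction above is just a cross product plus a projection. A self-contained numeric sketch of that math (plain Python, independent of the VolPack API):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def vrc_basis(vpn, vup):
    """n = vpn; v = projection of vup parallel to vpn onto the view plane;
    u = v cross n, as in the VRC definition above."""
    n = vpn
    k = dot(vup, n) / dot(n, n)
    v = tuple(uc - k*nc for uc, nc in zip(vup, n))  # remove the component along n
    u = cross(v, n)
    return u, v, n
```

With the usual defaults (vpn along +z, vup along +y) this yields the mutually orthogonal basis u = (1, 0, 0), v = (0, 1, 0), n = (0, 0, 1); when vup is parallel to vpn, v degenerates to zero, which is one of the configurations vpWindowPHIGS rejects with VPERROR_SINGULAR.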
The current matrix concatenation parameters can be retrieved with the following state variable codes (see vpGeti(3)): VP_CONCAT_MODE.
The normal return value is VP_OK. The following error return values are possible:
The clipping plane coordinates are invalid (umin >= umax, etc.).
The type argument is invalid.
The vectors defining view reference coordinates are not mutually orthogonal, or the projection reference point lies in the view plane.
VolPack(3), vpCreateContext(3), vpCurrentMatrix(3), vpWindow(3)
This document was created by man2html, using the manual pages.
Time: 21:58:16 GMT, April 16, 2011 | {"url":"http://www.makelinux.net/man/3/W/WindowPHIGS","timestamp":"2014-04-20T23:36:42Z","content_type":null,"content_length":"12524","record_id":"<urn:uuid:92e100d4-80ac-4028-b972-4aa8040477b3>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
On minimizing the maximum eigenvalue of a symmetric matrix
Results 1 - 10 of 63
- SIAM REVIEW , 1996
"... ..."
, 2005
"... We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite
matrices. We show that the approach is very efficient for graph bisection problems, such as max-cut. Other appli ..."
Cited by 207 (17 self)
We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices.
We show that the approach is very efficient for graph bisection problems, such as max-cut. Other applications include max-min eigenvalue problems and relaxations for the stable set problem.
, 2003
"... Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice,
particularly in engineering design, and are amenable to a rich blend of classical mathematical techniques and contemp ..."
Cited by 92 (13 self)
Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice, particularly in
engineering design, and are amenable to a rich blend of classical mathematical techniques and contemporary optimization theory. This essay presents a personal choice of some central mathematical
ideas, outlined for the broad optimization community. I discuss the convex analysis of spectral functions and invariant matrix norms, touching briefly on semidefinite
representability, and then outlining two broader algebraic viewpoints based on hyperbolic polynomials and Lie algebra. Analogous nonconvex notions lead into eigenvalue
concerns stability, for polynomials, matrices, and associated dynamical systems, ending with a section on robustness. The powerful and elegant language of nonsmooth analysis appears throughout, as a
unifying narrative thread.
- SIAM J. Optimization , 1991
"... Optimization problems involving eigenvalues arise in many applications. Let x be a vector of real parameters and let A(x) be a continuously differentiable symmetric matrix function of x. We
consider a particular problem which occurs frequently: the minimization of the maximum eigenvalue of A(x), ..."
Cited by 83 (4 self)
Optimization problems involving eigenvalues arise in many applications. Let x be a vector of real parameters and let A(x) be a continuously differentiable symmetric matrix function of x. We consider
a particular problem which occurs frequently: the minimization of the maximum eigenvalue of A(x), subject to linear constraints and bounds on x. The eigenvalues of A(x) are not differentiable at
points x where they coalesce, so the optimization problem is said to be nonsmooth. Furthermore, it is typically the case that the optimization objective tends to make eigenvalues coalesce at a
solution point. There are three main purposes of the paper. The first is to present a clear and self-contained derivation of the Clarke generalized gradient of the max eigenvalue function in terms of
a "dual matrix". The second purpose is to describe a new algorithm, based on the ideas of a previous paper by the author (SIAM J. Matrix Anal. Appl. 9 (1988) 256-268), which is suitable for solving
- Linear Algebra Appl , 1993
"... We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may
appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fr ..."
Cited by 65 (14 self)
We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear
specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fractional programs. Many problems arising in control theory can be cast in this form. The problem
is nondifferentiable but quasiconvex, so methods such as Kelley's cutting-plane algorithm or the ellipsoid algorithm of Shor, Nemirovsky, and Yudin are guaranteed to minimize it. In this paper we
describe relevant background material and a simple interior point method that solves such problems more efficiently. The algorithm is a variation on Huard's method of centers, using a self-concordant
barrier for matrix inequalities developed by Nesterov and Nemirovsky. (Nesterov and Nemirovsky have also extended their potential reduction methods to handle the same problem [NN91b].) Since the
problem is quasiconvex but not convex, devising a non-heuristic stopping criterion (i.e., one that guarantees a given accuracy) is more difficult than in the convex case. We describe several
non-heuristic stopping criteria that are based on the dual of a related convex problem and a new ellipsoidal approximation that is slightly sharper, in some cases, than a more general result due to
Nesterov and Nemirovsky. The algorithm is demonstrated on an example: determining the quadratic Lyapunov function that optimizes a decay rate estimate for a differential inclusion.
, 1993
"... This paper gives max characterizations for the sum of the largest eigenvalues of a symmetric matrix. The elements which achieve the maximum provide a concise characterization of the generalized
gradient of the eigenvalue sum in terms of a dual matrix. The dual matrix provides the information requi ..."
Cited by 64 (4 self)
This paper gives max characterizations for the sum of the largest eigenvalues of a symmetric matrix. The elements which achieve the maximum provide a concise characterization of the generalized
gradient of the eigenvalue sum in terms of a dual matrix. The dual matrix provides the information required to either verify first-order optimality conditions at a point or to generate a descent
direction for the eigenvalue sum from that point, splitting a multiple eigenvalue if necessary. A model minimization algorithm is outlined, and connections with the classical literature on sums of
eigenvalues are explained. Sums of the largest eigenvalues in absolute value are also addressed.
- SIAM Journal on Optimization , 1998
"... This work concerns primal-dual interior-point methods for semidefinite programming (SDP) that use a search direction originally proposed by Helmberg-Rendl-Vanderbei-Wolkowicz [5] and
Kojima-Shindoh-Hara [11], and recently rediscovered by Monteiro [15] in a more explicit form. In analyzing these meth ..."
Cited by 55 (1 self)
This work concerns primal-dual interior-point methods for semidefinite programming (SDP) that use a search direction originally proposed by Helmberg-Rendl-Vanderbei-Wolkowicz [5] and
Kojima-Shindoh-Hara [11], and recently rediscovered by Monteiro [15] in a more explicit form. In analyzing these methods, a number of basic equalities and inequalities were developed in [11] and also
in [15] through different means and in different forms. In this paper, we give a concise derivation of the key equalities and inequalities for complexity analysis along the exact line used in linear
programming (LP), producing basic relationships that have compact forms almost identical to their counterparts in LP. We also introduce a new formulation of the central path and variable-metric
measures of centrality. These results provide convenient tools for deriving polynomiality results for primal-dual algorithms extended from LP to SDP using the aforementioned and related search
directions. We present examples...
, 1996
"... A spectral function of a Hermitian matrix X is a function which depends only on the eigenvalues of X , 1 (X) 2 (X) : : : n (X), and hence may be written f( 1 (X); 2 (X); : : : ; n (X)) for some
symmetric function f . Such functions appear in a wide variety of matrix optimization problems. We ..."
Cited by 48 (13 self)
A spectral function of a Hermitian matrix X is a function which depends only on the eigenvalues of X, λ1(X) ≥ λ2(X) ≥ ... ≥ λn(X), and hence may be written f(λ1(X), λ2(X), ..., λn(X)) for some symmetric function f. Such functions appear in a wide variety of matrix optimization problems. We give a simple proof that this spectral function is differentiable at X if and only if the function f is differentiable at the vector λ(X), and we give a concise formula for the derivative. We then apply this formula to deduce an analogous expression for the Clarke generalized gradient of the spectral
function. A similar result holds for real symmetric matrices. 1 Introduction and notation Optimization problems involving a symmetric matrix variable, X say, frequently involve symmetric functions of
the eigenvalues of X in the objective or constraints. Examples include the maximum eigenvalue of X, or log(det X) (for positive definite X), or eigenvalue constraints such as positive semidefinit...
, 1995
"... This work concerns primal-dual interior-point methods for semidefinite programming (SDP) that use a linearized complementarity equation originally proposed by Kojima, Shindoh and Hara [11], and
recently rediscovered by Monteiro [15] in a more explicit form. In analyzing these methods, a number of ba ..."
Cited by 47 (0 self)
This work concerns primal-dual interior-point methods for semidefinite programming (SDP) that use a linearized complementarity equation originally proposed by Kojima, Shindoh and Hara [11], and
recently rediscovered by Monteiro [15] in a more explicit form. In analyzing these methods, a number of basic equalities and inequalities were developed in [11] and also in [15] through different
means and in different forms. In this paper, we give a very short derivation of the key equalities and inequalities along the exact line used in linear programming (LP), producing basic relationships
that have highly compact forms almost identical to their counterparts in LP. We also introduce a new definition of the central path and variable-metric measures of centrality. These results provide
convenient tools for extending existing polynomiality results for many, if not most, algorithms from LP to SDP with little complication. We present examples of such extensions, including the
long-step infeasible-...
- SIAM Journal on Optimization , 1996
"... There is growing interest in optimization problems with real symmetric matrices as variables. Generally the matrix functions involved are spectral: they depend only on the eigenvalues of the
matrix. It is known that convex spectral functions can be characterized exactly as symmetric convex functions ..."
Cited by 45 (20 self)
There is growing interest in optimization problems with real symmetric matrices as variables. Generally the matrix functions involved are spectral: they depend only on the eigenvalues of the matrix.
It is known that convex spectral functions can be characterized exactly as symmetric convex functions of the eigenvalues. A new approach to this characterization is given, via a simple Fenchel
conjugacy formula. We then apply this formula to derive expressions for subdifferentials, and to study duality relationships for convex optimization problems with positive semidefinite matrices as
variables. Analogous results hold for Hermitian matrices. Key Words: convexity, matrix function, Schur convexity, Fenchel duality, subdifferential, unitarily invariant, spectral function, positive
semidefinite programming, quasi-Newton update. AMS 1991 Subject Classification: Primary 15A45 49N15 Secondary 90C25 65K10 1 Introduction A matrix norm on the n × n complex matrices is called
unitarily inv...
An Air-filled Capacitor Consists Of Two Parallel ... | Chegg.com
An air-filled capacitor consists of two parallel plates, each with an area of 7.6 cm^2, separated by a distance of 1.90 mm.
(a) If a 24.0 V potential difference is applied to these plates, calculate the electric field between the plates.
(b) What is the surface charge density?
(c) What is the capacitance?
(d) Find the charge on each plate.
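The four parts come down to one-line formulas for a parallel-plate capacitor: E = V/d, sigma = eps0 * E, C = eps0 * A/d, and Q = C * V. A quick Python sketch with the stated numbers (the variable names are mine):

```python
EPS0 = 8.854e-12      # vacuum permittivity, F/m

A = 7.6e-4            # plate area: 7.6 cm^2 converted to m^2
d = 1.90e-3           # plate separation in m
V = 24.0              # applied potential difference in V

E = V / d             # (a) electric field between the plates, V/m
sigma = EPS0 * E      # (b) surface charge density, C/m^2
C = EPS0 * A / d      # (c) capacitance, F
Q = C * V             # (d) charge on each plate, C

print(f"E = {E:.3g} V/m, sigma = {sigma:.3g} C/m^2")
print(f"C = {C:.3g} F, Q = {Q:.3g} C")
```

This gives roughly E ≈ 1.26e4 V/m, sigma ≈ 112 nC/m^2, C ≈ 3.54 pF, and Q ≈ 85 pC.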
Posts from February 2008 on The Unapologetic Mathematician
From Alexandre Borovik I find this video of someone solving the "order seven" Rubik's Cube.

I'm not about to sit down and work up a solution like we did before, but it shouldn't be impossible to repeat the same sort of analysis. I will point out, however, that the solver in this video is making heavy use of both of our solution techniques: commutators and a tower of nested subgroups.

The nested subgroups are obvious. As the solution progresses, more and more structure becomes apparent, and is preserved as the solution continues. In particular, the solver builds up the centers of faces and then slips to the subgroup of maneuvers which leaves such "big centers" fixed in place. Near the end, almost all of the moves are twists of the outer faces, because these are assured not to affect anything but the edge and corner cubies.

The commutators take a quicker eye to spot, but they're in there. Watch how many times he'll do a couple twists, a short maneuver, and then undo those couple twists. Just as we used such commutators, these provide easy generalizations of basic cycles, and they form the heart of this solver's algorithm.

Alexandre asked a question about the asymptotic growth of the "worst assembly time" for the $n\times n\times n$ cube. What this is really asking is for the "diameter" of the $n$th Rubik's group $G_n$. I don't know offhand what this would be, but here's a way to get at a rough estimate.

First, find a similar expression for the structure of $G_n$ as we found before for $G_3$. Then what basic twists do we have? For $n=3$ we had all six faces, which could be turned either way, and we let the center slices be fixed. In general we'll have $\lfloor\frac{n}{2}\rfloor$ slices in each of six directions, each of which can be turned either way, for a total of $12\lfloor\frac{n}{2}\rfloor$ generators (and their inverses). But each generator should (usually) be followed by a different one, and definitely not by its own inverse. Thus we can estimate the number of words of length $l$ as $\left(12\lfloor\frac{n}{2}\rfloor-2\right)^l$. Then the structure of $G_n$ gives us a total size of the group, and the diameter should be about $\log_{\left(12\lfloor\frac{n}{2}\rfloor-2\right)}(|G_n|)$. Notice that for $n=3$ this gives us $20$, which isn't far off from the known upper bound of $26$ quarter-turns.
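As a sanity check on that last figure (this check is mine, not from the original post): the order of the 3×3×3 cube group is the standard value $|G_3|=43{,}252{,}003{,}274{,}489{,}856{,}000$, and taking the logarithm base $12\lfloor 3/2\rfloor-2=10$ does land right around $20$. A quick Python sketch:

```python
import math

# Standard order of the 3x3x3 Rubik's cube group.
G3 = 43_252_003_274_489_856_000

n = 3
branching = 12 * (n // 2) - 2  # rough count of "useful" next generators: 10

# Diameter estimate: log base `branching` of the group order.
estimate = math.log(G3) / math.log(branching)
print(estimate)  # about 19.6, i.e. roughly 20 quarter-turns
```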
I'll get back to deconstructing comics another time. For now, I want to push on with some actual mathematics.

After much blood, toil, tears, and sweat, we've proven the Fundamental Theorem of Calculus. So what do we do with it? The answer's in this diagram:

This is sort of schematic rather than something we can interpret literally.

On the left we have real-valued functions (we're being vague about their domains) and collections of "signed" points. We also have a way of pairing a function with a collection of points: evaluate the function at each point, and then add up all the values or their negatives, depending on the sign of the point. On the right we also have real-valued functions, but now we consider intervals of the real line. We have another way of pairing a function with an interval: integration!

At the top of the diagram, we can take a function and differentiate it to get back another function. At the bottom, we can take an interval and get a collection of signed points by moving to the boundary. The interval $\left[a,b\right]$ has the boundary points $\{a^-,b^+\}$, where we consider $a$ to be "negatively signed".

Now, what does the FToC tell us? If we start with a function $F$ in the upper left and an interval $\left[a,b\right]$ in the lower right, we have two ways of trying to pair them off. First, we could take the derivative of $F$ and then integrate it from $a$ to $b$ to get $\int_a^b F'(x)dx$. On the other hand, we could take the boundary of the interval and add up the function values along the boundary to get $F(b)-F(a)$. The FToC tells us that these two give us the same answer!

To write this in a diagram seems a little much, but keep the diagram in mind. We'll come back to it later. For now, though, we can use it to understand how to use the FToC to handle integration.

Say we have a function $f$ and an interval $\left[a,b\right]$, and we need to find $\int_a^bf(x)dx$. We've got these big, messy Riemann sums (or Darboux sums), and there's a lot of work to compute the integral by hand. But notice that the integral is living on the right side of the diagram. If we could move it over to the left, we'd just have to evaluate a function twice and add up the results.

Moving the interval to the left of the diagram is easy: we can just read off the boundary. Moving the function is harder. What we need is to find an antiderivative $F(x)$ so that $F'(x)=f(x)$. Then we move to the left of the diagram by switching attention from $f$ to $F$. Then we can evaluate $F(b)-F(a)$ and get exactly the same value as the integral we set out to calculate. So if we want to find integrals, we'd better get good at finding antiderivatives!

This has an immediate consequence. Our basic rules of antiderivatives carry over to give some basic rules for integration. In particular, we know that integrals play nicely with sums and scalar multiples.
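As a concrete illustration of crossing the diagram (my example, not the post's): to compute $\int_0^1 x^2\,dx$ we pick the antiderivative $F(x)=x^3/3$ and evaluate it at the boundary, while a Riemann sum only approximates that same value. A Python sketch:

```python
def f(x):
    return x**2

def F(x):            # an antiderivative of f
    return x**3 / 3

a, b = 0.0, 1.0

# Left side of the diagram: evaluate F at the (signed) boundary points.
ftoc_value = F(b) - F(a)

# Right side: a Riemann sum over n equal subintervals, tagged at midpoints.
n = 1000
dx = (b - a) / n
riemann = sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

print(ftoc_value, riemann)   # 1/3, and a close approximation of it
```

The boundary evaluation is exact (up to rounding), while the tagged sum only converges to it as the partition is refined.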
Okay, usually I'm all behind XKCD, but today's installment is a bit of a head-scratcher.

The title seems especially ill-chosen. I mean, I know that Randall's not a doctor of linguistics, but he's usually pretty on the ball. Clearly he can't mean the title as a normative statement, but he also has to understand that "how it works" will commonly parse as "how it should work". The fact that there's no comeuppance for the jerk doesn't help here. Without further comment, it's easy to read the comic as an endorsement of this attitude.

The other thing that leaves a bad taste in my mouth is that the guy on the left is not a clearly-defined character we all know to be unpleasant already. Yes, I know this is arguing semiotics, but there's a reason Goofus and Gallant comics are so easily read: a generic character will be interpreted as a generic person. Their behavior is then also taken as generic. Putting the Hat Guy in there would go a long way towards making this not seem like an endorsement.

And then the details are off. The characters are looking at a calculus problem. I don't know anyone (at least any instructor) in this day and age who thinks like this at the calculus level. As far as I know, the psychological damage is usually done by this point. The attitude comes in during grade-school, so an arithmetic problem (and younger characters at the board) would be more appropriate. That is, unless Randall is asserting that this attitude is endemic (remember generic character => generic person) among calculus instructors.

In that case I really have to disagree with him in the strongest possible terms. But again, there's no further comment, and the whole thing just feels disappointing as a result.
Well, we've certainly had a lively time the last few days. Regular commenter-turned-complainer Michael Livshits kicked it off by noting that I presented

The same pathetic proof that mixes apples and oranges, and makes the reader believe that MVT has anything to do with FTC!

Then came some back-and-forth. I argued that there are many approaches, and due to different motivations we've chosen different ones. Michael argued that I was part of some "Church of Limitology" which "indoctrinates" calculus students, and that my proofs "suck", are "trashy", and are "in bad taste". My point that this is not the approach I actually take in a classroom setting was ignored.

However, he did have some points. One of them was that we can weaken the theorem to only assume that the function $f$ is continuous at the point $x$, and my proof assumed way too much to say the function is continuous everywhere. But let's consider where I actually used continuity. First it shows up in invoking the Integral Mean Value Theorem, but there I really only need it to say there's a maximum and a minimum, so the Darboux sums work out. An integrable function still manages to satisfy this condition. Then I use continuity to show that $\lim\limits_{c\rightarrow x}f(c)=f(x)$, which really only needs continuity at $x$. In fact, my proof already works in Michael's extended context.

He also tried presenting his own proof of the crucial step. He argues by continuity at $x$ that for any $\epsilon$ there is a $\delta$ so that $|f(x+\Delta x)-f(x)|<\epsilon$ when $|\Delta x|<\delta$. This fact then shows that $\left|\int_x^{x+\Delta x}\left(f(t)-f(x)\right)dt\right|<\epsilon|\Delta x|$, and so the limit (there's that awful word again!) that I claimed in the post works out.

But what does his proof really mean? Go on, try and draw a picture. It's saying that the area we add by integrating from $x$ to $x+\Delta x$ differs from the area we'd add if we just used a constant height of $f(x)$ by less than any constant multiple $\epsilon$ of the width $\Delta x$. And that means... it's obscure to me, at least.

On the other hand, my proof says that the area function changes by some amount as we go from $x$ to $x+\Delta x$, which means there's some average ("mean") rate of change over that interval. At some point along the way, the derivative actually attains that mean value, and as we contract the interval we push that middle point down to $x$. Now that makes sense to me.

Now, the real genius came later, after I sewed the two parts of the FToC together. Eventually, Michael said:

By MVT I meant the one for continuous function, that it hits the zero if it changes sign. Is it the one you were talking about, or you were talking about the Lagrange theorem (= MVT for the derivative)? I'm a bit confused. Well, either way it's not too important.

And here it all runs off the rails. This whole time he hasn't actually been reading a single thing about my proof, and evidently he hasn't read the proofs in the calculus textbooks he so despises. He doesn't even know which theorem I'm invoking! And it is important, because the different theorems say vastly different things.

Now it's plain as day that Michael is a crank, pushing his pet theories while remaining so embittered at the "system" that "indoctrinates" students against him. Either that or he's been trolling. The actual merits of my own proof (which I hope I've shown above to meet his tests) never mattered at all. I do hope he will set up his own weblog to present all his work in his own space, and then interested readers can judge for themselves the merits and demerits of different proofs.

As for this discussion, it's closed. I have a proof, and Michael has a proof, and they both work. Our proofs emphasize different aspects of the theorem, and we choose between them depending on what we want to highlight for our current audience. Despite all his ranting, neither one is "the right way" or "the wrong way", independent of context. I'm glad to hear alternative approaches here, since they might highlight points that I missed. But as a word to future ranters: don't even try to use my weblog as your soapbox. That sort of behavior really is trashy, and in bad taste.

Besides, I'm the Dennis Miller around here.

[UPDATE]: I've come to a decision, since the war seems to rage on unabated, and Mr. Livshits refuses to take the olive branches of equanimity I've been offering since the beginning. As of midnight (Central Standard Time) tonight, this is over. Mr. Livshits goes in the kill file, and I wash my hands of the whole business. I'm sure he'll cry foul, and oppression, and maybe he's right. However, this whole mess just distracts from my work here, and I'm sick of it. From sideline conversations with numerous non-commenting readers, I'm not the only one.

I've made my case, and tried over and over to say that ultimately the whole debate comes down to aesthetics. His approach has its merits, as does mine. He really dislikes my approach, so much so that he's willing to fight tooth and nail. Ultimately, I really don't care to fight this any more. But since I can't seem to continue this project without having my "mathematical taste" insulted left and right, I'm using my authority as the owner of this space to cut off debate. This does not continue here.

If Mr. Livshits wants to continue his tirades, he's free to set up his own weblog, as I've encouraged him to do time and again. He can even continue to read and post to his own space in parallel to my coverage. If he's right and a significant majority of my readers want to hear his side, he'll have a built-in audience ready and waiting, and he's welcome to it. Just like the sky, there's a lot of blogosphere out there. Of course, he'll eventually have to come up with something to fill his space, since I'm not spending the rest of my life here on calculus and elementary analysis.
So we've seen two sides of the FToC: the first part, which says that given a continuous function $f:\left[a,b\right]\rightarrow\mathbb{R}$ we can integrate and differentiate to get our function back:

$\displaystyle\frac{d}{dx}\int\limits_a^xf(t)dt=f(x)$

and the second part, which says that given a differentiable function $F:\left[a,b\right]\rightarrow\mathbb{R}$ whose derivative is the continuous function $f$, we can integrate to get (part of) our function back again:

$\displaystyle F(b)-F(a)=\int\limits_a^bf(x)dx$

Now, we proved these two sides in very different ways, but it turns out that we can get from one to the other.

Let's assume the first part holds. Then we take the function $F$ and define $f(x)=F'(x)$ as its derivative. The first part of the theorem tells us that we know a function whose derivative is $f$: the function defined by $G(x)=\int_a^xf(t)dt$. And we know that any two functions with the same derivative must differ by a constant! That is, there is some real number $C$ with $F(x)=G(x)+C$. Using this to evaluate $F(b)-F(a)$ we find:

$\displaystyle F(b)-F(a)=(G(b)+C)-(G(a)+C)=G(b)-G(a)=\int\limits_a^bf(x)dx$

where the last step uses the fact that $G(a)=\int_a^af(t)dt=0$. Which gives us the second part of the theorem.

On the other hand, what if we assume the second part of the theorem holds? Then we start with a continuous function $f:\left[a,b\right]\rightarrow\mathbb{R}$. Given $x\in\left[a,b\right]$, the function is continuous on the subinterval $\left[a,x\right]$, and so the second part of the FToC says that $F(x)-F(a)=\int_a^xf(t)dt$ for any antiderivative $F$ of $f$. That is, the integral in the first part of the FToC differs by a constant ($F(a)$) from the function $F$ we assumed to be an antiderivative of $f$. Thus it must itself be an antiderivative of $f$.

So each half of the Fundamental Theorem implies the other, and we can prove either one first before immediately deriving the other.
And now we come to the second part of the FToC. This takes the first part and flips it around.

We again start with a continuous function $f:\left[a,b\right]\rightarrow\mathbb{R}$, but now we take any antiderivative $F$, so that $f(x)=F'(x)$. Then the FToC asserts that

$\displaystyle F(b)-F(a)=\int\limits_a^bf(x)dx$

Before, we differentiated a function we got by integrating to get back where we started. Now we're integrating a function we get by differentiating, and again get back where we started. Integration and differentiation are two sides of the same coin.

Let's consider a partition of $\left[a,b\right]$ with points $a=x_0,x_1,...,x_{n-1},x_n=b$. Then we see that $F(b)-F(a)=F(x_n)-F(x_0)$. We can add and subtract the value of $F$ at each of the intermediate points to see that

$\displaystyle F(b)-F(a)=F(x_n)-F(x_{n-1})+F(x_{n-1})-...-F(x_1)+F(x_1)-F(x_0)$

$\displaystyle F(b)-F(a)=\sum\limits_{i=1}^nF(x_i)-F(x_{i-1})$

Now the Differential Mean Value Theorem tells us that there's a point $c_i\in\left[x_{i-1},x_i\right]$ so that $F(x_i)-F(x_{i-1})=(x_i-x_{i-1})F'(c_i)$. And we assumed that $F'(c_i)=f(c_i)$, so we have

$\displaystyle F(b)-F(a)=\sum\limits_{i=1}^n(x_i-x_{i-1})f(c_i)$

But this is a Riemann sum for the partition we chose, using the points $c_i$ as the tags. Since every partition, no matter how fine, has such a Riemann sum, the integral must take this value, and the second part of the FToC holds.
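A hand-picked example makes the argument concrete (the example is mine): for $F(x)=x^2$ we have $f(x)=2x$, and the DMVT point in each subinterval $\left[x_{i-1},x_i\right]$ is exactly its midpoint, so the tagged Riemann sum collapses to $F(b)-F(a)$ on the nose for any partition. A quick Python sketch:

```python
def F(x):
    return x**2

def f(x):            # F' = f
    return 2 * x

a, b, n = 1.0, 4.0, 7          # any partition size works here
xs = [a + (b - a) * i / n for i in range(n + 1)]

# Tag each subinterval at its DMVT point, which for F(x) = x^2
# is exactly the midpoint of the subinterval.
riemann = sum(f((xs[i] + xs[i - 1]) / 2) * (xs[i] - xs[i - 1])
              for i in range(1, n + 1))

print(riemann, F(b) - F(a))    # both equal 15, up to rounding
```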
Today we get to the Fundamental Theorem of Calculus, which comes in two parts. This theorem is essential, in that it shows how the two seemingly-dissimilar fields of integral and differential calculus are actually two sides of the same coin. From this point, most of the basic theory of integration comes down to finding the "mirror image" of facts about differentiation.

First, let's start with some continuous function $f:\left[a,b\right]\rightarrow\mathbb{R}$ and define a new function by $F(x)=\int_a^xf(t)dt$. Now the fundamental theorem tells us that this new function is differentiable, and its derivative is the function we started with! That is:

$\displaystyle F'(x)=\frac{d}{dx}\int\limits_a^xf(t)dt=f(x)$

To see this, let's consider the difference $F(x+\Delta x)-F(x)$. The first term here is the integral $\int_a^{x+\Delta x}f(t)dt$. Then we can split this interval up to get the sum of the integrals $\int_a^{x}f(t)dt+\int_x^{x+\Delta x}f(t)dt$. But the first part here is just $F(x)$, which we're about to subtract off. Then the difference quotient is $\frac{1}{\Delta x}\int_x^{x+\Delta x}f(t)dt$. The derivative $F'(x)$ will be the limit of this difference quotient as $\Delta x$ goes to ${0}$.

So now let's use the Integral Mean Value Theorem to get at the integral here. It tells us that there's some $c$ between $x$ and $x+\Delta x$ with $f(c)=\frac{1}{\Delta x}\int_x^{x+\Delta x}f(t)dt$: the difference quotient exactly! And as $\Delta x$ gets smaller and smaller, $c$ gets squeezed closer and closer to $x$. And because $f$ is continuous, we find that $\lim\limits_{c\rightarrow x}f(c)=f(x)$. Presto!

A very common metaphor here is to think of a carpet whose width at a point $x$ along its length is $f(x)$. Then its total area from the starting point $a$ up to $x$ is the integral $\int_a^xf(t)dt$. How fast is the area increasing as we unroll more carpet? As we unroll $dx$ more length we get $f(x)dx$ more area, and so the derivative of the area is the width of the carpet.
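The carpet picture can also be watched numerically (this example is mine, not from the post): approximate the area function $F(x)=\int_0^x\cos t\,dt$ by a fine Riemann sum, and the difference quotient of $F$ sits right on top of $\cos x$. A sketch:

```python
import math

def f(t):
    return math.cos(t)

def F(x, n=100_000):
    """Midpoint-rule approximation of the area function F(x) = int_0^x f(t) dt."""
    dx = x / n
    return sum(f((i + 0.5) * dx) * dx for i in range(n))

x, h = 1.0, 1e-4
diff_quotient = (F(x + h) - F(x)) / h
print(diff_quotient, f(x))     # both close to cos(1), about 0.5403
```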
Okay: time to get back on track. Today, we'll see a theorem about integrals that's similar to the Differential Mean Value Theorem. Specifically, it states that if we have a continuous function $f:\left[a,b\right]\rightarrow\mathbb{R}$ then there is some $c\in\left[a,b\right]$ so that

$\displaystyle f(c)=\frac{1}{b-a}\int\limits_a^bf(x)dx$

Let's consider the Darboux sums we use to define the integral. We know that if we choose a partition, then its upper Darboux sum is greater than any Riemann sum of any refinement of that partition. So let's take the absolute coarsest possible partition: the one where we just have partition points $a$ and $b$. Then the upper Darboux sum is $(b-a)M$, where $M$ is the maximum value of $f$ on the interval $\left[a,b\right]$. Similarly, the lower Darboux sum on this interval is $(b-a)m$ (where $m$ is the minimum value of $f$), and it's the lowest possible Darboux sum. Then we can divide everything in sight by $b-a$ to get the inequality

$\displaystyle m\leq\frac{1}{b-a}\int\limits_a^bf(x)dx\leq M$

Now the Intermediate Value Theorem tells us that $f$ must take every value between $m$ and $M$ at some point between $a$ and $b$. And thus there must exist a $c\in\left[a,b\right]$ so that

$\displaystyle f(c)=\frac{1}{b-a}\int\limits_a^bf(x)dx$

just as we wanted.
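For instance (my example): $f(x)=x^2$ on $\left[0,2\right]$ has average value $\frac{1}{2}\int_0^2x^2dx=\frac{4}{3}$, and the theorem's point is $c=\sqrt{4/3}\approx1.155$. A numerical sketch that finds $c$ by bisection, which works precisely because $f$ is continuous and the average lies between $m$ and $M$:

```python
def f(x):
    return x**2

a, b = 0.0, 2.0
average = (b**3 / 3 - a**3 / 3) / (b - a)   # (1/(b-a)) * int_a^b x^2 dx = 4/3

# f is increasing here with f(a) <= average <= f(b),
# so bisect for the point c with f(c) = average.
lo, hi = a, b
for _ in range(60):
    c = (lo + hi) / 2
    if f(c) < average:
        lo = c
    else:
        hi = c

print(c, average)   # c is about 1.1547, and f(c) is about 4/3
```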
A guest post by Tom Leinster over at The n-Category CafΓ© reminded me of an interesting fact I havenβt mentioned yet: a metric space is actually an example of an enriched category!
First weβll need to pick out our base category $\mathcal{V}$, in which weβll find our hom-objects. Consider the set of nonnegative real numbers with their real-number order, and add in a point called
$\infty$ thatβs above all the other points. This is a totally ordered set, and orders are categories. Letβs take the opposite of this category. That is, the objects of our category $V$ are the points
in the βintervalβ $\left[0,\infty\right]$, and we have an arrow $x\rightarrow y$ exactly when $x\geq y$.
This turns out to be a monoidal category, and the monoidal structure is just addition. Clearly this gives a monoid on the set of objects, but we need to check it on morphisms to see itβs functorial.
But if $x_1\geq y_1$ and $x_2\geq y_2$ then $x_1+x_2\geq y_1+y_2$, and so we can see addition as a functor.
So we've got a monoidal category, and we can now use it to form enriched categories. Let's keep our lives simple by considering a small $\mathcal{V}$-category $\mathcal{C}$. Here's how the definition unpacks.
We have a set of objects $\mathrm{Ob}(\mathcal{C})$ that we'll call "points" in a set $X$. Between any two points $p_1$ and $p_2$ we need a hom-object $\hom_\mathcal{C}(p_1,p_2)\in\mathrm{Ob}(\mathcal{V})$. That is, we have a function $d:X\times X\rightarrow\left[0,\infty\right]$.
For a triple $(p_1,p_2,p_3)$ of objects we need an arrow $\hom_\mathcal{C}(p_2,p_3)\otimes\hom_\mathcal{C}(p_1,p_2)\rightarrow\hom_\mathcal{C}(p_1,p_3)$. In more quotidian terms, this means that $d(p_2,p_3)+d(p_1,p_2)\geq d(p_1,p_3)$.
Also, for each point $p$ there is an arrow from the identity object of $\mathcal{V}$ to the hom-object $\hom_\mathcal{C}(p,p)$. That is, $0\geq d(p,p)$, so $d(p,p)=0$.
These conditions are the first, fourth, and half of the second conditions in the definition of a metric space! In fact, there's a weaker notion of a "pseudometric" space, wherein the second condition is simply that $d(p,p)=0$, and so we're almost exactly giving the definition of a pseudometric space.
The only thing we're missing is the requirement that $d(p_1,p_2)=d(p_2,p_1)$. The case can be made (and has been, by Lawvere) that this requirement is actually extraneous, and that it's in some sense more natural to work with "asymmetric" (pseudo)metric spaces that are exactly those given by this enriched categorical framework.
Okay, we know what it means for a function to be integrable (in either of the equivalent Riemann or Darboux senses), but we don't yet know any functions to actually be integrable. I won't give the whole story now, but just a large enough part to work with for the moment.
The major theorem here is that a continuous function $f$ on a closed interval $\left[a,b\right]$ is integrable. Notice from the Heine-Cantor theorem that $f$ is automatically uniformly continuous. That is, for any $\epsilon$ there is some $\delta$ so that for all $x$ and $y$ in $\left[a,b\right]$ with $|y-x|<\delta$ we have $|f(y)-f(x)|<\epsilon$. Again, the important thing here is that we can choose our $\delta$ independently of the point $x$, while continuity says $\delta$ might depend on $x$.
So now we need to take our function and show that the upper and lower Darboux sums converge to the same value. Equivalently, we can show that their difference converges to zero. So given an $\epsilon$ we want to show that there is some partition $x$ so that the difference
$\displaystyle U_x(f)-L_x(f)=\sum\limits_{i=1}^n(M_i-m_i)(x_i-x_{i-1})$
is less than $\epsilon$, and the same is true of any refinement of this partition.
We'll choose our partition with every slice having constant width $\delta$, so there are $\frac{b-a}{\delta}$ of them. By the uniform continuity of $f$ we can find a $\delta$ so that for any points $x$ and $y$ with $|y-x|<\delta$ we'll have $|f(y)-f(x)|<\frac{\epsilon}{b-a}$. Then in particular the difference $M_i-m_i$ will be less than $\frac{\epsilon}{b-a}$, while $x_i-x_{i-1}=\delta$ and $n=\frac{b-a}{\delta}$. Thus the difference in the Darboux sums will be less than $\epsilon$, as we wanted.
What about refinements? Well, any refinement of a partition can only lower the upper Darboux sum and raise the lower one. This is because adding a point to a partition can't raise the maximum in either of the new subintervals or lower the minimum, and in fact adding a point will usually lower the maximum and raise the minimum. So our partition has a small enough difference in the Darboux sums, and any refinement will make the difference even smaller, and thus we have the convergence we need.
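To see this argument concretely, here is a short C++ sketch (the function $f(x)=x^2$ and the interval $\left[0,1\right]$ are chosen arbitrarily) that computes the gap between the upper and lower Darboux sums for uniform partitions. For an increasing function the gap telescopes to $(f(b)-f(a))\delta$, which shrinks as the partition refines:

```cpp
#include <cassert>
#include <cmath>

// f is increasing on [0,1], so on each slice M_i = f(right endpoint)
// and m_i = f(left endpoint).
double f(double x) { return x * x; }

// U_x(f) - L_x(f) for the uniform partition of [a,b] into n slices.
double darboux_gap(double a, double b, int n) {
    double dx = (b - a) / n;
    double upper = 0.0, lower = 0.0;
    for (int i = 0; i < n; ++i) {
        double left = a + i * dx;
        upper += f(left + dx) * dx;  // M_i * (x_i - x_{i-1})
        lower += f(left) * dx;       // m_i * (x_i - x_{i-1})
    }
    return upper - lower;
}
```

For $f(x)=x^2$ on $\left[0,1\right]$ the gap is exactly $1/n$ (up to rounding), so doubling the number of slices halves the gap.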
Now, can we do better than continuous functions? Well, we can relax continuity at the endpoints a bit. If the function jumps to a different value at $a$ or $b$ than the limit seems to indicate, we can still get uniform continuity everywhere but that one point, and we're still good. We still have problems with asymptotes where the function shoots off to infinity, like $f(x)=\frac{1}{x}$ does at the left endpoint of $\left[0,1\right]$.
What else? Well, we can allow a finite number of discontinuities, as long as none of them are asymptotes. If a discontinuity happens at $c\in\left[a,b\right]$, we can choose $c$ to be a partition point, and so on. Then a partition with these selected points is just the same as a partition on each of the continuous sections of the function in between the discontinuities, and we know that they're all good.
Incidentally, we can use this same method of picking some of the partition points ahead of time to show another nice property of the integrals: we can break the integral in the middle of the interval and evaluate the two pieces separately, then add them together. That is, if $c\in\left[a,b\right]$ then we have the equation
$\displaystyle\int\limits_a^bf(x)dx=\int\limits_a^cf(x)dx+\int\limits_c^bf(x)dx$
as long as both sides are integrable.
Basic question involving data types
10-30-2006 #1
I skipped around a lot in learning C++ and in doing so, I didn't comprehend things as well as I should have. I decided to start over, well almost over. I was looking at cplusplus.com and in like the third section it talks about data types. I am having trouble understanding what something means. In the chart on the page it shows (the | just represent boxes):
| double | Double precision floating point number. | 8 bytes | 1.7e +/- 308 (15 digits) |
What exactly does "1.7e +/- 308 (15 digits)" mean? More specifically, what does 1.7e mean? It also says 15 digits, does that just mean it can go 15 digits from 0 in either direction? Float and long double are also like this.
I get the rest of the chart, but that part confuses me.
EDIT: I read on a little farther while waiting for an answer. I came across this:
They express numbers with decimals and/or exponents. They can include either a decimal point, an e
character (that expresses "by ten at the Xth height", where X is an integer value that follows the
e character), or both a decimal point and an e character:
After reading it, I think I know what e means now. Double (1.7e +/- 308) would basically be 1.7 * 10^(+/-308). So it's like... oh what's that called lol. I'm thinking exponents/something else... hmmm... scientific notation. I honestly don't remember much about scientific notation, but that's what it's making me think of. So, 1.7e +/- 308 is like the short way of writing out 1.7 * 10^(+/-308)? Stop me if I'm wrong lol.
Last edited by FingerPrint; 10-30-2006 at 09:18 PM.
/* ------------------------------------------------------------------*/
// INSERT CODE HERE
/* ------------------------------------------------------------------*/
1.7e +/- 308 has to do with the range a double can hold: from about 1.7 * 10^308 down to about 1.7 * 10^-308, and of course the negatives of those numbers as well. the "(15 digits)" part is the precision: a double reliably holds about 15 significant decimal digits, wherever the decimal point sits. a double is stored as sign-bit|exponent|mantissa (in that order, next to each other); the mantissa holds the significant digits and the exponent scales them by a power of two. because a normalized mantissa always has an implied leading 1 bit, zero can't be written that way and gets a special all-zeros bit pattern instead, so there is a gap between zero and the smallest positive number a double can represent. the sign bit is 1 if the number is negative, and the exponent is stored with a bias so that numbers smaller than 1 get stored exponents below the bias.
if this does not answer your question try looking up discrete mathematics for computing.
EDIT: well looky here at what i found :P
it might be a bit advanced. i didn't go looking for it, in fact i was looking for something else.
Last edited by Xeridanus; 10-30-2006 at 09:56 PM.
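For reference, the chart's entries can be queried directly in C++ from std::numeric_limits; a small sketch (the exact printed values assume an IEEE-754 double, which is what virtually all current platforms use):

```cpp
#include <cassert>
#include <limits>

// The chart's "1.7e +/- 308 (15 digits)" entry corresponds to these
// queryable properties of double on an IEEE-754 system:
const double double_max    = std::numeric_limits<double>::max();    // ~1.7976931348623157e+308
const double double_min    = std::numeric_limits<double>::min();    // smallest *normalized* positive value, ~2.2250738585072014e-308
const int    double_digits = std::numeric_limits<double>::digits10; // 15 guaranteed significant decimal digits
```

So "15 digits" is about precision (significant decimal digits), while 1.7e +/- 308 is about magnitude.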
Too many Pratyekabuddhas, not enough Bodhisattvas
(Numbers below are averages)
Dhamma Wheel
David Snyder: 4.53 posts per day
retrofuturist: 10.06 posts per day
Mikenz: 4.10 posts per day
tiltbillings: 9.28 posts per day
Ben: 8.13 posts per day
Dharma Wheel
Astus: 2.17 posts per day
Ngawang Drolma: 2.88 posts per day
David Snyder: 0.61 posts per day
OgyenChodzom: 0.20 posts per day
mr. gordo: 1.56 posts per day
The Theravada forum seems better -- not just in quantity, but also in quality.
Shouldn't it be the other way around?
Or maybe I'm wrong and the people running this forum are too busy IRL out doing good deeds and meditating.
Last edited by Individual on Sun Oct 17, 2010 3:29 am, edited 1 time in total.
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
People have 'real' lives offline to manage and live....fortunately ...
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
If you're the same "Individual" who is on Dhammawheel, then I would like to say that I've appreciated your helpful replies on that forum, and I'm sorry that you couldn't find something equally
satisfying here.
It would be great if tons of ordained Buddhists and Buddhist scholars posted here, but unfortunately, that's not the case at the moment because this site is still developing.
One person you forgot to mention in your OP is Ven. Huifeng. He is a very knowledgeable Zen Buddhist monk, and we are very fortunate to have him here. One of his short posts can often clear up pages
of confusion.
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
Forgive me if my speculations are way off base... I wonder if it is a difference in cultures. The Mahayana seems to emphasize the student-teacher relationship more, whereas the Theravada seem to
emphasize a peer-support model more (kalyanamitta).
I am also thinking that my relationships with Theravadan monastics have tended to be very casual, and my relationships with Mahayana monastics (Zen, Vajrayana) have tended to be very formal.
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
Individual wrote:
The Theravada forum seems better -- not just in quantity, but also in quality.
Shouldn't it be the other way around?
(1) First and foremost, why should a Mahayana forum be expected to have a better quality or quantity of posts than a Theravada forum?
(2) What makes you think that the quality or quantity of posts has to do with these two traditions and not with the knowledge, desire and time to participate of certain individuals compared to other
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
Pema Rigdzin wrote:
Individual wrote:
The Theravada forum seems better -- not just in quantity, but also in quality.
Shouldn't it be the other way around?
(1) First and foremost, why should a Mahayana forum be expected to have a better quality or quantity of posts than a Theravada forum?
(2) What makes you think that the quality or quantity of posts has to do with these two traditions and not with the knowledge, desire and time to participate of certain individuals compared to
other individuals?
Meh. That's a boring debate. And you are free to do whatever you like.
I have nothing more to say that would be relevant.
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
Individual wrote:
Pema Rigdzin wrote:
Individual wrote:
The Theravada forum seems better -- not just in quantity, but also in quality.
Shouldn't it be the other way around?
(1) First and foremost, why should a Mahayana forum be expected to have a better quality or quantity of posts than a Theravada forum?
(2) What makes you think that the quality or quantity of posts has to do with these two traditions and not with the knowledge, desire and time to participate of certain individuals compared
to other individuals?
Meh. That's a boring debate. And you are free to do whatever you like.
I have nothing more to say that would be relevant.
I don't understand what's so boring. You're clearly implying that Mahayana practitioners should be of a better quality than Theravadin practitioners, and should therefore be more diligent in posting
and having higher quality posts. Is that not a highly offensive and controversial supposition? (Not to mention baseless and disrespectful). Plus, I'm not sure how spending more time on the internet,
Buddhist forum or not, is the sign of a more diligent practitioner of any tradition.
As an aside, since you're expecting a better quality of posts out of Mahayanists, you might start with yourself: according to the Buddha's teachings, pratyekabuddhas only occur in places and times in
which not even a trace or memory of a Buddha's teachings remains, so your title for this thread is incorrect. The correct term would be shravakas, not pratyekabuddhas.
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
Pema Rigdzin wrote:Plus, I'm not sure how spending more time on the internet, Buddhist forum or not, is the sign of a more diligent practitioner of any tradition.
<three deep bows>
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
Individual wrote:The Theravada forum seems better -- not just in quantity, but also in quality.
Shouldn't it be the other way around?
I would not say that the Theravada forum seems "better", it is different.
In what way is it different?
It seems to be more focused and the reason for that may be that there is (only) one common scriptural basis to refer to when discussing subjects. This sole scriptural basis makes it easier to focus,
helps to reduce distraction and also entails that there are more experts as to this basis.
I feel that it is much more difficult to find a common basis in the Mahayana forum since Mahayana is so diverse.
Actually the total amount of users/practitioners required for a Mahayana forum would be "Mahayana schools x (amount of users in a Theravada forum)"
Kind regards
Re: Too many Pratyekabuddhas, not enough Bodhisattvas
(nods) Mahayana is diverse. And Vajrayana for the most part is secret.
You can make Mahayana stronger, you just have to make it more interesting there, talk about Bodhisattva stuff like compassion and wisdom, how to hold the vows with the support of our immeasurable
aspirations, talk about all those warm fuzzy things overcoming harshness.
In fact, just make a thread on the noble aspirations of Samantabhadra. I can do it if you'd like | {"url":"http://www.dharmawheel.net/viewtopic.php?p=16820","timestamp":"2014-04-21T05:59:41Z","content_type":null,"content_length":"37227","record_id":"<urn:uuid:602ee598-9d1d-4e8e-bb95-371718680f29>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
Great Neck Algebra 1 Tutor
...In 2009, I received a Bachelor of Science degree (with honors) in Computer Science from Loyola College in Maryland. I graduated with an overall GPA of 3.65 and major GPA of 3.76. I also was a
member of Upsilon Pi Epsilon, the Computer Science Honor Society.
53 Subjects: including algebra 1, reading, GRE, SAT math
...The review must continue in bits and pieces as the test gets closer. The ultimate goal is having the material as fresh in the student's mind as possible at test time. Also it is important to remind the student that this approach will eliminate anxiety, because the student is not walking into the test "cold", but truly ready.
41 Subjects: including algebra 1, English, reading, chemistry
...Are you really interested in having a tutor who not only focuses on content, but also on study habits? Are you looking for someone whose lifestyle can be a good motivation for your children? Then you are in the right place.
37 Subjects: including algebra 1, chemistry, English, physics
...Learning should always be fun so let's get started! We have mountains to move! As a Biological Sciences major at Cornell, I took both introductory and higher level coursework in Genetics. I also took a lab course concentrating on fly and bacteria genetics.
25 Subjects: including algebra 1, chemistry, physics, geometry
I hold a bachelor's degree in microbiology, a master's degree in biochemistry and molecular biology, and have five years of laboratory experience in medical centers like UT Southwestern and Mount Sinai Medical Center. These experiences have helped me to understand the subject matters in depth. As a ...
16 Subjects: including algebra 1, chemistry, geometry, algebra 2 | {"url":"http://www.purplemath.com/Great_Neck_algebra_1_tutors.php","timestamp":"2014-04-18T21:43:52Z","content_type":null,"content_length":"24086","record_id":"<urn:uuid:510e4292-fa6c-4dcd-8c09-9ddc8d702581>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fast Arc Cos algorithm?
I have my own, very fast cos function:

const float pi = 3.14159265f;

float sine(float x)
{
    const float B = 4 / pi;
    const float C = -4 / (pi * pi);

    float y = B * x + C * x * fabsf(x);

    // const float Q = 0.775;
    const float P = 0.225;
    y = P * (y * fabsf(y) - y) + y; // Q * y + P * y * fabsf(y)
    return y;
}

float cosine(float x)
{
    return sine(x + (pi / 2));
}

But now when I profile, I see that acos() is killing the processor. I don't need intense precision. What is a fast way to calculate acos(x)? Thanks.
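(For reference, a small harness added here, not part of the original question, that copies the parabolic approximation above and measures its worst-case error against the library sine on [-pi, pi]; the sample count is arbitrary:)

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Copy of the question's parabolic approximation (with the 0.225
// correction term), valid only on [-pi, pi].
float fast_sine(float x) {
    const float pi = 3.14159265f;
    const float B = 4.0f / pi;
    const float C = -4.0f / (pi * pi);
    float y = B * x + C * x * std::fabs(x);
    return 0.225f * (y * std::fabs(y) - y) + y;
}

// Worst-case absolute error versus the library sine over [-pi, pi],
// sampled at 2001 evenly spaced points.
float max_sine_error() {
    float worst = 0.0f;
    for (int i = -1000; i <= 1000; ++i) {
        float x = 3.14159265f * static_cast<float>(i) / 1000.0f;
        worst = std::max(worst, std::fabs(fast_sine(x) - std::sin(x)));
    }
    return worst;
}
```

The measured worst-case error comes out around 0.001, consistent with the figure quoted in the comments below.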
3 Your very fast function has a mean error of 16% in [-pi,pi] and is entirely unusable outside that interval. The standard sinf from math.h on my system takes only about 2.5x as much time as your approximation. Considering your function is inlined and the lib call is not, this is really not much difference. My guess is if you added range reduction so it was usable in the same way as the standard function, you would have exactly the same speed. – Damon Jul 5 '12 at 14:50
No, the maximum error is 0.001 (1/10th %). Did you forget to apply the correction? (y = P * bla...) Look at the original source and discussion here: devmaster.net/forums/topic/4648-fast-and-accurate-sinecosine Second, sin and cos pre-bounded by +-pi is a VERY common case, especially in graphics and simulation, both of which often require a fast approximate sin/cos. – jcwenger Oct 18 '12 at 20:15
5 Answers
A simple cubic approximation, the Lagrange polynomial for x ∈ {-1, -½, 0, ½, 1}, is:

double acos(double x) {
    return (-0.69813170079773212 * x * x - 0.87266462599716477) * x + 1.5707963267948966;
}

It has a maximum error of about 0.18 rad.
Maximum error is 10.31 in degrees. Rather big, but in some solutions may be enough. Suitable where computational speed is more important than precision. Maybe a quartic approximation would produce more precision and still be faster than native acos? – Timo Feb 2 '13 at 17:22
Sure there isn't a mistake in this formula? Just tried it with Wolfram Alpha and it doesn't look right: wolframalpha.com/input/?i=y%3D%282%2F9*pi*x*x-5*pi%2F18%29*x%2Bpi%2F2 – miho Apr 26 '13 at 14:29
Got spare memory? A lookup table (with interpolation, if required) is gonna be fastest.
How could I implement this as a C function? – Milo Aug 1 '10 at 3:42
3 @Jex: bounds-check your argument (it must be between -1 and 1). Then multiply by a nice power of 2, say 64, yielding the range (-64, 64). Add 64 to make it non-negative (0, 128). Use the integer part to index a lookup table, if desired use the fractional part for interpolation between the two closest entries. If you don't want interpolation, try adding 64.5 and take the floor, this is the same as round-to-nearest. – Ben Voigt Aug 1 '10 at 5:42
1 Lookup tables require an index, which is going to require a float to int conversion, which will probably kill performance. – phkahler Aug 2 '10 at 13:15
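(A sketch of the table-plus-interpolation recipe described in the comments above; the table size of 128 and the use of the library acos to fill the table are choices made here, not prescribed by the answer:)

```cpp
#include <cassert>
#include <cmath>

// 128 slices over [-1, 1]; lut[i] holds acos at the i-th grid point.
const int LUT_N = 128;
double lut[LUT_N + 1];
bool lut_filled = false;

double acos_lut(double x) {
    assert(x >= -1.0 && x <= 1.0);
    if (!lut_filled) {
        for (int i = 0; i <= LUT_N; ++i)
            lut[i] = std::acos(-1.0 + 2.0 * i / LUT_N);
        lut_filled = true;
    }
    double t = (x + 1.0) * (LUT_N / 2.0);  // map [-1,1] onto [0,LUT_N]
    int i = static_cast<int>(t);
    if (i >= LUT_N) return lut[LUT_N];
    double frac = t - i;                   // linear interpolation weight
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac;
}
```

Note that linear interpolation is least accurate near x = ±1, where acos has a steep (infinite) derivative, so a real implementation might use a denser table or a different parametrization near the endpoints.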
I have my own. It's pretty accurate and sort of fast. It works off of a theorem I built around quartic convergence. It's really interesting, and you can see the equation and how fast it can make my natural log approximation converge here: https://www.desmos.com/calculator/yb04qt8jx4

Here's my arccos code:

function acos(x)
    local a=1.43+0.59*x a=(a+(2+2*x)/a)/2
    local b=1.65-1.41*x b=(b+(2-2*x)/b)/2
    local c=0.88-0.77*x c=(c+(2-a)/c)/2
    return (8*(c+(2-a)/c)-(b+(2-2*x)/b))/6
end

A lot of that is just square root approximation. It works really well, too, unless you get too close to taking a square root of 0. It has an average error (excluding x=0.99 to 1) of 0.0003. The problem, though, is that at 0.99 it starts going to shit, and at x=1, the difference in accuracy becomes 0.05. Of course, this could be solved by doing more iterations on the square roots (lol nope) or, just a little thing like, if x>0.99 then use a different set of square root linearizations, but that makes the code all long and ugly.

If you don't care about accuracy so much, you could just do one iteration per square root, which should still keep you somewhere in the range of 0.0162 or something as far as accuracy goes:

function acos(x)
    local a=1.43+0.59*x a=(a+(2+2*x)/a)/2
    local b=1.65-1.41*x b=(b+(2-2*x)/b)/2
    local c=0.88-0.77*x c=(c+(2-a)/c)/2
    return 8/3*c-b/3
end

If you're okay with it, you can use pre-existing square root code. It will get rid of the equation going a bit crazy at x=1:

function acos(x)
    local a = math.sqrt(2+2*x)
    local b = math.sqrt(2-2*x)
    local c = math.sqrt(2-a)
    return 8/3*c-b/3
end

Frankly, though, if you're really pressed for time, remember that you could linearize arccos as 1.5708-1.5708*x and just do:

function acos(x)
    return 1.5708-1.5708*x
end

Anyway, if you want to see a list of my arccos approximation equations, you can go to https://www.desmos.com/calculator/tcaty2sv8l I know that my approximations aren't the best for certain things, but if you're doing something where my approximations would be useful, please use them, but try to give me credit.
I see approximate routines for sine and cosine that could, as @dan04 says, be done better.

I don't see acos.

Besides, what do you mean "killing the processor"? No matter how fast you make it, it is going to take 100% of processor time if you do nothing else.
Another approach you could take is to use complex numbers. From de Moivre's formula,

i^x = cos(π/2*x) + i*sin(π/2*x)

Let θ = π/2*x. Then x = 2θ/π, so

• sin(θ) = Im(i^(2θ/π))
• cos(θ) = Re(i^(2θ/π))

How can you calculate powers of i without sin and cos? Start with a precomputed table for powers of 2:

• i^4 = 1
• i^2 = -1
• i^1 = i
• i^(1/2) = 0.7071067811865476 + 0.7071067811865475*i
• i^(1/4) = 0.9238795325112867 + 0.3826834323650898*i
• i^(1/8) = 0.9807852804032304 + 0.19509032201612825*i
• i^(1/16) = 0.9951847266721969 + 0.0980171403295606*i
• i^(1/32) = 0.9987954562051724 + 0.049067674327418015*i
• i^(1/64) = 0.9996988186962042 + 0.024541228522912288*i
• i^(1/128) = 0.9999247018391445 + 0.012271538285719925*i
• i^(1/256) = 0.9999811752826011 + 0.006135884649154475*i

To calculate arbitrary values of i^x, approximate the exponent as a binary fraction, and then multiply together the corresponding values from the table.

For example, to find sin and cos of 72° = 0.8*π/2:

i^0.8 ≈ i^(205/256) = i^0b11001101 = i^(1/2) * i^(1/4) * i^(1/32) * i^(1/64) * i^(1/256)
= 0.3078496400415349 + 0.9514350209690084*i

• sin(72°) ≈ 0.9514350209690084 ("exact" value is 0.9510565162951535)
• cos(72°) ≈ 0.3078496400415349 ("exact" value is 0.30901699437494745).

To find asin and acos, you can use this table with the bisection method.

For example, to find asin(0.6) (the smallest angle in a 3-4-5 triangle):

• i^0 = 1 + 0*i. The sin is too small, so increase x by 1/2.
• i^(1/2) = 0.7071067811865476 + 0.7071067811865475*i. The sin is too big, so decrease x by 1/4.
• i^(1/4) = 0.9238795325112867 + 0.3826834323650898*i. The sin is too small, so increase x by 1/8.
• i^(3/8) = 0.8314696123025452 + 0.5555702330196022*i. The sin is still too small, so increase x by 1/16.
• i^(7/16) = 0.773010453362737 + 0.6343932841636455*i. The sin is too big, so decrease x by 1/32.
• i^(13/32) = 0.8032075314806449 + 0.5956993044924334*i.

Each time you increase x, multiply by the corresponding power of i. Each time you decrease x, divide by the corresponding power of i.

If we stop here, we obtain asin(0.6) ≈ 13/32*π/2 = 0.6381360077604268. (The "exact" value is 0.6435011087932844.)

The accuracy, of course, depends on the number of iterations. For a quick-and-dirty approximation, use 10 iterations. For "intense precision", use 50-60 iterations.
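(The exponent-bisection procedure described in this answer can be sketched directly with std::complex; the repeated principal square root generates the i^(1/2^k) values on the fly, and the iteration count of 30 is a choice made here, not part of the answer:)

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// asin via bisection on the exponent of i: track z = i^x and nudge x
// by ever-smaller binary fractions until Im(z) matches the target s.
double asin_bisect(double s) {
    std::complex<double> z(1.0, 0.0);     // current i^x, starting at x = 0
    std::complex<double> root(0.0, 1.0);  // will hold i^(1/2^k)
    double x = 0.0, step = 1.0;
    for (int k = 0; k < 30; ++k) {
        root = std::sqrt(root);           // principal root: i^(1/2^(k+1))
        step *= 0.5;
        if (z.imag() < s) { z *= root; x += step; }  // sin too small: increase x
        else              { z /= root; x -= step; }  // sin too big: decrease x
    }
    return x * (3.14159265358979323846 / 2.0);       // theta = x * pi/2
}
```

With 30 iterations the final step size bounds the error at roughly 2^-30 * π/2, far tighter than the 5-step walkthrough above; acos then follows as π/2 minus this result.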
Pre Algebra Worksheets for Writing Expressions
Write the equation or expression algebraically.
Print PDF worksheet above, the answers are on the second page.
An algebraic expression is a mathematical expression that contains variables, numbers and operations. The variable represents the unknown number in an expression or an equation. Answers may vary slightly. Being able to write expressions or equations algebraically is a pre-algebra skill that is required prior to taking algebra.
The following prior knowledge is required before doing these worksheets:
An understanding that a variable is a letter such as x, y or n and it will represent the unknown number.
That an expression is a statement in math that will not contain an equals sign, but it can contain numbers, variables and operation signs such as +, −, ×, etc. For example, 3y is an expression.
There should be some familiarity with integers which are whole numbers or whole numbers with a negative sign.
An understanding of terms which are numbers and or numbers and variables separated by the operation sign. For instance, xy is one term and x - y is two terms.
It is also important to understand the terms quotient, product, sum, increased and decreased as they relate to operations. For instance, when the word sum is used, you will need to know that the operation involves adding, or the use of the + sign. When the word quotient is used, it refers to the division sign, and when the word product is used, it refers to multiplication, which is indicated by a dot (·) or by putting the number beside the variable, as in 4n, which means 4 × n.
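For instance, translations like the following follow the vocabulary above (the phrasings and the choice of the variable $n$ are illustrative, not taken from the worksheet):

```latex
\text{the sum of a number and 7} \;\to\; n + 7 \\
\text{a number decreased by 3} \;\to\; n - 3 \\
\text{the product of 4 and a number} \;\to\; 4n \\
\text{the quotient of a number and 2} \;\to\; \tfrac{n}{2}
```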
Slater Determinants
An electronic wavefunction for an $N$-electron system must depend on the space and spin coordinates of all $N$ electrons; we collect each electron's coordinates into $\mathbf{x}_i$.

What is an appropriate form for an $N$-electron wavefunction? The simplest idea is a product of one-electron functions,

$\Psi^{HP}(\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_N)=\chi_1(\mathbf{x}_1)\chi_2(\mathbf{x}_2)\cdots\chi_N(\mathbf{x}_N).$

This is referred to as a Hartree Product. Since the orbitals $\chi_i(\mathbf{x})$ depend on both spatial and spin coordinates, they are called spin orbitals. These spin orbitals are simply a spatial orbital times a spin function, i.e., $\chi(\mathbf{x})=\phi(\mathbf{r})\alpha(\omega)$ or $\chi(\mathbf{x})=\phi(\mathbf{r})\beta(\omega)$.

Unfortunately, the Hartree product is not a suitable wavefunction because it ignores the antisymmetry principle (quantum mechanics postulate #6). Since electrons are fermions, the electronic wavefunction must be antisymmetric with respect to the interchange of coordinates of any pair of electrons. This is not the case for the Hartree Product.

If we simplify for a moment to the case of two electrons, we can see how to make the wavefunction antisymmetric:

$\Psi(\mathbf{x}_1,\mathbf{x}_2)=\frac{1}{\sqrt{2}}\left[\chi_1(\mathbf{x}_1)\chi_2(\mathbf{x}_2)-\chi_1(\mathbf{x}_2)\chi_2(\mathbf{x}_1)\right].$

The factor $1/\sqrt{2}$ keeps the wavefunction normalized.

Note a nice feature of this; if we try to put two electrons in the same orbital at the same time (i.e., set $\chi_1=\chi_2$), the wavefunction vanishes. This is the Pauli exclusion principle, which is a consequence of the antisymmetry principle!

This strategy can be generalized to $N$ electrons by writing the wavefunction as an $N\times N$ determinant, normalized by $1/\sqrt{N!}$, whose rows run over electrons and whose columns run over spin orbitals:

$\Psi=\frac{1}{\sqrt{N!}}\begin{vmatrix}\chi_1(\mathbf{x}_1)&\chi_2(\mathbf{x}_1)&\cdots&\chi_N(\mathbf{x}_1)\\\chi_1(\mathbf{x}_2)&\chi_2(\mathbf{x}_2)&\cdots&\chi_N(\mathbf{x}_2)\\\vdots&\vdots&\ddots&\vdots\\\chi_1(\mathbf{x}_N)&\chi_2(\mathbf{x}_N)&\cdots&\chi_N(\mathbf{x}_N)\end{vmatrix}.$

A determinant of spin orbitals is called a Slater determinant after John Slater. By expanding the determinant, we obtain $N!$ Hartree Products, one for each way of distributing the electrons among the spin orbitals, each carrying the sign of the corresponding permutation.

Since we can always construct a determinant (within a sign) if we just know the list of occupied spin orbitals, a Slater determinant can be abbreviated by simply listing those orbitals, with the normalization and antisymmetrization implied!
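As a toy numerical check of the two-electron antisymmetry just described, one can pick two arbitrary one-dimensional functions as stand-in "spin orbitals" (the Gaussians below are made up for illustration; they are not actual orbitals) and verify that the determinant form changes sign under exchange. A C++ sketch:

```cpp
#include <cassert>
#include <cmath>

// Toy "spin orbitals": two arbitrary made-up functions of one real
// coordinate (for illustration only).
double chi1(double x) { return std::exp(-x * x); }
double chi2(double x) { return x * std::exp(-x * x / 2.0); }

// Two-electron Slater determinant:
//   Psi(x1, x2) = [chi1(x1) chi2(x2) - chi1(x2) chi2(x1)] / sqrt(2)
double slater2(double x1, double x2) {
    return (chi1(x1) * chi2(x2) - chi1(x2) * chi2(x1)) / std::sqrt(2.0);
}
```

Exchanging the two coordinates flips the sign exactly, and putting both electrons at the same coordinate (or in the same orbital) gives zero: the Pauli principle in miniature.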
How do we get the orbitals which make up the Slater determinant? This is the role of Hartree-Fock theory, which shows how to use the Variational Theorem to find those orbitals which minimize the total electronic energy. Typically, the spatial orbitals are expanded as a linear combination of contracted Gaussian-type functions centered on the various atoms (the linear combination of atomic orbitals molecular orbital, or LCAO, method). This allows one to transform the integro-differential equations of Hartree-Fock theory into linear algebra equations by the so-called Hartree-Fock-Roothaan approach.
How could the wavefunction be made more flexible? There are two ways: (1) use a larger atomic orbital basis set, so that even better molecular orbitals can be obtained; (2) write the wavefunction as
a linear combination of different Slater determinants with different orbitals. The latter approach is used in the post-Hartree-Fock electron correlation methods such as configuration interaction,
many-body perturbation theory, and the coupled-cluster method.
David Sherrill 2003-08-07
Mplus Discussion >> Input question based on LSAY example
james posted on Tuesday, February 17, 2009 - 11:47 am
Hi, I have a question regarding the input on slide 61 of your 2005 Growth Modeling Short Course Handout: Title is input for LSAY linear growth model without covariates.
I'm confused as to what the following statement specifies under the i BY math7-math10 and s BY math7@0 etc...
[i s];
This input is for growth modeling across 4 times points with single indicators.
Linda K. Muthen posted on Wednesday, February 18, 2009 - 10:10 am
The i BY statement defines the intercept growth factor. The s BY statement defines the slope growth factor. The statement [math7-math10@0]; fixes the intercepts of the outcomes at zero which is part
of the growth model parametrization. The statement [i s]; frees the means of the intercept and slope growth factors.
JM posted on Wednesday, February 18, 2009 - 3:34 pm
Thanks for your previous post - very helpful. I have another question (sorry!). I'm running an LGM regressing the i and s on a covariate. When I regressed one of my covariates onto both the i and s, I got a significant positive effect on the intercept, and a significant, negative estimate (est: -0.127, est/S.E.: -2.149) on the slope. Does this mean that the higher the score on my covariate, the higher the initial level, and then the faster the drop in growth over time? Or is it the slower the drop in growth over time? It is the result on the slope I want to double check. Thank you, thank you!
The growth factors i and s are continuous variables. The regression coefficients in the regression of i and s on a covariate are linear regression coefficients. The interpretation is that for a one
unit change in x, i and s change the amount of the regression coefficient. In one case the change is positive and in the other it is negative.
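In standard linear growth-model notation (a generic parametrization, not quoted from the thread), the model under discussion is:

```latex
y_{ti} = \eta_{0i} + \eta_{1i}\, t + \varepsilon_{ti}, \qquad
\eta_{0i} = \alpha_0 + \gamma_0 x_i + \zeta_{0i}, \qquad
\eta_{1i} = \alpha_1 + \gamma_1 x_i + \zeta_{1i}
```

so an estimate of $\gamma_1 = -0.127$ means the expected slope is lower by 0.127 for each one-unit increase in the covariate $x$.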
Back to top | {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=14&page=3996","timestamp":"2014-04-16T08:24:31Z","content_type":null,"content_length":"20742","record_id":"<urn:uuid:ac2e9acd-b8fd-42e6-8316-4f856210730d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using Direct Measurement Video to Teach Physics
written by Peter Bohacek
published by the Science Education Resource Center
This teaching method description outlines the use of videos for active learning in introductory physics classes. Direct Measurement Videos show events that students can analyze using physics
concepts. Grids, rulers, frame-counters and other overlays allow students to make measurements from the video. Students use these measurements to answer questions and solve problems. These questions
can be used with inquiry-based learning or modeling instruction.
This material includes best practices for using these videos, a library of videos, and example class activities.
This material is part of Pedagogy in Action, a library of resources for educators provided by SERC, the Science Education Resource Center.
Please note that this resource requires Quicktime.
Subjects:
- Classical Mechanics: General; Motion in One Dimension (Acceleration, Position & Displacement, Velocity)
- Education Foundations: Cognition (Cognition Development)
- Education Practices: Active Learning (Inquiry Learning, Modeling); Technology
- General Physics: Collections (Introductory Laboratories, Introductory Mechanics); Measurement/Units
Levels:
- High School
- Lower Undergraduate
Resource Types:
- Collection
- Instructional Material (Activity, Instructor Guide/Manual, Lesson/Lesson Plan, Problem/Problem Set)
- Audio/Visual (Movie/Animation, Multimedia)
Intended Users:
- Educators
- Professional/Practitioners
Formats:
- text/html
- application/ms-word
- application/pdf
- video/quicktime
Ratings:
Access Rights:
Free access
Free for individual teachers. Please contact author for institutional use
© 2013 Peter Bohacek/ISD197
direct measurement, inquiry, kinematics videos, modeling, video, video analysis
Record Creator:
Metadata instance created February 11, 2013 by Peter Bohacek
Record Updated:
January 28, 2014 by Caroline Hall
Last Update when Cataloged:
February 9, 2013
Other Collections:
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4F. Motion
• 9-12: 4F/H1. The change in motion (direction or speed) of an object is proportional to the applied force and inversely proportional to the mass.
• 9-12: 4F/H2. All motion is relative to whatever frame of reference is chosen, for there is no motionless frame from which to judge all motion.
• 9-12: 4F/H4. Whenever one thing exerts a force on another, an equal amount of force is exerted back on it.
• 9-12: 4F/H7. In most familiar situations, frictional forces complicate the description of motion, although the basic principles still apply.
• 9-12: 4F/H8. Any object maintains a constant speed and direction of motion unless an unbalanced outside force acts on it.
9. The Mathematical World
9B. Symbolic Relationships
• 9-12: 9B/H1b. Sometimes the rate of change of something depends on how much there is of something else (as the rate of change of speed is proportional to the amount of force acting).
11. Common Themes
11B. Models
• 6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast, too complex, or too dangerous to study.
12. Habits of Mind
12B. Computation and Estimation
• 9-12: 12B/H2. Find answers to real-world problems by substituting numerical values in simple algebraic formulas and check the answer by reviewing the steps of the calculation and by judging whether the answer is reasonable.
• 9-12: 12B/H9. Consider the possible effects of measurement errors on calculations.
Next Generation Science Standards
Motion and Stability: Forces and Interactions (HS-PS2)
Students who demonstrate understanding can: (9-12)
• Analyze data to support the claim that Newton's second law of motion describes the mathematical relationship among the net force on a macroscopic object, its mass, and its acceleration.
• Use mathematical representations to support the claim that the total momentum of a system of objects is conserved when there is no net force on the system. (HS-PS2-2)
Disciplinary Core Ideas (K-12)
Forces and Motion (PS2.A)
• Newton's second law accurately predicts changes in the motion of macroscopic objects. (9-12)
• Momentum is defined for a particular frame of reference; it is the mass times the velocity of the object. (9-12)
• If a system interacts with objects outside itself, the total momentum of the system can change; however, any such change is balanced by changes in the momentum of objects outside the system.
Definitions of Energy (PS3.A)
• Energy is a quantitative property of a system that depends on the motion and interactions of matter and radiation within that system. That there is a single quantity called energy is due to the fact that a system's total energy is conserved, even as, within the system, energy is continually transferred from one object to another and between its various possible forms. (9-12)
Conservation of Energy and Energy Transfer (PS3.B)
• Energy cannot be created or destroyed, but it can be transported from one place to another and transferred between systems. (9-12)
• Mathematical expressions, which quantify how the stored energy in a system depends on its configuration (e.g. relative positions of charged particles, compression of a spring) and how kinetic energy depends on mass and speed, allow the concept of conservation of energy to be used to predict and describe system behavior. (9-12)
Crosscutting Concepts (K-12)
Scale, Proportion, and Quantity (3-12)
• The significance of a phenomenon is dependent on the scale, proportion, and quantity at which it occurs. (9-12)
• Algebraic thinking is used to examine scientific data and predict the effect of a change in one variable on another (e.g., linear growth vs. exponential growth). (9-12)
Systems and System Models (K-12)
• When investigating or describing a system, the boundaries and initial conditions of the system need to be defined. (9-12)
Stability and Change (2-12)
• Change and rates of change can be quantified and modeled over very short or very long periods of time. Some system changes are irreversible. (9-12)
Science and Engineering Practices (K-12)
Analyzing and Interpreting Data (K-12)
• Analyzing data in 9–12 builds on K–8 and progresses to introducing more detailed statistical analysis, the comparison of data sets for consistency, and the use of models to generate and analyze data. (9-12)
◦ Analyze data using computational models in order to make valid and reliable scientific claims. (9-12)
Obtaining, Evaluating, and Communicating Information (K-12)
• Obtaining, evaluating, and communicating information in 9–12 builds on K–8 and progresses to evaluating the validity and reliability of the claims, methods, and designs. (9-12)
◦ Communicate technical information or ideas (e.g. about phenomena and/or the process of development and the design and performance of a proposed process or system) in multiple formats (including orally, graphically, textually, and mathematically). (9-12)
Scientific Investigations Use a Variety of Methods (K-12)
• Science investigations use diverse methods and do not always use the same set of procedures to obtain data. (9-12)
Scientific Knowledge is Based on Empirical Evidence (K-12)
• Science includes the process of coordinating patterns of evidence with current theory. (9-12)
Using Mathematics and Computational Thinking (5-12)
• Mathematical and computational thinking at the 9–12 level builds on K–8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including trigonometric functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are created and used based on mathematical models of basic assumptions. (9-12)
◦ Use mathematical or computational representations of phenomena to describe explanations. (9-12)
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.4 Model with mathematics.
MP.6 Attend to precision.
High School – Algebra (9-12)
Seeing Structure in Expressions (9-12)
• A-SSE.1.b Interpret complicated expressions by viewing one or more of their parts as a single entity.
Creating Equations (9-12)
• A-CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations.
Reasoning with Equations and Inequalities (9-12)
• A-REI.3 Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters.
High School – Functions (9-12)
Interpreting Functions (9-12)
• F-IF.6 Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph.
Linear, Quadratic, and Exponential Models (9-12)
• F-LE.1.b Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
• F-LE.1.c Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another.
• F-LE.5 Interpret the parameters in a linear or exponential function in terms of a context.
ComPADRE is beta testing Citation Styles!
<a href="http://www.physicssource.org/items/detail.cfm?ID=12612">Bohacek, Peter. Using Direct Measurement Video to Teach Physics. Northfield: Science Education Resource Center, February 9, 2013.</a>
P. Bohacek, (Science Education Resource Center, Northfield, 2013), WWW Document, (https://serc.carleton.edu/sp/library/direct_measurement_video/index.html).
P. Bohacek, Using Direct Measurement Video to Teach Physics (Science Education Resource Center, Northfield, 2013), <https://serc.carleton.edu/sp/library/direct_measurement_video/index.html>.
Bohacek, P. (2013, February 9). Using Direct Measurement Video to Teach Physics. Retrieved April 16, 2014, from Science Education Resource Center: https://serc.carleton.edu/sp/library/
Bohacek, Peter. Using Direct Measurement Video to Teach Physics. Northfield: Science Education Resource Center, February 9, 2013. https://serc.carleton.edu/sp/library/direct_measurement_video/
index.html (accessed 16 April 2014).
Bohacek, Peter. Using Direct Measurement Video to Teach Physics. Northfield: Science Education Resource Center, 2013. 9 Feb. 2013. 16 Apr. 2014 <https://serc.carleton.edu/sp/library/
@misc{ Author = "Peter Bohacek", Title = {Using Direct Measurement Video to Teach Physics}, Publisher = {Science Education Resource Center}, Volume = {2014}, Number = {16 April 2014}, Month =
{February 9, 2013}, Year = {2013} }
%A Peter Bohacek
%T Using Direct Measurement Video to Teach Physics
%D February 9, 2013
%I Science Education Resource Center
%C Northfield
%U https://serc.carleton.edu/sp/library/direct_measurement_video/index.html
%O text/html
%0 Electronic Source
%A Bohacek, Peter
%D February 9, 2013
%T Using Direct Measurement Video to Teach Physics
%I Science Education Resource Center
%V 2014
%N 16 April 2014
%8 February 9, 2013
%9 text/html
%U https://serc.carleton.edu/sp/library/direct_measurement_video/index.html
ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ.
Using Direct Measurement Video to Teach Physics:
Is Part Of Pedagogy in Action: Library Portal
This is the portal to Pedagogy in Action, providing access to teaching modules, learning activities, and research on learning.
relation by Caroline Hall
See details...
Know of another related resource? Login to relate this resource to it.
Related Materials
Similar Materials
Featured By
I'm new taking my 1st class, and I am having trouble with my homework.. help please!
This is a discussion on I'm new taking my 1st class, and I am having trouble with my homework.. help please! within the C Programming forums, part of the General Programming Boards category;
Originally Posted by quzah You need to assign the value with =, which you aren't doing, and you don't even ...
ok I'm getting closer... I got 1 now I'm on number 2 and I need help again Lol thank you guys for being patient and helpful to a newbie like me!
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main()
int number;
printf(" enter a positive number between 1 and 10:");
scanf("%d", &number);
if( (number < 1) || (number > 10) )
printf("The number is out of range. (1-10)\n");
/* Check to see if number is odd or even #1 */
if ((number & 1) > 0)
printf("%d is odd\n",number);
printf("%d is even\n", number);
/* cube the number #2 */
printf("%d is the cube of that number\n");
return 0;
Weren't you also supposed to quit if the number isn't 1-10? Otherwise you are just carrying on as if nothing happened.
if( (number < 1) || (number > 10) ) {
printf("The number is out of range. (1-10)\n");
return 0;
Last edited by Subsonics; 09-24-2011 at 05:24 PM. Reason: In your case you might also need system() in there.
you are totally correct I didn't realize that thanks... I'm working on square root right now but I will go back and try and fix that.
this is what I have for square root
/* square root the number #3 */
printf("%d is the square root of that number\n",sqrt);
Nope... look how I did it in message #30 ... copy the form of that, using the new math.
Also note that sqrt() is for floating point numbers and you're working with integers.
(You really gotta get into the habit of looking this stuff up!)
And, you can't use function names as variables...
there is something wrong again... and I can't figure out why its doing it....
this is my program so far....
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main()
int number, cube, sqrot ,i ,sumx ,fact ;
printf(" enter a positive number between 1 and 10:");
scanf("%d", &number);
if( (number < 1) || (number > 10) ) {
printf("The number is out of range. (1-10)\n");
return 0;
/* Check to see if number is odd or even #1 */
if ((number & 1) > 0)
printf("%d is odd\n",number);
printf("%d is even\n", number);
/* cube the number #2 */
cube = number*number*number;
printf("%d is the cube of that number\n",cube);
/* square root the number #3 */
sqrot = sqrt(number);
printf("square root of that is %d\n",sqrot);
/* Sum of the digits #4 */
for (i=1;i<=number;i=i+1)
sumx =sumx + i;
printf("the sum of the digits is %d\n", sumx);
/* Factorial of the number #5 */
for (i=1;i<=number;i=i+1)
fact = fact*i
printf("the factorial of that number is %d\n",fact);
return 0;
it works fine until I added the last part to get it to do the factoral
/* Factorial of the number #5 */
for (i=1;i<=number;i=i+1)
fact = fact*i
printf("the factorial of that number is %d\n",fact);
when that is added it jumps back up to the sqrot and says it has an error? Grrr I will get this program right it might take me all night but ya!!!! thanks again for help.
Ok... what EXACTLY is it doing or not doing?
What error messages are you getting?
What warnings/errors are your compiler listing?
All this is good information... "It doesn't work" tells us nothing.
You are missing a semicolon after fact*i.
true it says
In function 'int main()':
line 28: warning: converting int to double
error: expected ';' before "printf"
[Build Error] error 1
I'm guessing its because I didn't use a double of the sqrt which is weird because it worked before and only stopped working when I added the factorial. Can I use int and double for the number
they input? If so how do I do that correctly so it works the way I want it to?
-thanks again Cess
lol I can't believe I missed that simple ;
its sayin the factorial is zero.... so I'm still doing something wrong
I need it to give the factorial of the person's input
/* Factorial of the number #5 */
for (j=number;j<=number;j=j+1);
fact = fact*j;
printf("%d is the factorial\n",fact);
Last edited by Cess; 09-24-2011 at 06:28 PM.
You initialize fact to zero, then you multiply that with i. What happens when you multiply something with zero?
ok now I'm getting 6....I don't understand how I am spost to set it up differently
/* Factorial of the number #5 */
for (j=1;j<=number;j=j+1);
fact = fact*j;
printf("%d is the factorial\n",fact);
and I get 3 when I entered 2 and its spost to be 2....
/* Factorial of the number #5 */
for (j=1;j<=number;j=j+1);
fact = fact*j;
printf("%d is the factorial\n",fact);
It's the semicolon again, your for loop runs empty. What you have is the same as:
for(j = 1; j <= number; j=j+1)
    ; /* empty loop body */
fact = fact*j; /* runs once, after the loop has finished */
I am still not understanding.....
Statistics Worksheets!
Math Worksheets
Welcome to our statistics worksheet section.
Every time you click to create a worksheet a New worksheet is created!
Back to the Math Table Of Contents
Have a suggestion or would like to leave feedback?
Leave your suggestions or comments about edHelper!
How to solve this function
October 4th 2011, 02:08 PM
How to solve this function
a function f is defined by:
f(x) = 0      (x < -1)
       x+1    (-1 < x < 1)
       1-x    (0 < x < 1)
       0      (x > 1)
Sketch on separate diagrams the graphs of f(x),
f(x+0.5), f(x+1), f(x+2), f(x-0.5), f(x-1) and f(x-2)
October 4th 2011, 02:16 PM
Re: How to solve this function
sketch the piece-wise function of f(x)
f(x+c) involves a horizontal shift of f(x) c units to the right or left ... your notes/text should indicate which way.
October 4th 2011, 02:20 PM
Re: How to solve this function
Originally Posted by The Chaz
1. It's a piecewise-defined function, and there's no way I'm drawing this for you! Maybe someone else will... but I would recommend that you learn how to graph piecewise functions. Basically, for
all the different ways that f is defined
(in this case,
0,
x + 1, and
1 - x)
2. Draw the lines
y = 0
y = x + 1 and
y = 1 - x on the same set of coordinate axis. In pencil.
3. Use a colored pencil (or heavier hand) to emphasize the line y = 0 for the part of this line where x < -1 (as defined in your example).
Use a colored pencil (or heavier hand) to emphasize the line y = x + 1 for the part of this line where -1 < x < 0 (as defined in your example).
I changed it to "0", because otherwise your function is not well-defined.
Use a colored pencil (or heavier hand) to emphasize the line y = 1 - x for the part of this line where 0 < x < 1 (as defined in your example).
Use a colored pencil (or heavier hand) to emphasize the line y = 0 for the part of this line where x > 1 (as defined in your example).
Also, you'll have to determine how to define f(-1), f(0), and f(1) by looking for β€ or β₯ in your original problem. Another guess of mine is that you omitted these.
Now, erase everything that you didn't highlight in step 3.
The second part is asking for transformations of this graph. This next sentence will tell you everything you need to know.
The graph of f(x - h) is the graph of f(x), shifted right "h" units.
Since I took the time to work on this and correct some (assumed) typos...
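As a quick worked check of that rule, take h = 1 and apply it to the middle piece of f (using the corrected interval -1 < x < 0):

```latex
f(x-1) = (x-1) + 1 = x
\quad \text{for } -1 < x-1 < 0,\ \text{i.e. } 0 < x < 1,
```

so that piece of the graph slides one unit to the right; the same bookkeeping handles each of the other pieces and each of the other shifts in the problem.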
October 4th 2011, 02:46 PM
Re: How to solve this function
The Chaz, thanks for you help, I have just one question on the top:
If they ask me to sketch f(x+0.5) is that mean that
f(x)=0 f(0+0.5) so: y=0.5 within (x<-1)
f(x)=x+1 f(x+1+0.5) so y=x+1.5 within (-1<x<0)
and so on?
Calculating Real Rates of Interest
The Open University
Actual or 'nominal' rates of interest are sometimes contrasted with 'real' interest rates which take into account inflation. Estimates of 'real' rates of interest have usually been made in the
context of the construction of models which explain interactions in money markets in terms of expectations about future interest rates. This note describes a method of measuring real interest
rates empirically and historically.
The method
The method was developed to help measure the impact of inflation on new town development corporations. The corporations borrowed money for a fixed period of fifty years at fixed rates of
interest. Rates of interest varied from 3% in 1947 at the beginning of the programme up to 18% charged in the mid 1970s. Typically the annual borrowings of an individual corporation increased
every year for a period of five to ten years as construction got under way. Then after a peak there were continued annual borrowings for up to thirty or years at a lower level as the new town
continued to grow.
The period of new town construction in Britain was one of continued inflation. After the first decade of the life of the corporation most of the advances on which the corporation was paying
interest were made at a time when the price level was substantially lower. The 'real' rate of interest, taking into account inflation, is lower than the rate of interest actually paid.
The real rate of interest does not take into account the full impact of inflation on the finances of the development corporations, but is limited to indicating the impact of inflation on the General
Revenue or current account. Inflation also had an impact on capital account as expressed in the Balance Sheet. This note does not attempt any estimation of the influence of inflation on capital
account because of the practical and conceptual problems involved, but does later discuss the influence of inflation on the relationship between the pictures given by the current and capital accounts.
The calculation of the real rate for any particular year is based upon all advances made earlier than that year, the rate of interest which was fixed at the time each advance was made, and the
inflation which has taken place since the advance was made:
Let L_n = advances made in year n, and R_n = the rate of interest charged for advances made in year n (which is fixed for the life of the loan). Then I_t, the 'nominal' interest paid in year t for a series of advances beginning in year 1, is given by:
I_t = \sum_{n=1}^{t} L_n R_n.
Then AR_t, the average nominal rate of interest paid in year t, for a series of advances made in year 1 or in later years, is given by:
AR_t = \sum_{n=1}^{t} L_n R_n \Big/ \sum_{n=1}^{t} L_n.
Let P_n = the price level in year n. Then the basic assumption in the estimation of 'real' interest rates is that RR_{t,m}, the 'real' interest rate paid in year t for an advance given in year m, is given by:
RR_{t,m} = R_m P_m / P_t.
This assumption defines the real rate of interest as something measured historically and not something which can be measured at the time an advance is made. The definition implies that the real
rate of interest continues to fall for as long as inflation continues (and that the real rate would increase if there were a fall in the general price level). With hyperinflation or inflation
over a very long period of time the real rate may approach zero, but it cannot be negative as it can be in some models which have used the concept of real rates of interest in association with
expectations about future levels of inflation.
It follows from this definition that ARR_t, the average real rate of interest paid in year t for a series of advances, is given by:
ARR_t = \Big( \sum_{n=1}^{t} L_n R_n P_n / P_t \Big) \Big/ \sum_{n=1}^{t} L_n.
The method of calculation of real interest rates actually used is slightly different. The accounts of the development corporations give total interest payments made each year, i.e. I_t = \sum_{n=1}^{t} L_n R_n. These figures were used as the preferred source for interest payments because the annual reports give no details of the method used to calculate the "average" rate of interest paid, figures which are also included in most of the annual reports.
Another factor is that the accounts of the development corporations include figures for the sum of advances, i.e. \sum L_n, as the sum at the end of the financial year. In the formulae given above it
has been implicit that the sum of advances refers in some way to the year as a whole. In the calculations made the sum of advances has been measured as the mean of advances made at the beginning
and end of the year. The average nominal rate of interest in year t can then be written as:
AR_t = 200 \, I_t \Big/ \Big( \sum_{n=1}^{t} L_n + \sum_{n=1}^{t-1} L_n \Big)
The method used for the calculation of the real rates of interest was designed to take advantage of the relative addressing facilities of a spreadsheet. RI_2, the 'real' level of interest payments made in year 2, for example, was calculated as the sum of interest paid for advances made in year 2 plus the interest paid in year 1 deflated by the increase in the price level in year 2 relative
to year 1:
RI_2 = I_2 - I_1 (1 - P_1/P_2)
Generalising this, and using the relative addressing facilities of the spreadsheet, the level of real interest for every year was calculated as the sum of interest paid for advances made in the current year (measured as I_t - I_{t-1}) plus the real interest paid in the previous year (i.e. RI_{t-1}) deflated by the increase in the price level since the previous year:
RI_t = I_t - I_{t-1} + RI_{t-1} P_{t-1} / P_t
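Unwinding this recursion (taking I_0 = 0, and noting that the interest added in year n is I_n - I_{n-1} = L_n R_n) confirms that it reproduces the definition used earlier:

```latex
RI_t = \sum_{n=1}^{t} (I_n - I_{n-1})\,\frac{P_n}{P_t}
     = \sum_{n=1}^{t} L_n R_n\,\frac{P_n}{P_t},
```

which is exactly the numerator in the expression for ARR_t.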
The Retail Price Index was used as a measure of the price level. The calculations were made using the Excel 5 spreadsheet. Columns A to D gave the input for the calculations. Column A was used
for the financial years. Column B for advances made (Ln), column C for the interest payments made (In), and column D for Retail Price Index expressed with a common base year.
Columns E and F were used for the calculation of the nominal and real rates of interest. The expression copied for the average nominal rate given in column E was AR_t = 200 C_t / (B_t + B_{t-1}), and the figures obtained by this method approximated the figures given in many of the annual accounts for the average interest rate paid. Column F gives the average real rate of interest. The cell for year 1 in column F was entered manually as ARR_1 = F_1 = C_1. The expression for the average real rate of interest copied for year 2 to year t is:
ARR_t = 200 (C_t - C_{t-1} + F_{t-1} D_{t-1} / D_t) / (B_t + B_{t-1})
Results for Harlow and Milton Keynes
Some practical results using this calculation are illustrated in the Chart which shows the rate of inflation as measured by year to year changes in the Retail Price Index and both real and
nominal rates of interest paid by Harlow and Milton Keynes Development Corporations. Milton Keynes was designated twenty years later than Harlow and faced a very different economic environment in
terms of higher levels of both interest rates and inflation. The estimation of real rates of interest makes it possible to make direct comparisons between the impact of interest payments on the
two corporations.
At the end of its main nineteen year development period in 1986 Milton Keynes Development Corporation was paying an average rate of interest of 12.5%. But the real rate is estimated at 8.5%. The
difference of 4% is a measure of the extent to which inflation reduced the real impact of Milton Keynes interest payments. At the end of nineteen years of Harlowβs development in 1966 the
Corporation was paying an average rate of interest of 4.9%. But the real rate was 3.9%. Harlow at that stage benefited by inflation by only 1% - a quarter of the reduction of Milton Keynes.
Milton Keynes could be said to have gained more from inflation than Harlow, but this gain did relatively little to reduce the impact of the higher interest rates which Milton Keynes had to pay.
The nominal interest rate of 12.5% paid by Milton Keynes at the end of its development stage in 1986 was a little more than two and a half times the 4.9% paid by Harlow in its nineteenth year.
But the difference in real terms was not very much smaller. The real rate of interest of 8.5% paid by Milton Keynes was a little more than twice the 3.9% paid by Harlow.
The main difference in the impact of inflation on the financial situation of the two development corporations lies in the length of the development periods. Harlow Development Corporation had a
longer life than Milton Keynes and continued to benefit from inflation for another fifteen years. Harlow had time to demonstrate profitability, and by the 1970s had begun to cumulate substantial
financial surpluses. After 1974 Harlow was able to finance capital expenditures from its own resources and did not need to borrow money. The effect of rapid inflation in the 1970s was to reduce
the real rate of interest paid by Harlow Development Corporation to less than 2% by 1976.
The reasons why both the real and nominal rates of interest paid by Harlow increased after 1976 provide an interesting vignette of the financial relations between central government and the
development corporations. By the 1970s Harlow Development Corporation was able not only to finance its own investment but also to lend money. In the early 70s Harlow was paying 6% on the money it
had borrowed from the Treasury but was earning something in the range of 10-15% in interest on the money it had on loan to other bodies. Harlow Development Corporation actually earned £2.2
million in interest on its loans in 1976. It appears that the Treasury was not happy to see Harlow Development Corporation develop as a semi-autonomous lending institution, and the government
appropriated a £9m surplus from the Corporation in 1976. The nominal and real rates of interest paid by Harlow Development Corporation rose after 1976 because the Harlow Development Corporation was
obliged to borrow more money at the then prevailing interest rates of 10% or much more.
At the end of its life in 1980 the real rate of interest paid by Harlow Development Corporation was a little over 3% as compared with the nominal rate of a little over 7%. Inflation had by the
end of the life of Harlow Development Corporation reduced the real interest burden by 4%.
The influence of inflation, acting through interest payments, on the financial performance of the corporations is indirect rather than direct. Inflation does not influence the amount of interest actually paid.
The concept of real interest rates as defined here implicitly assumes that money borrowed is invested in assets whose value increases with inflation. Assets belong to the capital account and are
recorded in the corporations' Balance Sheets at historic cost. But rent income derived from these assets is classified to the General Revenue Account where it is set against interest payments.
The rate of return on assets is a measure of financial performance which is independent of interest rates paid by the corporation but is expressed in the same units as the rate of interest, and
is a measure which will increase with inflation if the development corporation increases its levels of rents in accordance with the general level of prices.
The rate of return on the assets transferred to the Commission for New Towns by Harlow Development Corporation in 1980 at the end of its life was 12% - as compared with the nominal rate of
interest paid of 7%. But the real rate of interest was only 3%. It seems reasonable to suggest that 4% of this 12% should be regarded as gain solely attributable to inflation. The other 8% is an
indicator of the rate of return at that stage on Harlow's planning activities. The qualification "at that stage" is appropriate because it can be expected that rent income will usually lag inflation. The crucial component of the development corporations' income comes from industrial and commercial property let on leases which may be subject to review only every five years or longer.
At the end of its development stage in 1986 the rate of return on capital expenditure by Milton Keynes Development Corporation was only 2%. The difference of 4% in that year between the nominal and real rates of interest paid suggests that the rate of return of 2% can be wholly attributed to inflation, and that at that stage of development, the benefits from the creation of new urban
values in Milton Keynes had not begun to manifest themselves in levels of rent. In the case of Milton Keynes the value of the assets created became manifest only when they were revalued or sold
in 1987 or later years.
Concluding remarks
The estimation of real interest rates seems particularly useful in the context of new town development because of the long period over which expenditures are made and over which financial performance might be evaluated. But the method of measurement described could be applied to investigate the influence of inflation on any organisation which finances its investment from fixed-interest loans. The method of calculation is of interest in demonstrating the extraordinary usefulness of the relative addressing facility of the spreadsheet.
Spectrum of Three Anyons in a Harmonic Potential and the Third Virial Coefficient
Sen, Diptiman (1992) Spectrum of Three Anyons in a Harmonic Potential and the Third Virial Coefficient. In: Physical Review Letters, 68 (20). pp. 2977-2980.
We use supersymmetric quantum mechanics to show that the spectrum of three anyons in a harmonic potential exhibits an almost complete mirror symmetry about semions. Barring a subset of exactly solved
states, all states come in pairs such that the energy of one state with the statistics parameter $\theta$ is equal to the energy of its partner state at $\pi-\theta$. Bosons, semions, and fermions
have $\theta=0,\pi/2$, and $\pi$, respectively. From this, we show that the third virial coefficient of an ideal anyon gas is exactly mirror symmetric about semions.
a question for which I *should* know the answer
You could go about it this way:
You have two masses, m_1 and m_2, compressing a spring between them.
Take it in three different scenarios;
1) m1 >> m2
2) m1 = m2
3) m1 << m2
In scenario 1, mass 1 will remain almost stationary while mass two moves off with significant velocity.
In scenario 2, the masses will move off with equal velocity
In scenario 3, mass 2 will remain almost stationary, mass 1 will move off.
Because it's the same spring in each scenario (not counting the mass or inertia of the spring), you know you have a constant potential energy and therefore a consistent force acting on the masses each time.
Instead of viewing one as 'firing' the other, you could simply view it as it really is: gunpowder explosion pushes each object away with equal momentum.
Let's take it a step further, and assume the spring is elastic in the sense that no energy is lost. We should, at some point, be able to establish a ratio between the original energy of the system
and the masses to find velocity.
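Here is a minimal sketch of that calculation (my own illustration, not the poster's worked solution): combine momentum conservation, m1*v1 = m2*v2, with the spring's stored energy E = ½m1v1² + ½m2v2², and solve for the two recoil speeds:

```python
import math

def recoil_speeds(m1, m2, E):
    """Speeds of two masses pushed apart from rest by a spring storing energy E.
    Momentum: m1*v1 = m2*v2; energy: E = 0.5*m1*v1**2 + 0.5*m2*v2**2.
    Solving the pair gives v1 = sqrt(2*E*m2 / (m1*(m1 + m2))), v2 = (m1/m2)*v1."""
    v1 = math.sqrt(2 * E * m2 / (m1 * (m1 + m2)))
    return v1, m1 * v1 / m2

# Scenario 2 (m1 = m2): equal speeds, energy shared equally.
print(recoil_speeds(1.0, 1.0, 1.0))    # (1.0, 1.0)

# Scenario 1 (m1 >> m2): mass 1 barely moves; mass 2 carries almost all the speed.
print(recoil_speeds(100.0, 1.0, 1.0))
```

Note the limiting cases match the three scenarios above: as m1 grows, v1 shrinks toward zero while v2 approaches sqrt(2E/m2).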
I did it, but I don't have time to post the whole thing right now.
Adding and Subtracting Time ? Confused?
June 29th 2012, 02:50 AM #1
May 2012
Adding and Subtracting Time ? Confused?
Here is a trick that i got from online
Adding Times
Follow these steps:
- Add the hours
- Add the minutes
- If the minutes are 60 or more, subtract 60 from the minutes and add 1 to hours
Subtracting Times
Follow these steps:
- Subtract the hours
- Subtract the minutes
- If the minutes are negative, add 60 to the minutes and subtract 1 from hours.
Problem :
However this trick does not work when I need to subtract:
5 PM from 12:00 PM (what happens now?)
and also what should I do if I need to subtract 2 AM of one day from 5 PM of the day before?
Any tips or tricks ??
Re: Adding and Subtracting Time ? Confused?
Extend your negative-rule to hours. It is much easier in the 24h-format:
Subtract the hours
Subtract the minutes
If the minutes are negative, add 60 to the minutes and subtract 1 from hours.
If the hours are negative, add 24 to the hours and subtract 1 form the days.
AM/PM would require studying a lot of cases; just convert the times before calculating with them.
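A small sketch of that suggestion (my own illustration, not from the thread): convert AM/PM readings to the 24-hour clock first, then apply the borrow rules, treating negative hours as crossing midnight into the previous day:

```python
def to_24h(hour12, minute, pm):
    """Convert a 12-hour clock reading to the 24-hour clock.
    12 AM maps to hour 0 and 12 PM (noon) maps to hour 12."""
    return hour12 % 12 + (12 if pm else 0), minute

def elapsed(h1, m1, h2, m2):
    """Elapsed (hours, minutes) from (h2, m2) to (h1, m1), both in 24-hour form."""
    hours, minutes = h1 - h2, m1 - m2
    if minutes < 0:      # borrow 60 minutes from the hours
        minutes += 60
        hours -= 1
    if hours < 0:        # borrow 24 hours: the earlier time was the previous day
        hours += 24
    return hours, minutes

# Noon (12:00 PM) to 5:00 PM -> 5 hours
print(elapsed(*to_24h(5, 0, pm=True), *to_24h(12, 0, pm=True)))    # (5, 0)

# 5:00 PM one day to 2:00 AM the next -> 9 hours
print(elapsed(*to_24h(2, 0, pm=False), *to_24h(5, 0, pm=True)))    # (9, 0)
```

The `hour12 % 12` step is what resolves the awkward noon/midnight cases discussed below.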
Re: Adding and Subtracting Time ? Confused?
Are you suggesting that,i should always convert am/pm to 24 hr before calculating the difference?
Re: Adding and Subtracting Time ? Confused?
That would certainly be the simplest thing to do. When you are talking about time, there is a lot of ambiguity. I interpret 12:00 PM as "noon", not "midnight" ("midnight" is 12 AM, although, strictly speaking, it is incorrect to assign "AM" or "PM" to either noon or midnight). Is that what is intended? When you say "5 PM from 12:00 PM" do you mean "noon of the next day" (7 + 12 = 19 hours) or "midnight" (7 hours)?
Re: Adding and Subtracting Time ? Confused?
Is that what is intended? When you say "5 PM from 12:00 PM" do you mean "noon of the next day" (7+ 12= 19 hours) or "midnight" (7 hours)?
Noon to 5PM (Which is 5 hours)
Re: Adding and Subtracting Time ? Confused?
Yeah, but if time passes 11 AM -> 12 PM -> 1 PM -> ... -> 11 PM -> 12 AM -> 1 AM, you can imagine the mess of calculating with those numbers.
An Interactive Approach for Calculating Ship Boundary Layers and Wakes for Nonzero Froude Number
Y. Tahara, F. Stern (Iowa Institute of Hydraulic Research, The University of Iowa, USA)
B. Rosen (South Bay Simulations Inc., USA)
ABSTRACT
An interactive approach is set forth for calculating ship boundary layers and wakes for nonzero Froude number. The Reynolds-averaged Navier-Stokes equations are solved using a small domain with edge conditions matched with those from a source-doublet-Dawson method solved using the displacement body. An overview is given of both the viscous- and inviscid-flow methods, including their treatments of the free-surface boundary conditions and interaction procedures. Results are presented for the Wigley hull, including comparisons for zero and nonzero Froude number and with available experimental data and the inviscid-flow results, which validate the overall approach and enable an evaluation of the wave-boundary layer and wake interaction.
NOMENCLATURE
A', B', etc.; A_{ij}, B_{ij}; a_j, b_{jk}, b_j^i   panel-influence and coordinate-transformation coefficients
C_D, C_P, C_U, C_{nb}   finite-analytic coefficients (nb = NE, NW, SE, etc.)
C_f   friction coefficient (= 2\tau_w / \rho U_0^2)
C_p   pressure coefficient
C_R   residuary-resistance coefficient (= 2R / \rho S U_0^2)
Fr   Froude number (= U_0 / \sqrt{gL})
Historically, inviscid-flow methods have been used to calculate wavemaking and viscous-flow methods the boundary layer and wake, in both cases, without accounting for the interaction. Recent work on
wavemaking has focused on the solution of the so-called Neumann-Kelvin problem using both Rankine- and Havelock-source approaches. Methods implementing these approaches were recently competitively evaluated and ranked by comparing their results with towing-tank experimental data [1]. In general, the methods underpredicted the amplitude of the divergent bow waves, were lacking in high wave-number detail in the vicinity of the bow-wave cusp line, and overpredicted the amplitudes of the waves close to the stern. These difficulties were primarily attributed to nonlinear and viscous effects. The methods using the Havelock-source approach generally outperformed those using the Rankine-source approach, except with regard to the near-field results (i.e., within one beam length of the model) for which one of the latter methods [2] was found to be far superior. Considerable effort has been put forth in the development of viscous-flow methods for ship boundary layers and
wakes. Initially, three-dimensional integral and differential boundary-layer equation methods were developed; however, these were found to be inapplicable near the stern and in the wake. More
recently, efforts have been directed towards the development of Navier-Stokes (NS) and Reynolds-averaged Navier-Stokes (RANS) equation methods; hereafter both of these will simply be referred to as RANS equation methods. At present, the status of these methods is such that practical ship geometries can be considered, including complexities such as appendages and propellers. Comparisons with
experimental data indicate that many features of the flow are adequately simulated; however, turbulence modeling and grid generation appear to be pacesetting issues with regard to future developments (see, e.g., the review by Patel [3] and the Proceedings of the 5th International Conference on Numerical Ship Hydrodynamics [4]). Relatively little work has been done on the interaction between
wavemaking and boundary layer and wake. Most studies have focused separately on either the effects of viscosity on wavemaking or the effects of wavemaking (i.e., waves) on the boundary layer and
wake. Professor Landweber and his students have both demonstrated experimentally the dependence of wave resistance on viscosity and shown computationally that by including the effects of viscosity in inviscid-flow calculations of wave resistance better agreement with experimental data is obtained (most recently, [5]). Such effects have been confirmed by others, including other more detailed
aspects of the flow field such as surface-pressure distributions and wave profiles and patterns [6]. Most studies concerning the effects of waves on boundary layer and wake have been of an
approximate nature utilizing integral methods and assuming small-crossflow conditions (see Stern [7] for a more complete review, including references). In [7,8], experiment and theory are combined
to study the fundamental aspects of the problem utilizing a unique, simple model and computational geometry, which enabled the isolation and identification of certain important features of the wave-induced effects. In particular, the variations of the wave-induced piezometric-pressure gradients are shown to cause acceleration and deceleration phases of the streamwise velocity component and alternating direction of the crossflow, which results in large oscillations of the displacement thickness and wall-shear stress as compared to the no-wave condition. For the relatively simple
geometry studied, first-order boundary-layer calculations with a symmetry-condition approximation for the free-surface boundary conditions were shown to be satisfactory; however, extensions of the computational approach for practical geometries were not successful [9]. Miyata et al. [10] and Hino [11] have pursued a comprehensive approach to the present problem in which the NS equations
(sub-grid scale and Reynolds averaged, respectively) are solved using a large domain with approximate free-surface boundary conditions. In both cases, the basic algorithms closely follow those of MAC [12] and SUMMAC [13]. However, [10] uses a time-dependent free-surface conforming grid, whereas [11] uses a fixed grid which does not conform to the free surface. The results from both
approaches are promising, but, thus far, have had difficulties in accurately resolving the boundary-layer and wake regions and, in the case of [10], have been limited to low Re. The present
interactive approach is also comprehensive. Two of the leading inviscid- [2] and viscous-flow [14] methods are modified and extended for interactive calculations for ship boundary layers and wakes for nonzero Fr. The interaction procedures are based on extensions of those developed by one of the authors for zero Fr [15]. The work of [7,8,15] is precursory to the present study. Also, it should be mentioned that the present study is part of a large project concerning free-surface effects on boundary layers and wakes. Some of the related studies under this project will be referenced
later. In the following, an overview is given of both the viscous- and inviscid-flow methods, with particular emphasis on their treatments of the free-surface boundary conditions and the interaction
procedures. Results are presented for the Wigley hull, including comparisons for zero and nonzero Fr and with available experimental data and inviscid-flow results, which validate the overall
approach and enable an evaluation of the wave-boundary layer and wake interaction. In the presentation of the computational methods and results and discussions to follow, variables are either defined
in the text or in the NOMENCLATURE and are nondimensionalized using the ship length L, freestream velocity U_0, and fluid density ρ.
COMPUTATIONAL METHODS
Consider the flow past a ship-like body,
moving steadily at velocity UO, and intersecting the free surface of an incompressible viscous fluid. As depicted in figure 1, the flow field can be divided into four regions in each of which
different or no approximations can be made to the governing RANS equations: region 1 is the inviscid flow; region 2 is the bow flow; region 3 is the thin boundary layer; and region 4 is the thick
boundary layer and wake. The resulting equations for regions 1 and 3 and their interaction (or lack of one) are well known. Relatively little is known about region 2. Recent experiments concerning scale effects on near-field wave patterns have indicated a Re dependency for the bow wave both in amplitude and divergence angle [16]; however, this aspect of the problem is deferred for later study.
Herein, we are primarily concerned with the flow
in region 4 and its interaction with that in region 1. As discussed earlier, the description of the flow in region 4 requires the solution of the complete RANS equations (or, in the absence of flow
reversal, the so-called partially-parabolic RANS equations, however, this simplification will not be considered here). There are two possible approaches to the solution of the RANS equations: a
global approach, in which one set of governing equations appropriate for both the inviscid- and viscous-flow regions are solved using a large solution domain so as to capture the viscous-inviscid interaction; and an interactive approach, in which different sets of governing equations are used for each region and the complete solution obtained through the use of an interaction law, i.e.,
patching or matching conditions. Both approaches are depicted in figure 1. The former approach is somewhat more rigorous because it does not rely on the patching conditions that usually involve
approximations. Nonetheless, for a variety of reasons, both types of approaches are of interest. In [15], both approaches were evaluated for zero Fr by comparing interactive and large-domain solutions for axisymmetric and simple three-dimensional bodies using the same numerical techniques and algorithms and turbulence model. It is shown that both approaches yield satisfactory results, although the interaction solutions appear to be computationally more efficient. As mentioned earlier, the present study utilizes the interactive approach. This takes advantage of the latest developments in both the inviscid- and viscous-flow technologies; however, a large-domain solution for the present problem is also of interest and a comparative evaluation as was done previously for zero Fr is planned for study under the present project for nonzero Fr.
Viscous-Inviscid Interaction
Referring to figure 1, there are two primary differences between the interactive and large-domain approaches with regard to the solution of the RANS equations: (1) the size of the solution domain, i.e., the placement of the outer boundary S_o; and (2) the boundary (i.e., edge) conditions specified thereon. For the large-domain solution, uniform-flow and wave-radiation conditions are appropriate, whereas the interaction solution requires the specification of the match
boundary (i.e., S_o) as well as an interaction law, and also a method for calculating the inviscid flow. In the present study, solutions were obtained with the match boundary at about 2δ, where δ is the boundary-layer and wake thickness. The interaction law is based on the concept of displacement thickness δ*. A three-dimensional δ* for a thick boundary layer and wake can be defined
unambiguously by the two requirements that it be a stream surface of the inviscid flow continued from outside the boundary layer and wake and that the inviscid-flow discharge between this surface
and any stream surface exterior to the boundary layer and wake be equal to the actual discharge between the body and wake centerplane and the latter stream surface. A method for implementing this
definition for practical geometries is presently under development [17]; however, in lieu of this, an approximate definition is used in which two-dimensional definitions for δ*, i.e.

\delta^* = \int_0^{\delta} \left(1 - \frac{U}{U_e}\right) dr   (1)

for the keelplane and waterplane at each station are connected by a second-order polynomial. In summary, the inviscid-flow solution is obtained for the displacement body δ*. This solution then provides the boundary conditions for the viscous-flow solution, i.e.

U(S_o) = U_p(S_o) = U_e,  W(S_o) = W_p(S_o) = W_e,  p(S_o) = p_p(S_o) = p_e   (2)

Because δ* and V_p(S_o) are not known a priori, an initial
guess must be provided and the complete solution obtained by iteratively updating the viscous- and inviscid-flow solutions until the patching conditions (1) and (2) are satisfied.
Viscous Flow
The viscous flow is calculated using the large-domain method of Patel et al. [14], modified and extended for interactive calculations and to include free-surface boundary conditions. The details of the basic
basic method are provided by [141. Herein, an overview is given as an aid in understanding the present modifications and extensions. Eguations and Coordinate System The RANS equations are written in
the physical domain using cylindrical coordinates (x, r, θ) as follows:

\frac{\partial U}{\partial x} + \frac{1}{r}\frac{\partial (rV)}{\partial r} + \frac{1}{r}\frac{\partial W}{\partial \theta} = 0   (3)

\frac{DU}{Dt} = -\frac{\partial p}{\partial x} - \frac{\partial \overline{uu}}{\partial x} - \frac{1}{r}\frac{\partial (r\,\overline{uv})}{\partial r} - \frac{1}{r}\frac{\partial \overline{uw}}{\partial \theta} + \frac{1}{Re}\nabla^2 U   (4)

\frac{DV}{Dt} - \frac{W^2}{r} = -\frac{\partial p}{\partial r} - \frac{\partial \overline{uv}}{\partial x} - \frac{1}{r}\frac{\partial (r\,\overline{vv})}{\partial r} - \frac{1}{r}\frac{\partial \overline{vw}}{\partial \theta} + \frac{\overline{ww}}{r} + \frac{1}{Re}\left(\nabla^2 V - \frac{V}{r^2} - \frac{2}{r^2}\frac{\partial W}{\partial \theta}\right)   (5)

\frac{DW}{Dt} + \frac{VW}{r} = -\frac{1}{r}\frac{\partial p}{\partial \theta} - \frac{\partial \overline{uw}}{\partial x} - \frac{1}{r}\frac{\partial (r\,\overline{vw})}{\partial r} - \frac{1}{r}\frac{\partial \overline{ww}}{\partial \theta} - \frac{\overline{vw}}{r} + \frac{1}{Re}\left(\nabla^2 W + \frac{2}{r^2}\frac{\partial V}{\partial \theta} - \frac{W}{r^2}\right)   (6)

with
\frac{D}{Dt} = \frac{\partial}{\partial t} + U\frac{\partial}{\partial x} + V\frac{\partial}{\partial r} + \frac{W}{r}\frac{\partial}{\partial \theta} \quad \text{and} \quad \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial \theta^2}

Closure of the RANS equations is attained through the use of the standard k-ε turbulence model without modifications for free-surface effects. The limited experimental data available for surface-piercing bodies [18] indicate that, near a free surface, the normal component of turbulence is damped and the longitudinal and transverse components are increased. This effect has also been observed in open-channel flow [19] and in recent measurements for free-surface effects on the wake of a submerged flat plate [20] and a plane jet [21]. Such a turbulence structure cannot, in fact, be simulated with an isotropic eddy viscosity turbulence model like the present one; however, this aspect of the problem is deferred for later study.

In the standard k-ε turbulence model, each Reynolds stress is related to the corresponding mean rate of strain by the isotropic eddy viscosity ν_t as follows:

-\overline{uv} = \nu_t\left(\frac{\partial U}{\partial r} + \frac{\partial V}{\partial x}\right), \quad -\overline{uw} = \nu_t\left(\frac{1}{r}\frac{\partial U}{\partial \theta} + \frac{\partial W}{\partial x}\right), \quad -\overline{vw} = \nu_t\left(\frac{1}{r}\frac{\partial V}{\partial \theta} + \frac{\partial W}{\partial r} - \frac{W}{r}\right),
-\overline{uu} = \nu_t\left(2\frac{\partial U}{\partial x}\right) - \frac{2}{3}k, \quad -\overline{vv} = \nu_t\left(2\frac{\partial V}{\partial r}\right) - \frac{2}{3}k, \quad -\overline{ww} = 2\nu_t\left(\frac{1}{r}\frac{\partial W}{\partial \theta} + \frac{V}{r}\right) - \frac{2}{3}k   (7)

ν_t is defined in terms of the turbulent kinetic energy k and its rate of dissipation ε by

\nu_t = C_\mu \frac{k^2}{\varepsilon}   (8)

where C_\mu is a model constant and k and ε are governed by the modeled transport equations

\frac{Dk}{Dt} = \frac{\partial}{\partial x}\left(\frac{1}{R_k}\frac{\partial k}{\partial x}\right) + \frac{1}{r}\frac{\partial}{\partial r}\left(\frac{r}{R_k}\frac{\partial k}{\partial r}\right) + \frac{1}{r^2}\frac{\partial}{\partial \theta}\left(\frac{1}{R_k}\frac{\partial k}{\partial \theta}\right) + G - \varepsilon   (9)

\frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x}\left(\frac{1}{R_\varepsilon}\frac{\partial \varepsilon}{\partial x}\right) + \frac{1}{r}\frac{\partial}{\partial r}\left(\frac{r}{R_\varepsilon}\frac{\partial \varepsilon}{\partial r}\right) + \frac{1}{r^2}\frac{\partial}{\partial \theta}\left(\frac{1}{R_\varepsilon}\frac{\partial \varepsilon}{\partial \theta}\right) + C_{\varepsilon 1}\frac{\varepsilon}{k}G - C_{\varepsilon 2}\frac{\varepsilon^2}{k}   (10)

G is the turbulence generation term

G = \nu_t\left\{2\left[\left(\frac{\partial U}{\partial x}\right)^2 + \left(\frac{\partial V}{\partial r}\right)^2 + \left(\frac{1}{r}\frac{\partial W}{\partial \theta} + \frac{V}{r}\right)^2\right] + \left(\frac{\partial U}{\partial r} + \frac{\partial V}{\partial x}\right)^2 + \left(\frac{1}{r}\frac{\partial U}{\partial \theta} + \frac{\partial W}{\partial x}\right)^2 + \left(r\frac{\partial}{\partial r}\left(\frac{W}{r}\right) + \frac{1}{r}\frac{\partial V}{\partial \theta}\right)^2\right\}   (11)

The effective Reynolds number R_\phi is defined as

\frac{1}{R_\phi} = \frac{1}{Re} + \frac{\nu_t}{\sigma_\phi}   (12)

in which φ = k for the k-equation (9) and φ = ε for the ε-equation (10). The model constants are: C_\mu = 0.09, C_{\varepsilon 1} = 1.44, C_{\varepsilon 2} = 1.92, \sigma_U = \sigma_V = \sigma_W = \sigma_k = 1, \sigma_\varepsilon = 1.3.

The governing equations (3) through (12) are transformed into nonorthogonal curvilinear coordinates such that the computational domain forms a simple rectangular parallelepiped with equal grid spacing. The transformation is a partial one since it involves the coordinates only and not the velocity components (U,V,W). The transformation is accomplished through the use of the expression for the divergence and "chain-rule" definitions of the gradient and Laplacian operators, which relate the orthogonal curvilinear coordinates x^i = (x,r,θ) to the nonorthogonal curvilinear coordinates ξ^i = (ξ,η,ζ). In this manner, the governing equations (3) through (12) can be rewritten in the following form of the continuity and convective-transport equations

\frac{\partial}{\partial \xi}\left(b_1^1 U + b_2^1 V + b_3^1 W\right) + \frac{\partial}{\partial \eta}\left(b_1^2 U + b_2^2 V + b_3^2 W\right) + \frac{\partial}{\partial \zeta}\left(b_1^3 U + b_2^3 V + b_3^3 W\right) = 0   (13)

g^{11}\frac{\partial^2 \phi}{\partial \xi^2} + g^{22}\frac{\partial^2 \phi}{\partial \eta^2} + g^{33}\frac{\partial^2 \phi}{\partial \zeta^2} = 2A_\phi\frac{\partial \phi}{\partial \xi} + 2B_\phi\frac{\partial \phi}{\partial \eta} + 2C_\phi\frac{\partial \phi}{\partial \zeta} + R_\phi\frac{\partial \phi}{\partial t} + S_\phi   (14)

Discretization and Velocity-Pressure Coupling
The
convective-transport equations (14) are reduced to algebraic form through the use of a revised and simplified version of the finite-analytic method. In this method, equations (14) are linearized in each local rectangular numerical element, Δξ = Δη = Δζ = 1, by evaluating the coefficients and source functions at the interior node P and transformed again into a normalized form by a simple coordinate stretching. An analytic solution is derived by decomposing the normalized equation into one- and two-dimensional partial-differential equations. The solution to the former is readily obtained. The solution to the latter is obtained by the method of separation of variables with specified boundary functions. As a result, a twelve-point finite-analytic formula for unsteady, three-dimensional, elliptic equations is obtained in the form
\phi_P = \frac{1}{1 + C_P\left(C_U + C_D + \frac{1}{\Delta t}\right)}\left[\sum_{nb=1}^{8} C_{nb}\,\phi_{nb} + C_P\left(C_U\,\phi_U + C_D\,\phi_D + \frac{\phi_P^{\,n-1}}{\Delta t} - S\right)\right]   (15)

It is seen that φ_P depends on all eight neighboring nodal values in the crossplane as well as the values at the upstream and downstream nodes φ_U and φ_D, and the values at the previous time step φ_P^{n-1}. For large values of the cell Re, equation (15) reduces to the partially-parabolic formulation which was used previously in
other applications. Since equations (15) are implicit, both in space and time, at the current crossplane of calculation, their assembly for all elements results in a set of simultaneous algebraic
equations. If the pressure field is known, these equations can be solved by the method of lines. However, since the pressure field is unknown, it must be determined such that the continuity equation
is also satisfied. The coupling of the velocity and pressure fields is accomplished through the use of a two-step iterative procedure involving the continuity equation based on the SIMPLER algorithm.
In the first step, the solution to the momentum equations for a guessed pressure field is corrected at each crossplane such that continuity is satisfied. However, in general, the corrected
velocities are no longer a consistent solution to the momentum equations for the guessed p. Thus, the pressure field must also be corrected. In the second step, the pressure field is updated again
through the use of the continuity equation. This is done after a complete solution to the velocity field has been obtained for all crossplanes. Repeated global iterations are thus required in order
to obtain a converged solution. The procedure is facilitated through the use of a staggered grid. Both the pressure-correction and pressure equations are derived in a similar manner by substituting equation (15) for (U,V,W) into the discretized form of the continuity equation (13) and representing the pressure-gradient terms by finite differences.
Solution Domain and Boundary Conditions
The solution domain is shown in figure 1. In terms of the notation of figure 1, the boundary conditions on each of the boundaries are as follows. On the inlet plane S_i, the initial conditions for φ are specified from simple flat-plate and the inviscid-flow solutions. On the body surface S_b, a two-point wall-function approach is used. On the symmetry plane S_k, the conditions imposed are ∂(U,V,p,k,ε)/∂θ = W = 0. On the exit plane S_e, axial diffusion is negligible so that the exit conditions used are ∂²φ/∂x² = 0, and a zero-gradient condition is used for p. On the outer boundary S_o, the edge conditions are specified according to (2), i.e., (U,W,p) = (U_e,W_e,p_e) and ∂(k,ε)/∂r = 0, where (U_e,W_e,p_e) are obtained from the inviscid-flow solution evaluated at the match boundary S_o. On the free surface S_η (or simply η), there are two boundary conditions, i.e.

V \cdot n = 0   (16)

\tau_{ij}\,n_j = \tau_{ij}^{\,e}\,n_j   (17)
are the fluid- and external-stress tensors respectively, the latter, for convenience, including surface tension. The kinematic boundary condition expresses the requirement that ~ is a stream surface
and the dynamic boundary condition that the normal and tangential stresses are continuous across it. Note that ~ itself is unknown and must be determined as part of the solution. In addition,
boundary conditions are required for the turbulence parameters, k and Β£; however, at present, these are not well established. In the present study, the following approxima- tions were made in
employing (16) and (17~: (a) the external stress and surface tension were neglected; (b) the normal viscous stress and both the normal and tan- gential Reynolds stresses were neglected; (c) the
curva- ture of the free surface was assumed small and the tan- gential gradients of the normal velocity components were neglected in the tangential stresses; and (d) the wave ele- vation was assumed
small such that both (16) and (17) were represented by first-order Taylor series expansions about the mean wave-elevation surface (i.e., the water- plane Sw). Subject to these approximations, (16)
and (17) reduce to the following: (UX11 X + Vylly - WZ) ~ = 0 (18) Sw 19(Sw)=ll/Fr2-~] afi I (19) az ~ low Ia~v,k,Β£' =0 US ~ NEW (20) where Cartesian coordinates (x,y9z) have been used in (18) and
(19~. Conditions (18) through (20) were implemented numerically as follows. The kinematic condition (18) was used to solve for the unknown free- surface elevation ~ by expressing the derivatives in
finite-difference form and 11 in terms of its difference from an assumed (or previous) value. A backward dif- ference was used for the x-derivative, a central difference for the y-derivative, and the
inviscid-flow lip was used as an initial guess. The dynamic conditions, (19) and (20), were used in conjunction with the solution for ~ in solving the pressure and momentum and turbulence model
equations, respectively. Backward differences were used for the z- and ~derivatives. Inviscid Flow The inviscid flow is calculated using the method of Rosen [2], i.e., the SPLASH computer code. The
method is an extended version of the basic panel method of Maskew [22,23] originally developed for the prediction of subsonic aerodynamic flows about arbitrary configurations, modified to include the presence of a free surface and gravity waves both for submerged and surface-piercing bodies. As is the case with the basic
method, lifting surfaces and their associated wake treatments as well as wall boundaries are included; however, the present overview and calculations are for nonlifting unbounded flow (see [24] for
SPLASH results for lifting flow). The details of the basic method are provided by [22,23]. Herein, an overview is given as an aid in understanding the extensions for the inclusion of the free surface
and gravity waves and the present interaction calculations. The flow is assumed irrotational such that the governing differential equation is the Laplace equation

\nabla^2 \phi = 0   (21)

where φ is the external perturbation velocity potential, i.e.

V = U_0\,\hat{x} + \nabla \phi   (22)

A solution for φ may be obtained by defining also an internal perturbation potential φ_i and applying Green's theorem to both the inner and outer regions and combining the resulting expressions to obtain

\phi = -\int_{S_b} \left\{ \mu\,\frac{\partial}{\partial n_Q}\!\left(\frac{1}{R_{PQ}}\right) + \frac{\sigma}{R_{PQ}} \right\} dS   (23)

where R_{PQ} is the distance from the surface point Q to the field point P and μ = φ_i − φ and σ = ∂(φ − φ_i)/∂n_Q are the dipole and source strengths, respectively. In [22], the nature of solutions to (23) is investigated for two different specifications for φ_i, i.e., φ_i = 0 and U_0 x. In both cases, (23) is solved for the surface potential (i.e., φ(S_b)) by representing the body by flat quadrilateral panels over which μ and σ are assumed constant and utilizing the far-field φ → 0 and body ∂φ/∂n = −U_0 n_x boundary conditions. The zero internal perturbation potential formulation (φ_i = 0) is shown to produce "results of comparable accuracy to those from higher-order methods for the same density of control points." In this case, the velocity normal to the external surface V_n is

V_n = U_0\,n_x + \partial \phi / \partial n = U_0\,n_x + \sigma   (24)

and the velocity tangent to the external surface V_t is

V_t = U_0\,t_x + \partial \phi / \partial t = U_0\,t_x - \partial \mu / \partial t   (25)

where t_x is the x-component of a tangent vector and t is arclength in a tangential direction. For solid surfaces, V_n is usually zero, but it may be a specified nonzero value to simulate body motion,
boundary-layer growth, inflow and outflow, control-surface deflection, etc. Hence, in the basic method, (24) is used to evaluate the source strengths directly. The corresponding doublet strengths are
then given by solution of the discretized form of (23~. Values of V' are subsequently computed using (25) with a central difference for the t-derivative. It should be recognized that the so-called
zero internal perturbation formulation is, in fact, equivalent to methods based on Green's third formula applied directly to the external perturbation potential (e.g., [251~. In the SPLASH code, the
internal zero-perturbation boundary condition is satisfied not only inside the submerged portion of the configuration, but also on the "other side" of a finite portion of the free surface. Both are represented by source-doublet singularity panels, and flow leakage from one side of the free surface to the other, at the free-surface outer boundary, is assumed to be negligible. This assumption is valid if the outer boundary of the free surface is sufficiently far from the configuration, and if wave disturbances are eliminated before reaching the free-surface outer boundary. In this case, the discretized form of (23) is

φ_i = Σ_{S_b+S_w} A_ij μ_j + Σ_{S_b+S_w} B_ij σ_j = 0   (26)

The free-surface shape is determined by representing the undisturbed free surface by panels, whereupon free-surface boundary conditions linearized with respect to zero Fr are imposed [26]. The zero Fr velocities, U₀, V₀, and W₀, are obtained by first considering all free-surface panels as solid and fixed (in contrast to a traditional approach which employs the double panel or image model). The nonzero Fr velocities are then expressed as small increments to those for zero Fr. The velocities tangent and normal to a free-surface panel are, respectively,

U ≈ U₀ + ΔU,  V ≈ V₀ + ΔV   (27)

and

V_n = W ≈ W₀ + ΔW ≈ ΔW   (28)

since W₀ = 0 for a free-surface panel. Through Bernoulli's equation, the pressure on free-surface panels is a function of local velocity, and is approximated by retaining only first-order incremental velocity terms

p = ½[1 − (U² + V² + W²)] ≈ ½[1 − (U₀² + V₀²)] − {U₀ΔU + V₀ΔV} = ½[1 − (U₀² + V₀²)] − {U₀(U − U₀) + V₀(V − V₀)}   (29)

Free-surface boundary conditions are linearized in a similar manner, retaining only first-order incremental velocity and surface-elevation terms.
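The first-order pressure approximation of (29) is easy to check numerically. A minimal sketch (the velocity values below are hypothetical; it assumes the nondimensional Bernoulli pressure p = ½[1 − (U² + V² + W²)] and small increments about the zero-Fr panel velocities, with W₀ = 0):

```python
# Sketch: linearized Bernoulli pressure on a free-surface panel versus the
# full quadratic expression. All numerical values are hypothetical.

def p_full(U, V, W):
    # Full nondimensional Bernoulli pressure: p = 1/2 * (1 - (U^2 + V^2 + W^2))
    return 0.5 * (1.0 - (U**2 + V**2 + W**2))

def p_linear(U0, V0, dU, dV):
    # First-order linearization about the zero-Fr state (U0, V0, W0 = 0):
    # p ~ 1/2 * (1 - (U0^2 + V0^2)) - (U0*dU + V0*dV)
    return 0.5 * (1.0 - (U0**2 + V0**2)) - (U0 * dU + V0 * dV)

U0, V0 = 0.95, 0.10              # zero-Fr panel velocities (hypothetical)
dU, dV, dW = 0.02, -0.01, 0.015  # small nonzero-Fr increments (hypothetical)

exact = p_full(U0 + dU, V0 + dV, dW)
approx = p_linear(U0, V0, dU, dV)
# The discarded terms are quadratic in the increments, so the error is O(delta^2).
print(exact, approx, abs(exact - approx))
```

The dropped terms are quadratic in the increments, which is why the formulation is restricted to small departures from the zero-Fr state.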
The kinematic free-surface boundary condition (18) is approximated by

W = V_n ≈ U₀η_x + V₀η_y ≈ (U₀² + V₀²)^{1/2} η_s₀   (30)

where the subscript s₀ denotes differentiation along a zero Fr streamline. The dynamic free-surface boundary condition (19), after differentiation along s₀, and substituting for η_s₀ from (30), becomes

V_n = Fr² (U₀² + V₀²)^{1/2} ∂p/∂s₀   (31)
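The streamwise pressure gradient that enters the dynamic condition is evaluated with a five-point backward difference. A generic uniform-spacing stencil can be sketched as follows (the paper's actual metric-transformed, blocked-grid implementation is not reproduced here):

```python
# Sketch: one-sided five-point backward difference, f'(x_n) from samples at
# x_{n-4}..x_n on a uniform grid of spacing h. The stencil is fourth-order
# accurate and exact for polynomials up to degree 4.

def backward_diff5(f, n, h):
    # f: sequence of samples, n: index with n >= 4, h: grid spacing
    return (3*f[n-4] - 16*f[n-3] + 36*f[n-2] - 48*f[n-1] + 25*f[n]) / (12*h)

h = 0.1
x = [i * h for i in range(10)]
f = [xi**3 for xi in x]                      # test function f(x) = x^3
print(backward_diff5(f, 9, h), 3 * x[9]**2)  # both ~ 2.43 (exact for cubics)
```

Near the free-surface outer boundary the paper deliberately uses shorter stencils, since the added truncation error damps the waves before they reach the edge of the panelled region.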
A five-point backward difference is used in the ξ and η directions, and the free-surface grid metrics are used to compute the pressure gradient

∂p/∂s₀ = (U₀ ∂p/∂x + V₀ ∂p/∂y) / (U₀² + V₀²)^{1/2} = [U₀(p_ξ ξ_x + p_η η_x) + V₀(p_ξ ξ_y + p_η η_y)] / (U₀² + V₀²)^{1/2}   (32)

The pressure-gradient algorithm is structured to permit the use of any blocked free-surface grid arrangement. Also, using less than a five-point backward difference tends to dampen wave amplitudes. This wave-damping mechanism is employed on panels near the outer boundary of the finite free-surface model, so that wave disturbances are eliminated before reaching the free-surface outer boundary. At this point, a sufficient number of linear dependencies have been established to permit the elimination of the unknown free-surface source strengths in (26), i.e., (24) relates source strength to panel normal velocity, (31) relates free-surface panel normal velocity to streamwise pressure gradient, (32) with backward differences relates streamwise pressure gradient to free-surface pressures, (29) relates free-surface pressure to free-surface panel tangential velocities, (25) relates panel tangential velocities to the local surface gradient of doublet strength, and central differences relate the local surface gradient of doublet strength to doublet strengths. Hence, free-surface source strengths can be expressed as a linear combination of free-surface doublet strengths,

σ_j = a_j + Σ_{S_w} b_jk μ_k   (33)

Substituting for σ_j from (33) into (26) yields

φ_i = Σ_{S_b+S_w} A_ij μ_j + Σ_{S_b} B_ij σ_j + Σ_{S_w} B_ij (a_j + Σ_{S_w} b_jk μ_k) = 0   (34)

With free-surface source strengths eliminated, and source strengths on the solid body evaluated directly, solution of (34) yields the corresponding doublet strengths. The free-surface source strengths are then given by (33), and (24) and (25) are used to compute the resulting velocities on both body and free-surface panels. Pressures on free-surface panels are given by (29). A similar linearized formula is used for pressures acting on body panels, and configuration forces and moments are obtained by panel pressure integration. For interactive calculations, the
SPLASH code calculates the inviscid free-surface flow about the equivalent displacement body resulting from the previous viscous calculation. For this purpose, the equivalent displacement body is
treated as a solid fixed surface. The inviscid flow velocities required for the next viscous flow calculation, at off-body points on the viscous grid outer boundary SO, are obtained using the
computed source-doublet solution and velocity influence coefficients. A sub-panel velocity influence-coefficient algorithm was developed which utilizes a bilinear variation of source and doublet
strength across each panel. The continuous variation of source and doublet strength on each panel, and across panel edges, enhances the accuracy of off-body velocity calculations at points close to
any body and/or free-surface panels.

WIGLEY HULL GEOMETRY AND EXPERIMENTAL INFORMATION

The Wigley parabolic hull was selected for the initial calculations since the geometry is relatively simple and it has been used in many previous computational and experimental studies. In particular, it is one of the two hulls, the other being the Series 60 C_B = .6 ship model, selected by the Cooperative Experimental Program (CEP) of the Resistance and Flow Committee of the International Towing Tank Conference [27] for which extensive global (total, wave pattern, and viscous resistance, mean sinkage and trim, and wave profiles on the hull) and local (hull pressure and wall-shear-stress distributions and velocity and turbulence fields) measurements were reported. It was for these same
reasons that the Wigley hull was selected as the first test case of the basic viscous-flow method [14], including comparisons with some of the zero Fr data of the CEP. Herein, comparisons are made for zero Fr with this same data and for nonzero Fr with the appropriate data of the CEP. As will be shown later, the nonzero Fr data is not as complete or of the same quality as that for zero Fr, which was the motivation for a related experimental study for the Series 60 C_B = .6 ship model [28] for which calculations and comparisons are in progress. However, the comparisons are still useful in order to validate the present interactive approach and display the shortcomings of both the computations and experiments. The coordinates of the Wigley hull are given by

y = (B/2) · 4x(1 − x) · [1 − (z/d)²]   (35)

where B = .1 and d = .0625. Waterplane and typical crossplane views are shown in figure 2.

RESULTS

In the following, first, the computational grids (figures 2 and 3) and conditions are
described. Then, some example results are presented and discussed for zero Fr, followed by those for nonzero Fr, including wherever possible comparisons with available experimental data, and, in the latter case, with inviscid-flow results. The convergence history of the pressure is shown in figure 4. Figure 5 provides a comparison of the large-domain and interactive solutions. The free-surface perspective view and contours, wave profile, and surface-pressure profiles and contours are shown in figures 6 through 10, respectively. The axial-velocity contours, crossplane-velocity vectors, and pressure, axial-vorticity, and turbulent kinetic energy contours for several representative stations are shown in figures 11 through 13. Lastly, the velocity, pressure, and turbulent
kinetic energy profiles for similar stations are shown in figures 14 through 16. On the figures and in the discussions, the terminology "interactive" refers to results from both the interactive viscous and displacement-body inviscid solutions. When the distinction is not obvious it will be made. The terminology "inviscid" or "bare-body" refers to the noninteractive inviscid solution.
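The Wigley offsets of (35) are simple to tabulate; a minimal sketch (it assumes the nondimensional form y = (B/2)·4x(1−x)·[1−(z/d)²] with B = .1 and d = .0625, all lengths scaled by ship length):

```python
# Sketch: Wigley parabolic-hull offsets, y(x, z) = (B/2)*4x(1-x)*(1-(z/d)^2),
# with beam B = 0.1 and draft d = 0.0625 (nondimensionalized by ship length).

B, d = 0.1, 0.0625

def half_beam(x, z):
    # x in [0, 1] along the waterline, z in [0, d] below the waterplane
    return (B / 2.0) * 4.0 * x * (1.0 - x) * (1.0 - (z / d) ** 2)

# Maximum half-beam occurs at midships on the waterplane:
print(half_beam(0.5, 0.0))   # 0.05 = B/2
# Offsets vanish at the bow, the stern, and the keel:
print(half_beam(0.0, 0.03), half_beam(1.0, 0.0), half_beam(0.4, d))
```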
Computational Grids and Conditions

The viscous-flow computational grid was obtained using the technique of generating body-fitted coordinates through the solution of elliptic partial differential equations. Because of the simplicity of the present geometry, it is possible to specify the axial f₁ and circumferential f₃ control functions as, respectively, only functions of ξ and ζ; however, in order to accurately satisfy the body-surface boundary condition and resolve the viscous flow, f₂ = f₂(ξ,η,ζ). Partial views of the grids used in the calculations are shown in figures 2a,b for a longitudinal plane and typical body and wake crossplanes, respectively. Initially, a large-domain grid was generated. Subsequently, a small-domain grid was obtained by simply deleting that portion of the large-domain grid that lay beyond about r > .2. The outer boundary for the small-domain grid is shown by the dashed line in figure 2. For the large-domain grid, the inlet, exit, and outer boundaries are located at x = (.296, 4.524) and r = 1, respectively. The first grid point off the body surface is located in the range 90 < y⁺ < 250. 50 axial, 30 radial, and 15 circumferential grid points were used. As already indicated, the small-domain grid was similar, except 21 radial grid points were used. In summary, the total number of grid points for the large- and small-domain
calculations are 22,500 and 15,750, respectively. The inviscid-flow displacement-body and free-surface panelization is shown in figure 3. 423 panels are distributed over the displacement body and 546 over the free surface for a total number of 969 panels. The panelization covers an area corresponding to 1 ship length upstream of the bow, 1.5 ship lengths in the transverse direction, and 3 ship lengths downstream of the stern. This panel arrangement was judged optimum based on panelization dependency tests [16]. The conditions for the calculations are as follows: L = 1; U₀ = 1; Re = 4.5 × 10⁶; Fr = 0 and .316; and on the inlet plane the average values for ~ and Us are .0033 and .0455, respectively. These conditions were selected to correspond as closely as possible to those of the experiments of the CEP with which comparisons will be made [5,29,30]. Initially, large-domain calculations were performed for zero Fr. A zero-pressure initial condition was used and the values for the time αt, pressure αp, and transport-quantity αφ (where φ = k and ε) underrelaxation factors and total number of global iterations were .05 and 200, respectively. Next, small-domain calculations
were performed, first for zero Fr, and then for nonzero Fr. For zero Fr, the interaction calculations were started with a zero-pressure initial condition and freestream edge conditions (Ue = 1, We = Pe = 0). After 200 global iterations, the edge conditions were updated using the latest values of displacement thickness. Subsequently, the edge conditions were updated every 200 global iterations until convergence was achieved, which took three updates. For nonzero Fr, the calculations were started with the zero Fr solution as the initial condition and with nonzero Fr edge conditions obtained utilizing the zero Fr displacement body. This solution converged in 200 global iterations. Most of the results to be presented are for this case; however, some limited results will be shown in which the nonzero Fr edge conditions were obtained using an updated nonzero Fr displacement body. The values for αt, αp, and αφ (where φ = k and ε) used for the small-domain calculations were the same as those for the large-domain calculations; however, for nonzero Fr, in addition, a value of .01 was used for αφ (where φ = U) for grid nodes near the outer boundary. The z-derivative term in (19) was found to have a small influence and was neglected in many of the calculations; however, this may be due, in part, to the present grid resolution. The calculations were performed on the Naval Research Laboratory CRAY XMP-24 supercomputer. The CPU time required for the calculations was about 17 minutes for 200 global iterations for the viscous-flow code and 1 minute for the inviscid-flow code.
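The underrelaxed global-iteration bookkeeping can be sketched on a toy fixed-point problem. This is an illustrative stand-in, not the RANS solver: the relaxation factor mimics the paper's α = .05, the stopping test mirrors the .05% relative-change criterion quoted below, and the residual follows the form of (36):

```python
# Sketch: underrelaxed fixed-point iteration with a relative-change residual.
# Toy stand-in for the global iterations of the viscous solver.

alpha = 0.05          # underrelaxation factor, as in the paper
tol = 5e-4            # 0.05% relative-change convergence criterion

def solve_star(p):
    # Hypothetical "new iterate" operator with fixed point p_i = 1 for all i
    return [0.5 * pi + 0.5 for pi in p]

p = [0.0] * 10        # zero initial condition, as for the zero-Fr start
for it in range(1, 5001):
    p_old = p
    p_star = solve_star(p_old)
    # Underrelaxed update: blend the new iterate with the previous field
    p = [po + alpha * (ps - po) for po, ps in zip(p_old, p_star)]
    # Residual in the form of (36): sum|p^(it-1) - p^(it)| / sum|p^(it)|
    R = sum(abs(a - b) for a, b in zip(p_old, p)) / sum(abs(b) for b in p)
    if R < tol:
        break
print(it, round(p[0], 4))
```

The actual solver carries separate factors for the time, pressure, and transport-quantity updates; this sketch only illustrates the update and stopping logic.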
Extensive grid dependency and convergence checks were not carried out since these had been done previously both for the basic viscous-flow method [14] and for other applications. However, some calculations were performed using both coarser and finer grids. These converged, respectively, more rapidly and slower than the present solution. Qualitatively the solutions were very similar to the present one, but with reduced and somewhat increased resolution, respectively. The convergence criterion was that the change in solution be less than about .05% for all variables. Usually the solutions were carried out at least 50 global iterations beyond meeting this criterion. Figure 4 provides the convergence history for the pressure and is typical of the results for all the variables. In figure 4, the abscissa is the global iteration number it and the ordinate is the residual R(it), which is defined as follows:

R(it) = Σ_{i=1}^{imax} |p_i^{it−1} − p_i^{it}| / Σ_{i=1}^{imax} |p_i^{it}|   (36)

where it and imax are the global iteration number and total number of grid points, respectively. Referring to figure 4, global iterations 1-200 correspond to the final iterations of the zero Fr solution and global iterations 200-400 to those for the nonzero Fr solution.

Zero Fr

Figure 5 provides a comparison of the zero Fr large-domain and interactive solutions and experimental data. The two solutions are
nearly identical and show good agreement with the data, which validates the present interactive approach. The agreement with the data for the large-domain case is, of course, not surprising since this was already established in [14] for a similar grid and conditions, i.e., the present zero Fr solution is essentially the same as that of [14]. Some additional aspects of the zero Fr solution are displayed in figures 11 through 15 for later comparison with the nonzero Fr solution. Reference [14] provides detailed discussion of the zero Fr solution, including comparisons with the available experimental data. In summary, there is a downward flow on the forebody and an upward flow on the afterbody in response to the external-flow pressure gradients. The boundary layer and wake remain thin and attached and the viscous-inviscid interaction is weak;
however, on the forebody, the boundary layer is relatively thicker near the keel than the waterplane, whereas the reverse holds true on the afterbody and in the near wake. The stern vortex is very weak. In the intermediate and far wake, the flow becomes axisymmetric. As indicated in figures 5 and 14 through 16, the agreement between the calculations and data is quite good; however, there are some important differences, which are primarily attributed to the deficiencies of the standard k-ε turbulence model with wall functions. In particular, the axial velocity and turbulent kinetic energy are overpredicted near the stern and there is a more rapid recovery in the wake.

Nonzero Fr

Figure 5 also includes nonzero Fr results for comparison. On the waterplane, the surface and wake
centerplane pressure displays very dramatic differences, the wall-shear velocity shows similar trends, but with reduced magnitude, and the wake centerplane velocity indicates faster recovery in the
intermediate and far wake. As will be shown later, the first closely follows the wave profile, the second is due to an increase in boundary-layer thickness near the waterplane for the nonzero Fr
case, and the third can be explained by the wave-induced pressure gradients. On the keel, all three of these quantities are nearly the same as for zero Fr. The free-surface perspective views (figure
6) and contours (figure 7) vividly display the complex wave pattern consisting of both diverging and transverse wave systems. The bow and stern wave systems are seen to initiate with crests and the
shoulder systems initiate with troughs, which conforms to the usual pattern described for this type of hull form. Very apparent is the reduced amplitude of the stern waves for the interactive as compared to the inviscid solution. Also, the diverging wave system is more pronounced and at a smaller angle with respect to the centerplane. Note that the axial and transverse wave-induced pressure
gradients can be discerned from these figures, but with an appropriate phase shift, i.e., increasing and decreasing wave elevations imply, respectively, adverse and favorable gradients. The wave
profile along the hull is shown in figure 8, which, in this case, includes experimental data for comparison. On the forebody, the two solutions are nearly identical and underpredict the amplitude of
the bow-wave crest and the first trough. On the afterbody, the interactive solution indicates larger values than the inviscid solution, with the data in between the two. The wave profile for the
nonzero Fr displacement body (figure 3b) is also shown in figure 8. The differences are minimal on the forebody, whereas, they are significant on the afterbody and depart from the data. It appears
that the present simple definition (1) is insufficient for "wavy" displacement bodies. The surface-pressure profiles (figure 9) show similar tendencies as just discussed with regard to the wave
profile. On the forebody, the two solutions are nearly identical, but, in this case, in very close agreement with the data. The pressure on the forebody shown by the dashed line is that obtained from
the inviscid displacement-body solution. On the afterbody, here again, the interactive solution indicates larger values than the inviscid solution, with the data in between the two. The
wave-induced effects are seen to diminish with increasing depth and the agreement between the two solutions and the data on the afterbody shows improvement. The surface-pressure contours (figure 10)
graphically display the differences between the two solutions and the data. Note that the axial and vertical surface-pressure gradients can be discerned from these figures, i.e., increasing and
decreasing pressure imply, respectively, adverse and favorable gradients. The larger wave elevation and pressure on the afterbody for the interactive solution results in the closed contours near the
stern displayed in figure 10b. As already mentioned, the viscous-inviscid interaction is weak for the Wigley hull, which is the reason that the inviscid and viscous pressure distributions are quite
similar. However, it appears that the interaction is greater for nonzero as compared to zero Fr. Figures 11 through 13 show the detailed results for several representative stations, i.e., x = .506,
.904, and 1.112, although the discussion to follow is based on the complete results at all stations. Note that for zero Fr the upper boundary shown is the waterplane, whereas for nonzero Fr, it is
the predicted free surface. Also, the axial-velocity, -vorticity, and turbulent kinetic energy contours are not shown for the inviscid solution since, in the former case, their values are all very
close to 1 and, in the latter two cases, they are, of course, zero. Solid curves indicate clockwise vorticity. On the forebody (figure 11), the boundary layer is thin such that many aspects of the
solutions are similar; however, there are some important differences. The nonzero Fr pressure fields show local and global effects of the free surface, i.e., near the free surface, regions of high
and low pressure coincide with wave crests and troughs, respectively, and at larger depths, the contours are parallel to the free surface. Also, for nonzero Fr, the crossplane-velocity vectors are
considerably larger, especially for the interactive solution. The inviscid solution clearly lacks detail near the hull surface. The extent of the axial vorticity is increased for nonzero Fr and is
locally influenced by the free surface. In both cases, as expected, the direction of rotation is mostly anticlockwise. On the afterbody (figure 12), almost all aspects of the solutions show
significant differences. The boundary layer is thicker near the waterplane for nonzero as compared to zero Fr. This behavior begins at x ~ .825, which coincides with a region of adverse axial
wave-induced pressure gradient (see figure 7). The differences for the pressure field and axial-vorticity contours are similar as described for the forebody; however, in the case of the
crossplane-velocity vectors, there is an additional difference that near the free surface the interactive solution displays downward flow. This is consistent with the fact that the free-surface
elevation is above the waterplane and the pressure is generally higher near the free surface than it is at larger depths, i.e., η > 0 and ∂p/∂z < 0. Note that, as expected, in both cases, the
direction of rotation for the axial-vorticity is mostly clockwise. The turbulent kinetic energy contours are nearly the same for both Fr. In the wake (figure 13), the solutions continue to show
significant differences. Initially, the low-velocity region diffuses somewhat and covers a larger depthwise region; then, for x > 1.2, recovers quite rapidly. A similar behavior was noted earlier for
the wake centerline velocity for x ~ 1.2, both of which, as already mentioned, are consistent with the wave pattern. The zero Fr pressure field is nearly axisymmetric and fully
recovered by the exit plane. The nonzero Fr pressure field continues to show free-surface effects, i.e., the contours are parallel to the free surface, but also fully recovered by the exit plane.
Note the considerably larger wave elevation near the wake centerplane for the inviscid as compared to the interactive solution, which was pointed out earlier with regard to figures 6 and 7. Here
again, the crossplane-velocity vectors are larger for nonzero as compared to zero Fr, especially near the wake centerplane for the interactive solution. The interactive and inviscid solutions display
differences near the free surface, which appear to be consistent with the differences in their predicted wave patterns. The zero Fr axial vorticity decays fairly rapidly, whereas, for nonzero Fr, the
decay is slow with a layer of nonzero vorticity persisting near the free surface all the way to the exit plane. The turbulent kinetic energy contours are similar for both Fr, but recover faster for
the nonzero case. Figures 14 through 16 show the velocity, pressure, and turbulent kinetic energy profiles for similar stations as for figures 11 through 13, i.e., x = .5, .9, and 1.1. Also, included
are both zero and nonzero Fr experimental data. At the largest two depths, z = .05 and .0625, data for both Fr are available, whereas, at the waterplane, z = 0, only zero Fr data are available. At
the intermediate depths, data are available for both Fr, but for different z values. Since the interest here is primarily nonzero Fr and the zero Fr data and comparisons were already displayed in
[14], only nonzero Fr data are shown for z = .0125, .025, and .0375. For zero Fr, a corrected pressure is also shown which includes a constant (= -.03) reference-pressure correction as described in
[14]. Turbulent kinetic energy data are only available for zero Fr. At x = .5, consistent with previous discussions, the differences between the two solutions are quite small and the agreement with
the zero Fr data is good. However, the nonzero Fr data show some unexpected differences. In particular, the axial-velocity profile has a laminar appearance and the boundary-layer thickness is
relatively large, the vertical velocity is upward, and the pressure shows considerable scatter. It is pointed out in [5] that the pressure-measurement error was appreciable. At x = .9 and 1.1, here
again, consistent with previous discussions the differences between the two solutions are significant and the agreement between the zero Fr solution and data is good, except for the aforementioned
discrepancies. The nonzero Fr solution shows larger axial velocities than the measurements for the inner part of the profiles. Here again, the measured profiles have a laminar appearance and the
boundary layer is thick. However, no doubt, a part of the difference is due to the calculations, i.e., as is the case for zero Fr, due to deficiencies of the k-ε turbulence model, an overprediction of
the velocity near the wall and wake centerplane is expected. The transverse velocity is small and with similar trends for both calculations and measurements. The calculations indicate downward
vertical velocities near the free surface and upward values for the midgirth region and near the keel. The agreement with the data near the keel is satisfactory, but in the midgirth region and near
the free surface the data display greater upward flow than the calculations. In the wake, the nonzero Fr data show surprisingly small vertical velocities near the wake centerplane. Here again, the
nonzero Fr pressure data shows considerable scatter and is difficult to compare with the calculations. Consistent with earlier discussions the turbulent kinetic energy profiles are nearly the same
for both Fr. Lastly, Table 1 provides a comparison of the calculated pressure-resistance coefficient and experimental values of the residuary-resistance (i.e., total - frictional) coefficient. The
experimental values cover a range of Re, including the present value, and clearly show a dependency on Re. Interestingly, the inviscid result compares well with the data at the highest Re, whereas
the interactive result is close to that implied by the data at the present Re.

WAVE-BOUNDARY LAYER AND WAKE INTERACTION

The comparisons of the zero and nonzero Fr interactive and inviscid-flow
results with experimental data enables an evaluation of the wave-boundary layer and wake interaction. Very significant differences are observed between the zero and nonzero Fr interactive results due
to the presence of the free surface and gravity waves. In fact, the flow field is completely altered. Most of the differences were explicable in terms of the differences between the zero and nonzero
Fr surface-pressure distributions and, in the latter case, the additional pressure gradients at the free surface associated with the wave pattern. The viscous-inviscid interaction appears to be
greater for nonzero as compared to zero Fr. It should be mentioned that other factors undoubtedly have important influences, e.g., wave-induced separation, which are not included in the present
theory. The interactive and inviscid nonzero Fr solutions also indicate very significant differences. The inviscid solution clearly lacks "real-fluid effects." The viscous flow close to the hull and
wake centerplane is clearly not accurately resolved. The interactive solution shows an increased response to pressure gradients as compared to the inviscid solution, especially in regions of low velocity. Also, the inviscid solution overpredicts the pressure recovery at the stern and the stern-wave amplitudes.

CONCLUDING REMARKS

The present work demonstrates for the first time the
feasibility of an interactive approach for calculating ship boundary layers and wakes for nonzero Fr. The results presented for the Wigley hull are very encouraging. In fact, in many respects, the
present results appear to be superior to the only other solutions of this type available, i.e., [10,11]. This is true both with regard to the resolution of the boundary-layer and wake regions and the wave field. Furthermore, it appears that the present interactive approach is considerably more computationally efficient than the large-domain approaches of [10,11]. This is consistent with the previous finding for zero Fr [15]. However, a complete evaluation of the present method was not possible. In the former case, due to the limited available experimental data. As mentioned earlier, a related experimental study for the Series 60 C_B = .6 ship model [28] was recently completed for which extensive measurements were made at both low and high Fr for which calculations and comparisons are in progress. In the latter case, due to the considerable differences in numerical techniques and algorithms and turbulence models between the present methods and those of
[10,11]. As mentioned earlier, the pursuit of a large-domain approach to the present problem is also of interest and will enable such an evaluation. Finally, some of the issues that need to be addressed while further developing and validating the present approach are as follows: further assessment of the most appropriate free-surface boundary conditions; improved definition and construction of displacement bodies; the inclusion and resolution of the bow-flow region; extensions for lifting flow; and the ever present problem of grid generation and turbulence modeling. Also, of interest is the inclusion of nonlinear effects in the inviscid-flow code.

ACKNOWLEDGEMENTS

This research was sponsored by the Office of Naval Research under Contract N00014-88-K-0113 under the administration of Dr. E.P. Rood, whose support and helpful technical discussions are greatly appreciated.

REFERENCES

1. Lindenmuth, W., Ratcliffe, T.J., and Reed, A.M., "Comparative Accuracy of Numerical Kelvin Wake Code Predictions - "Wake-Off"," DTRC/SHD-1260-1, 1988.
2. Rosen, B., "SPLASH Free-Surface Code: Theoretical/Numerical Formulation," South Bay Simulations Inc., Babylon, NY, 1989 (proprietary report).
3. Patel, V.C., "Ship Stern and Wake Flows: Status of Experiment and Theory," Proc. 17th Office of Naval Research Symposium on Naval Hydrodynamics, The Hague, 1988, pp. 217-240.
4. Proc. 5th International Conference on Numerical Ship Hydrodynamics, Hiroshima, 1989.
5. Shahshahan, A., "Effects of Viscosity on Wavemaking Resistance of a Ship Model," Ph.D. Thesis, The University of Iowa, Iowa City, IA, 1985.
6. Ikehata, M. and Tahara, Y., "Influence of Boundary Layer and Wake on Free Surface Flow around a Ship Model," J. Society of Naval Architects of Japan, Vol. 161, 1987, pp. 49-57 (in Japanese).
7. Stern, F., "Effects of Waves on the Boundary Layer of a Surface-Piercing Body," J. of Ship Research, Vol. 30, No. 4, 1986, pp. 256-274.
8. Stern, F., Hwang, W.S., and Jaw, S.Y., "Effects of Waves on the Boundary Layer of a Surface-Piercing Flat Plate: Experiment and Theory," J. of Ship Research, Vol. 33, No. 1, 1989, pp. 63-80.
9. Stern, F., "Influence of Waves on the Boundary Layer of a Surface-Piercing Body," Proc. 4th International Conference on Numerical Ship Hydrodynamics, Washington, D.C., 1985, pp. 383-406.
10. Miyata, H., Sato, T., and Baba, N., "Difference Solution of a Viscous Flow with Free-Surface Wave about an Advancing Ship," J. of Computational Physics, Vol. 72, No. 2, 1987, pp. 393-421.
11. Hino, T., "Computation of a Free Surface Flow around an Advancing Ship by the Navier-Stokes Equations," Proc. 5th International Conference on Numerical Ship Hydrodynamics, Hiroshima, 1989.
12. Harlow, F.H. and Welch, J.E., "Numerical Calculation of Time-Dependent Viscous Flow of a Fluid with Free Surface," The Physics of Fluids, Vol. 8, 1965, pp. 2182-2189.
13. Chan, R.K.C. and Street, R.L., "A Computer Study of Finite-Amplitude Water Waves," J. of Computational Physics, Vol. 6, 1970, pp. 68-94.
14. Patel, V.C., Chen, H.C., and Ju, S., "Ship Stern and Wake Flows: Solutions of the Fully-Elliptic Reynolds-Averaged Navier-Stokes Equations and Comparisons with Experiments," Iowa Institute of Hydraulic Research, The University of Iowa, IIHR Report No. 323, 1988; also J. of Computational Physics, Vol. 88, No. 2, June 1990, pp. 305-336.
15. Stern, F., Yoo, S.Y., and Patel, V.C., "Interactive and Large Domain Solutions of Higher-Order Viscous-Flow Equations," AIAA Journal, Vol. 26, No. 9, 1988, pp. 1052-1060.
16. Longo, J., "Scale Effects on Near-Field Wave Patterns," M.S. Thesis, The University of Iowa, Iowa City, IA, 1990.
17. Black, R., "Definition of Three-Dimensional Displacement Thickness Appropriate for Ship Boundary Layers and Wakes," M.S. Thesis, The University of Iowa, Iowa City, IA, expected 1991.
18. Hotta, T. and Hatano, S., "Turbulence Measurements in the Wake of a Tanker Model on and under the Free Surface," Fall Meeting of the Society of Naval Architects of Japan, 1983.
19. Rodi, W., "Turbulence Models and Their Application in Hydraulics," Presented at the IAHR Section on Fundamentals of Division II: Experimental and Mathematical Fluid Dynamics, 1980.
20. Swean, T.F. and Peltzer, R.D., "Free Surface Effects on the Wake of a Flat Plate," NRL Memo Report 5426, Naval Research Laboratory, Washington, D.C., 1984.
21. Ramberg, S.E., Swean, T.F., and Plesniak, M.W., "Turbulence Near a Free Surface in a Plane Jet," NRL Memo Report 6367, Naval Research Laboratory, Washington, D.C., 1989.
22. Maskew, B., "Prediction of Subsonic Aerodynamic Characteristics: A Case for Low-Order Panel Methods," Journal of Aircraft, Vol. 19, No. 2, 1982, pp. 157-163.
OCR for page 699
23. Maskew, B., "A Computer Program for Calculating the Non-Linear Aerodynamic Characteristics of Arbitrary Configurations," NASA CR-166476, 1982.
24. Boppe, C.W., Rosen, B.S., Laiosa, J.P., and Chance, B., Jr., "Stars & Stripes '87: Computational Flow Simulations for Hydrodynamic Design," The Eighth Chesapeake Sailing Yacht Symposium, Annapolis, MD, 1987.
25. Stern, F., "Comparison of Computational and Experimental Unsteady Cavitation on a Pitching Foil," ASME J. Fluids Eng., Vol. 111, 1989, pp. 290-299.
26. Dawson, C.W., "A Practical Computer Method for Solving Ship-Wave Problems," Proc. 2nd International Conference on Numerical Ship Hydrodynamics, Berkeley, CA, 1977, pp. 30-38.
27. "Report of the Resistance and Flow Committee," Proc. 18th Int. Towing Tank Conf., Kobe, Japan, 1987, pp. 47-92.
28. Toda, Y., Stern, F., and Longo, J., "Mean-Flow Measurements in the Boundary Layer and Wake and Wave Field of a Series 60 CB = .6 Ship Model for Froude Numbers .16 and .316," Iowa Institute of Hydraulic Research, The University of Iowa, IIHR Report No. xxx, 1990 (in preparation).
29. Sarda, O.P., "Turbulent Flow Past Ship Hulls - An Experimental and Computational Study," Ph.D. Thesis, The University of Iowa, Iowa City, IA, 1986.
30. Kajitani, H., Miyata, H., Ikehata, M., Tanaka, H., Adachi, H., Namimatsu, M., and Ogiwara, S., "The Summary of the Cooperative Experiment on Wigley Parabolic Model in Japan," Proc. 2nd DTNSRDC Workshop on Ship Wave-Resistance Computations, 1983, pp. 5-35.

Table 1. Residuary-Resistance Coefficients

                  L(m)   T(°C)   U0(m/s)   Fr       Re           CR
Experiment IHI    6      12.8    2.423     0.316    11.9 x 10^6  1.803 x 10^-3
Experiment SRI    4      10.6    1.978     0.316    6.14 x 10^6  1.998 x 10^-3
Experiment UT     2.5    17.3    1.564     0.316    3.6 x 10^6   1.866 x 10^-3
Inviscid          --     --      --        ~0.316   --           1.79 x 10^-3
Interactive       --     --      --        ~0.316   4.5 x 10^6   1.92 x 10^-3
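The Froude and Reynolds numbers in Table 1 can be cross-checked from the tabulated model length and speed. A short sketch follows; the kinematic viscosity used is an assumed textbook value for fresh water near the listed temperature, not a figure from the paper.

```python
import math

def froude(u, length, g=9.81):
    """Froude number Fr = U / sqrt(g * L)."""
    return u / math.sqrt(g * length)

def reynolds(u, length, nu):
    """Reynolds number Re = U * L / nu."""
    return u * length / nu

# IHI model from Table 1: L = 6 m, U0 = 2.423 m/s, T = 12.8 degC.
# nu ~ 1.22e-6 m^2/s is an assumed value for fresh water at ~13 degC.
fr = froude(2.423, 6.0)
re = reynolds(2.423, 6.0, 1.22e-6)
print(f"Fr = {fr:.3f}")   # close to the tabulated 0.316
print(f"Re = {re:.3g}")   # close to the tabulated 11.9e6
```

Both values reproduce the tabulated entries to within rounding, which supports the reading of the flattened exponents as 10^6 and 10^-3.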
[Figures 1-5: plots lost in OCR; captions below.]
Figure 1. Definition sketch of flow-field regions and solution domains: (a) (x,y) plane; (b) (y,z) plane.
Figure 2. Computational grid: (a) longitudinal plane; (b) body and wake crossplanes.
Figure 3. Displacement bodies: (a) Fr = 0; (b) Fr = .316.
Figure 4. Convergence history.
Figure 5. Comparison of interactive and large-domain solutions for the waterplane: (a) surface and wake centerplane pressure; (b) wall-shear velocity; and (c) wake centerplane velocity.
[Figures 6-10: plots lost in OCR; captions below.]
Figure 6. Free-surface perspective view: (a) interactive; (b) inviscid.
Figure 7. Free-surface contours: (a) interactive; (b) inviscid.
Figure 8. Wave profile.
Figure 9. Surface-pressure profiles: (a) z/d = .04; (b) z/d = .92.
Figure 10. Surface-pressure contours: (a) experiment; (c) inviscid.
[Figure 11: plots lost in OCR; caption below.]
Figure 11. Comparison of solutions at x = .506: (a) axial-velocity contours; (b) pressure contours; (c) crossplane-velocity vectors; (d) axial-vorticity contours; and (e) turbulent kinetic energy contours.
[Figure 12: plots lost in OCR; caption reconstructed from the parallel captions of Figures 11 and 13.]
Figure 12. Comparison of solutions at x = .9: (a) axial-velocity contours; (b) pressure contours; (c) crossplane-velocity vectors; (d) axial-vorticity contours; and (e) turbulent kinetic energy contours.
[Figure 13: plots lost in OCR; caption below.]
Figure 13. Comparison of solutions for x = 1.112: (a) axial-velocity contours; (b) pressure contours; (c) crossplane-velocity vectors; (d) axial-vorticity contours; and (e) turbulent kinetic energy contours.
[Figure 14: plots lost in OCR; caption and legend below.]
Figure 14. Velocity, pressure, and turbulent kinetic energy profiles at x = .5, shown at z = 0.0000 to 0.0625 in steps of 0.0125. Legend: CAL. [Fr=0.316]; CAL. [Fr=0]; EXP.-Sarda [Fr=0]; EXP.-Sarda (corrected); EXP.-Ali S. [Fr=0.313].
[Figure 15: plots lost in OCR; caption and legend below.]
Figure 15. Velocity, pressure, and turbulent kinetic energy profiles at x = .9, shown at z = 0.0000 to 0.0625 in steps of 0.0125. Legend: CAL. [Fr=0.316]; CAL. [Fr=0]; EXP.-Sarda [Fr=0]; EXP.-Sarda (corrected); EXP.-Ali S. [Fr=0.313].
[Figure 16: plots lost in OCR; caption and legend below.]
Figure 16. Velocity, pressure, and turbulent kinetic energy profiles at x = 1.1, shown at z = 0.0000 to 0.0625 in steps of 0.0125. Legend: CAL. [Fr=0.316]; CAL. [Fr=0]; EXP.-Sarda [Fr=0]; EXP.-Sarda (corrected); EXP.-Ali S. [Fr=0.313].
DISCUSSION
Kuniharu Nakatake, Kyushu University, Japan

I appreciate your interesting paper. In your calculation, the flow field and pressure distribution around the wave surface are not taken into account. If you distribute source panels on the calculated wave surface and determine their strength from the kinematic condition on the wave surface, you may obtain more plausible results. This is possible in the linearized framework.

AUTHORS' REPLY
We agree that a higher-order treatment of the free-surface boundary conditions would be preferable. We hope to make progress in this area in the future.

DISCUSSION
Hoyte Raven, Maritime Research Institute Netherlands, The Netherlands

This valuable paper addresses the difficult problem of prescribing free-surface boundary conditions inside the viscous domain. The authors solve the wave elevation from integration of the kinematic condition, and thus from the velocities at the undisturbed free surface. Prescribing boundary conditions for these velocities here is, therefore, critical. If I understand it correctly, in (20) not only the stresses but also the associated strain rates have been neglected, which are of leading order in the wake elevation. This is not a definite solution, of course. What alternatives for the free-surface condition do you consider?

AUTHORS' REPLY
The approximations (a) through (d) utilized in deriving equations (18) through (20) have been clearly stated in the text, and therefore do not require further explanation. An alternative treatment of the free-surface boundary conditions within the present overall framework is to retain more terms in the approximations (a) through (c) and higher-order terms in approximation (d).

DISCUSSION
William B. Morgan, David Taylor Research Center, USA

In the authors' presentation of their problem they stated that they were using an "unsteady RANS code using a k-ε turbulence model." I believe this statement is inconsistent in that a RANS code with a k-ε model cannot be unsteady. I can understand attempting to use this code in a "quasi-steady" sense, but I believe it is not applicable to the unsteady problem. Would the authors please comment?

AUTHORS' REPLY
Although not discussed in the present paper, we recognize the limitations of the k-ε turbulence model with regard to simulating unsteady flow. Issues concerning this point were discussed in one of our earlier papers on another topic [31]. In our presentation, we only wished to emphasize the general capability of the IIHR basic viscous-flow method for unsteady flow notwithstanding the limitations of current turbulence models for such applications.

[31] Stern, F., Kim, H.T., Patel, V.C., and Chen, H.C., "A Viscous-Flow Approach to the Computation of Propeller-Hull Interaction," Journal of Ship Research, Vol. 32, No. 4, December 1988, pp. 246-262.

DISCUSSION
Kazu-hiro Mori, Hiroshima University, Japan

1. It must be important to be of the same order in approximations when the viscous effects are taken into account in the free-surface computation. From this standpoint, the use of the displacement-thickness method is not consistent where the thickness is calculated exactly. This may be crucial when the 3-D separation is dominant. My suggestion is that the viscous flow should be taken into account as the double-model flow directly in the inviscid computation.
2. According to our computation and experiment, the separation of the flow at the stern is much affected by the bow wave system. This means that we should not expect precise discussions on the interaction between the viscosity and the free surface by the iterative procedure as done in the present study.

AUTHORS' REPLY
We are not clear as to the precise meaning of your questions; however, we would like to point out that viscous effects have been included directly in the inviscid-flow computation through the use of the displacement body. As discussed in the text, the limitations of this approach are not yet known. The present results are encouraging, but a complete evaluation requires further validation through comparisons with experimental data and a large-domain approach. Work along these lines is in progress. We thank both the oral and written discussers of our paper for their pertinent remarks.
Highway hierarchies star
Results 1 - 10 of 15
- IN: WORKSHOP ON ALGORITHM ENGINEERING AND EXPERIMENTS (ALENEX), 2008
Cited by 28 (15 self)
During the last years, impressive speed-up techniques for Dijkstra's algorithm have been developed. Unfortunately, the most advanced techniques use bidirectional search which makes it hard to use them in scenarios where a backward search is prohibited. Even worse, such scenarios are widely spread, e.g., timetable-information systems or time-dependent networks. In this work, we present a unidirectional speed-up technique which competes with bidirectional approaches. Moreover, we show how to exploit the advantage of unidirectional routing for fast exact queries in timetable information systems and for fast approximative queries in time-dependent scenarios. By running experiments on several inputs other than road networks, we show that our approach is very robust to the
- PROCEEDINGS OF THE 7TH WORKSHOP ON EXPERIMENTAL ALGORITHMS (WEA'08), VOLUME 5038 OF LECTURE NOTES IN COMPUTER SCIENCE, 2008
Cited by 24 (11 self)
In recent years, highly effective hierarchical and goal-directed speedup techniques for routing in large road networks have been developed. This paper makes a systematic study of combinations of such
techniques. These combinations turn out to give the best results in many scenarios, including graphs for unit disk graphs, grid networks, and time-expanded timetables. Besides these quantitative
results, we obtain general insights for successful combinations.
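Several of the entries in this listing measure their speed-ups against plain Dijkstra. As a point of reference, here is a minimal baseline sketch; the example graph is illustrative and not taken from any of the papers.

```python
import heapq

def dijkstra(graph, source):
    """Plain Dijkstra's algorithm with a binary heap.
    graph maps node -> list of (neighbor, weight) pairs.
    Returns shortest-path distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```

The speed-up techniques above (goal direction, hierarchies, overlays) all preserve the distances this baseline computes while settling far fewer nodes per query.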
- IN: PROCEEDINGS OF THE EIGHTH WORKSHOP ON ALGORITHM ENGINEERING AND EXPERIMENTS (ALENEX06), SIAM, 2006
Cited by 24 (8 self)
An overlay graph of a given graph G = (V, E) on a subset S ⊆ V is a graph with vertex set S that preserves some property of G. In particular, we consider variations of the multi-level overlay graph used in [21] to speed up shortest-path computations. In this work, we follow up and present general vertex selection criteria and strategies of applying these criteria to determine a subset S inducing an overlay graph. The main contribution is a systematic experimental study where we investigate the impact of selection criteria and strategies on multi-level overlay graphs and the resulting speed-up achieved for shortest-path queries. Depending on selection strategy and graph type, a centrality index criterion, a criterion based on planar separators, and vertex degree turned out to be good selection criteria.
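The simplest distance-preserving overlay on a subset S is the complete distance graph on S. This is only a hedged sketch of the idea, not the multi-level construction of the paper, which stores far fewer edges.

```python
import heapq

def sssp(graph, s):
    """Single-source shortest paths (plain Dijkstra)."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def overlay(graph, S):
    """Complete overlay on vertex subset S: one edge per ordered pair
    (u, v) in S carrying the shortest-path distance in the full graph,
    so distances among S are preserved by construction."""
    return {u: [(v, d) for v, d in sssp(graph, u).items() if v in S and v != u]
            for u in S}

g = {1: [(2, 1)], 2: [(3, 1)], 3: [(1, 1)]}
ov = overlay(g, {1, 3})
# the overlay edge 1 -> 3 carries d(1, 3) = 2 from the full graph
```

Vertex-selection criteria of the kind studied above decide which nodes go into S; the overlay construction itself is independent of that choice.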
- In Proc. 2007 Int. Conf. on Very Large Data Bases (VLDB'07), 2007
Cited by 19 (2 self)
Efficient fastest path computation in the presence of varying speed conditions on a large scale road network is an essential problem in modern navigation systems. Factors affecting road speed, such as weather, time of day, and vehicle type, need to be considered in order to select fast routes that match current driving conditions. Most existing systems compute fastest paths based on road Euclidean distance and a small set of predefined road speeds. However, "History is often the best teacher". Historical traffic data or driving patterns are often more useful than the simple Euclidean distance-based computation because people must have good reasons to choose these routes, e.g., they may want to avoid those that pass through high crime areas at night or that likely encounter accidents, road construction, or traffic jams. In this paper, we present an adaptive fastest path algorithm capable of efficiently accounting for important driving and speed patterns mined from a large set of traffic data. The algorithm is based on the following observations: (1) The hierarchy of roads can be used to partition the road network into areas, and different path pre-computation strategies can be used at the area level, (2) we can limit our route search strategy to edges and path segments that are actually frequently traveled in the data, and (3) drivers usually traverse the road network through the largest roads available given the distance of the trip, except if there are small roads with a significant speed advantage over the large ones. Through an extensive experimental evaluation on real road networks we show that our algorithm provides desirable (short and well-supported) routes, and that it is significantly faster than competing methods.
- IN: 6TH WORKSHOP ON EXPERIMENTAL ALGORITHMS, 2007
Cited by 16 (5 self)
Many speed-up techniques for route planning in static graphs exist, only few of them are proven to work in a dynamic scenario. Most of them use preprocessed information, which has to be updated whenever the graph is changed. However, goal directed search based on landmarks (ALT) still performs correct queries as long as an edge weight does not drop below its initial value. In this work, we evaluate the robustness of ALT with respect to traffic jams. It turns out that, by increasing the efficiency of ALT, we are able to perform fast (down to 20 ms on the Western European network) random queries in a dynamic scenario without updating the preprocessing as long as the changes in the network are moderate. Furthermore, we present how to update the preprocessed data without any additional space consumption and how to adapt the ALT algorithm to a time-dependent scenario. A time-dependent scenario models predictable changes in the network, e.g. traffic jams due to rush hour.
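The core of ALT is a lower bound on remaining distance obtained from precomputed landmark distances via the triangle inequality. A hedged sketch of that potential function (written for the undirected case; directed graphs also need distances *to* each landmark):

```python
def alt_lower_bound(dist_from_landmark, v, t):
    """ALT lower bound on d(v, t).
    dist_from_landmark: dict landmark_id -> dict node -> distance.
    By the triangle inequality, d(L, t) - d(L, v) <= d(v, t) for every
    landmark L, so the maximum over landmarks is a valid (consistent)
    A* heuristic."""
    best = 0
    for d in dist_from_landmark.values():
        best = max(best, d[t] - d[v])
    return best

# Toy example: one landmark on a unit-weight path L0 - s - x - t,
# so d(L0, s) = 1 and d(L0, t) = 3.
landmarks = {"L0": {"s": 1, "t": 3}}
print(alt_lower_bound(landmarks, "s", "t"))  # 2, a valid lower bound on d(s, t)
```

The bound stays valid under the edge-weight increases discussed above, which is why ALT remains correct without re-running the preprocessing.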
- In 9th DIMACS Implementation Challenge, 2006
Cited by 14 (1 self)
We introduce the concept of transit nodes, as a means for preprocessing a road network, with given coordinates for each node and a travel time for each edge, such that point-to-point shortest-path queries can be answered extremely fast. The transit nodes are a set of nodes, as small as possible, with the property that every shortest path that is non-local in the sense that it covers a certain not too small Euclidean distance passes through at least one of these nodes. With such a set and precomputed distances from each node in the graph to its few, closest transit nodes, every non-local shortest path query becomes a simple matter of combining information from a few table lookups. For the US road network, which has about 24 million nodes and 58 million edges, we achieve a worst-case query processing time of about 10 microseconds (not milliseconds) for 99% of all queries. This improves over the best previously reported times by two orders of magnitude.
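The table-lookup step described above can be sketched in a few lines. All names here are illustrative, not the paper's API, and the precomputed tables are assumed given.

```python
def transit_distance(access, table, s, t):
    """Non-local distance d(s, t) via transit nodes:
    min over s's access nodes u and t's access nodes v of
    d(s, u) + d(u, v) + d(v, t).
    access: node -> dict of its access (transit) nodes with
            precomputed distances.
    table:  dict (u, v) -> precomputed transit-to-transit distance."""
    return min(
        du + table[(u, v)] + dv
        for u, du in access[s].items()
        for v, dv in access[t].items()
    )

# Toy precomputed data: s reaches transit node u at cost 2,
# t reaches transit node v at cost 3, and d(u, v) = 5.
access = {"s": {"u": 2}, "t": {"v": 3}}
table = {("u", "v"): 5}
print(transit_distance(access, table, "s", "t"))  # 10
```

With only a handful of access nodes per vertex, this reduces each non-local query to a constant number of lookups, which is what makes the microsecond query times plausible.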
- IN THE 9TH DIMACS IMPLEMENTATION CHALLENGE: SHORTEST PATHS, 2007
Cited by 13 (1 self)
We present significant improvements to a practical algorithm for the point-to-point shortest path problem on road networks that combines A* search, landmark-based lower bounds, and reach-based pruning. Through reach-aware landmarks, better use of cache, and improved algorithms for reach computation, we make preprocessing and queries faster while reducing the overall space requirements. On the road networks of the USA or Europe, the shortest path between two random vertices can be found in about one millisecond after one or two hours of preprocessing. The algorithm is also effective on two-dimensional grids.
- PROCEEDINGS OF THE 7TH WORKSHOP ON ALGORITHMIC APPROACHES FOR TRANSPORTATION MODELING, OPTIMIZATION, AND SYSTEMS (ATMOS 2007), 2007
Cited by 11 (7 self)
During the last years, impressive speed-up techniques for DIJKSTRA's algorithm have been developed. Unfortunately, recent research mainly focused on road networks. However, fast algorithms are also needed for other applications like timetable information systems. Even worse, the adaption of recently developed techniques to timetable information is more complicated than expected. In this work, we check whether results from road networks are transferable to timetable information. To this end, we present an extensive experimental study of the most prominent speed-up techniques on different types of inputs. It turns out that recently developed techniques are much slower on graphs derived from timetable information than on road networks. In addition, we gain amazing insights into the behavior of speed-up techniques in general.
Cited by 9 (2 self)
When you drive to somewhere "far away", you will leave your current location via one of only a few "important" traffic junctions. Starting from this informal observation, we develop an algorithmic approach, transit node routing, that allows us to reduce quickest-path queries in road networks to a small number of table lookups. We present two implementations of this idea, one based on a simple grid data structure and one based on highway hierarchies. For the road map of the United States, our best query times improve over the best previously published figures by two orders of magnitude. Our results exhibit various trade-offs between average query time (5 µs to 63 µs), preprocessing time (59 min to 1200 min), and storage overhead (21 bytes/node to 244 bytes/node).
, 2008
Cited by 7 (2 self)
Recently, modern tracking methods started to allow capturing the position of massive numbers of moving objects. Given this information, it is possible to analyze and predict the traffic density in a
network which offers valuable information for traffic control, congestion prediction and prevention. In this paper, we propose a novel statistical approach to predict the density on any edge of such
a network at some time in the future. Our method is based on short-time observations of the traffic history. Therefore, knowing the destination of each traveling individual is not required. Instead,
we assume that the individuals will act rationally and choose the shortest path from their starting points to their destinations. Based on this assumption, we introduce a statistical approach to
describe the likelihood of any given individual in the network to be located at a certain position at a certain time. Since determining this likelihood is quite expensive when done in a
straightforward way, we propose an efficient method to speed up the prediction which is based on a suffix-tree. In our experiments, we show the capability of our approach to make useful predictions
about the traffic density and illustrate the efficiency of our new algorithm when calculating these predictions. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=4027268","timestamp":"2014-04-21T05:54:49Z","content_type":null,"content_length":"41229","record_id":"<urn:uuid:324d9b8c-3940-4e8d-bd87-ee94ccb7c82d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Copyright Β© University of Cambridge. All rights reserved.
'Vector Walk' printed from http://nrich.maths.org/
Suppose that I am given a large supply of basic vectors $b_1=\pmatrix{2\cr 1}$ and $b_2=\pmatrix{0\cr 1}$.
Starting at the origin, I take a 2-dimensional 'vector walk' where each step is either a $b_1$ vector or a $b_2$ vector, either forwards or backwards.
Investigate the possible coordinates for the end destinations of my walk.
Can you find any other pairs of basic vectors which yield exactly the same set of destinations?
Can you find any pairs of basic vectors which yield none of these destinations?
Can you find any pairs of basic vectors which allow you to visit all integer coordinates?
In more formal mathematics, the points visited by such a vector walk would be called a lattice and the two basic vectors would be called the generators . Lattices which repeat themselves are
structurally interesting; the symmetry properties of such lattices are important in both pure mathematics and its applications to, for example, the properties of crystals. | {"url":"http://nrich.maths.org/6572/index?nomenu=1","timestamp":"2014-04-16T10:48:28Z","content_type":null,"content_length":"4295","record_id":"<urn:uuid:b62b9c3f-2c2a-46d8-a9c1-dfa6e51ca0bd>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
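For the given generators, every destination has the form a·b1 + b·b2 = (2a, a+b), so exactly the integer points with an even x-coordinate are reachable. A small enumeration sketch (the determinant remark in the comment is a standard lattice fact offered as a hint, not part of the original problem):

```python
from itertools import product

b1, b2 = (2, 1), (0, 1)

def destinations(max_steps):
    """All endpoints a*b1 + b*b2 with |a|, |b| <= max_steps."""
    pts = set()
    for a, b in product(range(-max_steps, max_steps + 1), repeat=2):
        pts.add((a * b1[0] + b * b2[0], a * b1[1] + b * b2[1]))
    return pts

pts = destinations(6)
assert all(x % 2 == 0 for x, y in pts)  # only even x-coordinates occur
# det [[2, 0], [1, 1]] = 2: generator pairs with |det| = 1
# (e.g. (1,0), (0,1)) reach every integer point, while |det| = 2
# reaches exactly half of them.
```

Any other pair of generators with determinant ±2 spanning the same set (for example (2, 0) and (0, 1)) yields the same lattice, which answers the first follow-up question in the problem.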
normal weight for 5 feet 8 inches
You asked:
normal weight for 5 feet 8 inches
For someone who is 5 feet 8 inches tall, a typical healthy weight is somewhere between 121.7 lb (8 stone 10 lb) and 164.4 lb (11 stone 10 lb).
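The quoted range matches the standard healthy BMI band of 18.5 to 25 applied to a height of 68 inches. A sketch using the imperial BMI formula; the band endpoints are the usual WHO/CDC convention, assumed here rather than stated in the answer itself.

```python
def weight_range_lb(height_in, bmi_low=18.5, bmi_high=25.0):
    """Weight range (lb) for a healthy BMI at the given height (inches).
    Imperial formula: BMI = 703 * weight / height**2, so
    weight = BMI * height**2 / 703."""
    factor = height_in ** 2 / 703.0
    return bmi_low * factor, bmi_high * factor

lo, hi = weight_range_lb(68)  # 5 ft 8 in = 68 inches
print(f"{lo:.1f} lb to {hi:.1f} lb")  # about 121.7 lb to 164.4 lb
```

Both endpoints reproduce the quoted figures to within a tenth of a pound.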
Help with Proofs
September 13th 2009, 12:48 PM #1
Oct 2007
I'm taking a new class and instructor isn't very helpful
1) If an-->L, then |an|-->|L|
2) Prove that lim(an-bn) = lim(an)-lim(bn), provided lim(an) and lim(bn) exist
3) Give an example where lim(an) and lim(bn) do not exist, but lim(an+bn) exists.
4) If {an} is a bounded sequence and if {bn} is a sequence converging to 0, then {anbn} converges to 0.
I have tons of questions, but I'll start with these.
Help is extremely appreciated!
I'm taking a new class and instructor isn't very helpful
1) If an-->L, then |an|-->|L|
2) Prove that lim(an-bn) = lim(an)-lim(bn), provided lim(an) and lim(bn) exist
3) Give an example where lim(an) and lim(bn) do not exist, but lim(an+bn) exists.
4) If {an} is a bounded sequence and if {bn} is a sequence converging to 0, then {anbn} converges to 0.
The secret to #1 is $\left| {\left| {a_n } \right| - \left| L \right|} \right| \leqslant \left| {a_n - L} \right|$.
For #4 Suppose that $\left( {\forall n} \right)\left[ {\left| {a_n } \right| \leqslant A} \right]$ then $\left| {a_n b_n } \right| = \left| {a_n } \right|\left| {b_n } \right| \leqslant A\left|
{b_n } \right|$
I'm taking a new class and instructor isn't very helpful
1) If an-->L, then |an|-->|L|
2) Prove that lim(an-bn) = lim(an)-lim(bn), provided lim(an) and lim(bn) exist
3) Give an example where lim(an) and lim(bn) do not exist, but lim(an+bn) exists.
4) If {an} is a bounded sequence and if {bn} is a sequence converging to 0, then {anbn} converges to 0.
I have tons of questions, but I'll start with these.
Help is extremely appreciated!
2. Since $\{a_n\}\to a$, $\exists~N_1$ such that $n>N_1$ implies that $|a_n-a|<\frac{\epsilon}{2}$. Similarly, since $\{b_n\} \to b$, $\exists~N_2$ such that $n>N_2$ implies that $|b_n-b|<\frac{\epsilon}{2}$.
Let $N=\max\{N_1,N_2\}$. Thus, when $n>N, |(a_n-b_n)-(a-b)| = \underbrace{|(a_n-a)+(b-b_n)| \leq |a_n-a|+|b-b_n|}_{triangle~inequality} < \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$.
Therefore $\lim(a_n-b_n)=a-b = \lim a_n-\lim b_n$.
3. Consider $a_n = 1+(-1)^n$ and $b_n=1-(-1)^n$.
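A quick numeric illustration of this example for #3 (a sketch):

```python
a = [1 + (-1) ** n for n in range(1, 11)]
b = [1 - (-1) ** n for n in range(1, 11)]
print(a)                              # oscillates between 0 and 2: no limit
print(b)                              # oscillates between 2 and 0: no limit
print([x + y for x, y in zip(a, b)])  # constant 2: the sum converges
```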
Thanks so much. I just had no idea where to start! Thank you!
September 13th 2009, 01:44 PM #2
September 14th 2009, 06:08 PM #3
September 14th 2009, 09:11 PM #4
Oct 2007 | {"url":"http://mathhelpforum.com/differential-geometry/102107-help-proofs.html","timestamp":"2014-04-17T04:37:03Z","content_type":null,"content_length":"45115","record_id":"<urn:uuid:de3925d9-25bf-4512-8be4-bc5955b7b95e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
These pages have moved to http://www.dr-mikes-maths.com/
The Interpolatory Polynomial
Given a set of n+1 points (x[0],f[0]),...,(x[n],f[n]), it can be shown that there is exactly one polynomial of degree n or less passing through the points. This polynomial is called the interpolating
or interpolatory polynomial for the points. There will be many other polynomials with degree greater than n which also pass through the points, but only one with degree n or less.
There are several ways to find the interpolatory polynomial. Three are presented here, the worst one first.
A direct method: If one writes the polynomial as p(x) = a[0]+a[1]x+a[2]x^2+...+a[n]x^n, then substituting the points (x[0],f[0]), (x[1],f[1]), etc into p(x) gives a system of n+1 equations for the
n+1 unknown coefficients a[0],a[1],...,a[n]. This system can then be solved, for example using Gaussian Elimination. This method is not a very good one, since the number of operations required to
solve the system of equations will be roughly proportional to n^3. If one doubles the number of points, it will octuple (multiply by 8) the number of operations required. We say that this method is O
(n^3), or "order n^3".
The Lagrange Interpolating Polynomial (LIP): Given x[0],x[1],...,x[n], we can define special polynomials l[i](x) which satisfy l[i](x[i]) = 1, and l[i](x[j]) = 0 for j ≠ i. It is easy enough to write
down the l[i](x):
l[i](x) = [(x-x[0])(x-x[1])...(x-x[i-1])(x-x[i+1])...(x-x[n])] / [(x[i]-x[0])(x[i]-x[1])...(x[i]-x[i-1])(x[i]-x[i+1])...(x[i]-x[n])].
Notice that each l[i](x) is a degree n polynomial. Then, we can find p(x) via:
p(x) = f[0]l[0](x) + f[1]l[1](x) + ... +f[n]l[n](x).
Notice that p(x[i]) = f[0].0+f[1].0+... + f[i-1].0+f[i].1+f[i+1].0+...+f[n].0, which is equal to f[i]. When written in this way as a linear combination of the l[i](x), p(x) is called the Lagrange
Interpolating Polynomial. It should be stressed, however, that it is exactly the same polynomial that would be obtained using the direct method. The LIP method is only order n^2, so that doubling the
number of points would only quadruple the number of operations. Therefore, for large n, this method is much faster than direct evaluation.
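The Lagrange form translates almost directly into code. This is a sketch; the sample points (taken from f(x) = x^2 + x + 1) are my own illustrative choice.

```python
def lagrange_interp(xs, fs, x):
    """Evaluate p(x) = sum over i of f[i] * l[i](x)."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += fi * li
    return total

# Points on f(x) = x^2 + x + 1:
print(lagrange_interp([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 3.0))  # → 13.0
```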
The Newton Divided Difference Interpolating Polynomial (NDDIP): Let D[i,j], for i+j ≤ n, be defined as follows. Let D[i,0] = f[i], and let
D[i,j] = (D[i+1,j-1] - D[i,j-1]) / (x[i+j] - x[i]).
The half-matrix containing all the D[i,j] is called the Divided Difference Table, since each column (after column 0) is made up of differences between entries of the previous column, divided by
differences between the x[k] values. Then, let p(x) be the polynomial defined by
p(x) = D[0,0] + (x-x[0])[ D[0,1] + (x-x[1])[ D[0,2] + ...[ D[0,n-1] + (x-x[n-1])[ D[0,n] ]]...]].
It may not be obvious, but this p(x) is a degree n polynomial passing through all the points (x[0],f[0]), (x[1],f[1]), ..., (x[n],f[n]). Equally non-obvious is the fact that this polynomial is
exactly the same as that obtained either directly or using the LIP. It is called the Newton Divided Difference Interpolating Polynomial, and takes order n^2 calculations to find. Although it is the
same order as the LIP, it is faster, but only by a constant factor.
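The divided-difference table and the nested evaluation above can be sketched in a few lines of code (the sample points, on f(x) = x^2 + x + 1, are my own illustrative choice):

```python
def divided_differences(xs, fs):
    """Return the top row D[0,0], D[0,1], ..., D[0,n] of the table."""
    n = len(xs)
    col = list(fs)                        # column j = 0 holds the f[i]
    coeffs = [col[0]]
    for j in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + j] - xs[i])
               for i in range(n - j)]
        coeffs.append(col[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate p(x) in the nested form shown above."""
    p = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        p = coeffs[k] + (x - xs[k]) * p
    return p

xs, fs = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]   # points on x^2 + x + 1
coeffs = divided_differences(xs, fs)
print(coeffs)                         # → [1.0, 2.0, 1.0]
print(newton_eval(xs, coeffs, 3.0))   # → 13.0
```

Note that only the top row of the table is kept, since that is all the nested form needs.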
The DotPlacer Applet (on this site) allows you to place a set of points on the screen, and have it draw the interpolating polynomial through the points. You can even move the points around, and watch
the curve change. Try it!
The algorithm used in the applet is the NDDIP method. To see examples of the three methods, follow this link. I have provided links to Mathworld in case you want to read more about the Lagrange
Interpolating Polynomial.
File translated from T[E]X by T[T]H, version 2.25. | {"url":"http://www.angelfire.com/mt/ofolives/DPpolint.html","timestamp":"2014-04-18T11:11:28Z","content_type":null,"content_length":"19168","record_id":"<urn:uuid:221c5447-a1c0-4426-86cc-c949787f9547>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
Questions from statistical inference
Select the correct answer and write the appropriate letter in the space provided.
______ 1. A single number used to estimate a population parameter is
a. the confidence interval. b. the population parameter.
c. a point estimate. d. the mean of the population.
______ 2. A range of values constructed from sample data so that the parameter occurs within that range at a specified
probability is
a. a confidence interval. b. the population parameter.
c. a point estimate. d. the mean of the population
______ 3. The size of the standard error is affected by the standard deviation of the sample and
a. a confidence interval. b. the population parameter.
c. the point estimate. d. the sample size
______ 4. Suppose we select 100 samples from a population. For each sample we construct a 95 percent confidence interval. We could expect about 95 percent of these confidence intervals to contain
a. a sample mean. b. the population mean.
c. a point estimate. d. the standard deviation of the population
______ 5. The t distribution is a continuous distribution, with many similarities to
a. the confidence interval. b. the population parameter.
c. the standard normal distribution. d. the mean of the population
______ 6. The t distribution is used when the population is normal and
a. the population standard deviation is unknown.
b. the population standard deviation is known.
c. the point estimate is unknown.
d. the mean of the population is known.
______ 7. If the level of confidence is decreased from 95 percent to 90 percent, the width of the corresponding interval will
a. be increased. b. be decreased.
c. stay the same. d. not have an effect on the level of confidence
______ 8. The finite population correction factor is used when
a. the sample is more than 5 percent of the population.
b. the sample is less than 5 percent of the population.
c. the sample is larger than the population.
d. the population cannot be estimated.
______ 9. A 90 percent confidence interval for means indicates that 90 out of 100 similarly constructed intervals will include the
a. sample mean. b. sampling error.
c. z value d. population mean.
______ 10. The fraction, ratio, or percent indicating the part of the sample or the population having a particular trait of interest is
a. a confidence interval. b. the population parameter.
c. a point estimate. d. the proportion.
Part II Answer the following questions. Be sure to show essential work.
11. As part of a safety check, the Pennsylvania Highway Patrol randomly stopped 25 cars and checked their tire pressure. The sample mean was 32 pounds per square inch with a sample standard
deviation of 2 pounds per square inch. Develop a 98 percent confidence interval for the population mean.
12. A human resource manager for Carver County is evaluating a recent effort to improve the health of the county's firefighters by providing an exercise room in each of the county's firehouses. Their
records prior to installing the exercise rooms indicate that the mean weight of all county firefighters is 198 pounds with a standard deviation of 13 pounds. A random sample of 40 firefighters taken
six months after the program was implemented revealed that the mean of the sample was 192 pounds. Construct a 96% confidence interval for the mean weight of all firefighters six months after the
exercise rooms were installed. Assume that σ has not changed. Does it appear that the program is working?
13. Suppose that Carver County only has 250 firefighters. Construct a 96 percent confidence interval for the mean weight of all firefighters six months after the exercise rooms were installed.
14. A manufacturer of diamond drill bits for industrial production drilling and machining wishes to investigate the length of time a drill bit will last while drilling carbon steel. The production of
the drill bits is very expensive, thus the number available for testing is small. A sample of 8 drill bits had a mean drilling time of 2.25 hours with a standard deviation of 0.5 hours. Is it
reasonable for the manufacturer to claim that the drill bits will last 2.5 hours?
15. Of a random sample of 90 firms with employee stock ownership plans, 50 indicated that the primary reason for setting up the plan was tax related. Develop a 90 percent confidence interval for the
population proportion of all such firms with this as the primary motivation.
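A sketch of how question 15 could be computed, assuming z ≈ 1.645 as the critical value for 90 percent confidence:

```python
import math

n, x = 90, 50
p = x / n                          # sample proportion
se = math.sqrt(p * (1 - p) / n)    # standard error of the proportion
z = 1.645                          # z for 90% confidence (from a z-table)
print(f"({p - z * se:.3f}, {p + z * se:.3f})")  # → (0.469, 0.642)
```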
16. A study of 305 computer chips found that 244 chips functioned properly. Develop a 99 percent confidence interval for the population proportion of properly functioning computer chips.
17. A correctional institution would like to report the mean amount of money spent per day on operating the facilities. How many days should be considered if a 95 percent confidence is used and the
estimate is to be within one hundred dollars? The standard deviation is $400.
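Question 17 follows the standard sample-size formula n = (z·σ/E)², rounded up. A sketch, assuming z ≈ 1.96 for 95 percent confidence:

```python
import math

z, sigma, E = 1.96, 400.0, 100.0
n = math.ceil((z * sigma / E) ** 2)
print(n)  # → 62
```

Questions 18–20 follow the same pattern (question 19 would use the proportion form n = p(1-p)(z/E)² with p = 0.5 when no estimate is available).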
18. The Corporate Lawyer, a magazine for corporate lawyers, would like to report the mean amount earned by lawyers in their area of specialization. How large a sample is required if the 97 percent
level of confidence is used and the estimate is to be within $2,500? The standard deviation is $16,000.
19. The Customer Relations Department at SkyBlue Airline wants to estimate the proportion of customers that do not check any luggage. The estimate is to be within 0.03 of the true proportion with 95
percent level of confidence. No estimate of the population proportion is available. How large a sample is required?
20. A survey is being conducted on a local mayoral election. If the poll is to have a 98 percent confidence level and must be within four percentage points, how many people should be surveyed?
Step by step method for computing test statistic , confidence interval, and answers to multiple choice questions are given in the answer. | {"url":"https://brainmass.com/statistics/confidence-interval/214110","timestamp":"2014-04-21T12:10:29Z","content_type":null,"content_length":"32116","record_id":"<urn:uuid:9375b3d6-7451-42c1-b71d-8eb2bffbcbf0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here are the slides to some of my talks:
Slides (pdf) from my talk at the "Logic Colloquium" in Wroclaw, 2007.
Slides (pdf) from my talk at "Infinity in Logic and Computation" in Cape Town, 2007.
Slides (pdf) from my talk at the workshop "Mathematical Logic: Proof Theory and Constructive Mathematics" in Oberwolfach, 2008.
Slides (pdf) from my talk at the "Logic Colloquium '08" in Bern, 2008.
Slides (pdf) from my talk at the workshop "Recent trends in Proof Theory", Bern, 2008.
Back to home page. | {"url":"http://folk.uio.no/philipge/talks.html","timestamp":"2014-04-16T15:59:07Z","content_type":null,"content_length":"1203","record_id":"<urn:uuid:11952c91-9f4e-46d2-9cce-23026b45e6da>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating Solar Energy
Hi, apparently you can calculate the amount of energy the sun delivers to each square meter of the earth using an umbrella, tin can, water and a thermometer. You put the tin can outside with some
water in it and put the umbrella over it until it reaches ambient temperature. You then take the umbrella off and measure how quickly the temperature of the water increases in the can. From this you
should get a value of roughly 1000 watts/square meter if you take the measurements at midday day near the equator where the sun is roughly at 90 degrees. Could someone explain exactly how this works
as I saw it on a program which didn't explain how you actually calculate it. What measurements of the water do you take and what maths do you do to arrive at 1000Watts/square meter? Thank you for any
help offered | {"url":"http://www.physicsforums.com/showthread.php?t=727226","timestamp":"2014-04-19T17:40:22Z","content_type":null,"content_length":"22560","record_id":"<urn:uuid:b242b08a-6d36-4958-b5a3-4bbab8a315d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
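The idea behind the measurement described above is that the power absorbed by the water equals mass × specific heat of water × rate of temperature rise, and dividing by the can's sun-facing area gives watts per square metre. A sketch with made-up measurement values (the mass, area, and heating rate below are all assumptions):

```python
c_water = 4186.0   # J/(kg*K), specific heat capacity of water
mass = 0.2         # kg of water in the can (assumed)
area = 3.5e-3      # m^2, area of the can facing the sun (assumed)
dT_dt = 0.004      # K/s, measured rate of temperature rise (assumed)

irradiance = mass * c_water * dT_dt / area
print(round(irradiance))  # → 957 W/m^2, near the expected ~1000 W/m^2
```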
integration by parts - using a previous integration
January 11th 2012, 09:56 AM #1
Jul 2009
integration by parts - using a previous integration
I am working on a problem which involves integrating $\int \frac{x^3}{1+x} dx$ [with limits 0, 1 (don't know how to do limits in latex)].
So I managed to integrate using substitution as follows:
let u = 1+x, du/dx = 1, limits: u = 1+1 = 2 and u = 1+0=1
x = u-1.
So get:
$\int \frac{(u - 1)^3}{u} du$ [with different limits 1, 2]
I expand out:
$\int u^2 - 3u + 3 - \frac{1}{u} du$
and then:
$\frac{u^3}{3} - \frac{3u^2}{2} + 3u - ln u$
and I get a result of 5/6 - ln2.
I am then asked to integrate this: $\int x^2 ln(1+x)$ with limits 0 to 1.
So I start integrating (by parts) and notice that second integration bit - I have this:
= $ln(1+x) . \frac{x^3}{3} - \frac{1}{3} \int \frac{x^3}{1+x}$
That last bit, I already integrated above.
So my question is, how do I combine the two. My start part of integration by parts and then the minus - and here use my previous result.
The previous result had different limits - so I presume I cant just use the result. Do I just substitute back in 1+x for the u above like this:
$ln(1+x) . \frac{x^3}{3} - \frac{1}{3}(\frac{(1+x)^3}{3} - \frac{3(1+x)^2}{2} + 3(1+x) - ln (1+x))$ - then use my limits 0, 1 as asked for this integration?
Re: integration by parts - using a previous integration
You can say that $\displaystyle \frac{x^3}{1+x} = x^2-x+1-\frac{1}{1+x}$
Re: integration by parts - using a previous integration
What you got is this: $\int_0^1 x^2 \ln (1+x) \, dx = \frac{x^3}{3} \cdot \ln (1+x) \Big\vert_0^1 - \frac{1}{3} \int_0^1 \frac{x^3}{1+x} \, dx$?
You already know the value of $\int_0^1 \frac{x^3}{1+x} \, dx$, just divide it by 3 and subtract, yielding:
$\int_0^1 x^2 \ln (1+x) \, dx = \frac{1^3}{3} \cdot \ln 2 - \frac{1}{3} \cdot \left( \frac{5}{6} - \ln 2 \right) = \frac{2 \ln 2}{3} - \frac{5}{18}$
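Both closed-form results can be checked numerically. This is a sketch using a simple midpoint rule; a library integrator such as scipy.integrate.quad would normally be used instead:

```python
import math

def midpoint(f, a, b, n=100_000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

i1 = midpoint(lambda x: x**3 / (1 + x), 0.0, 1.0)
i2 = midpoint(lambda x: x**2 * math.log(1 + x), 0.0, 1.0)
print(abs(i1 - (5/6 - math.log(2))) < 1e-6)        # → True
print(abs(i2 - (2*math.log(2)/3 - 5/18)) < 1e-6)   # → True
```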
January 11th 2012, 12:01 PM #2
January 11th 2012, 02:30 PM #3 | {"url":"http://mathhelpforum.com/calculus/195152-integration-parts-using-previous-integration.html","timestamp":"2014-04-16T10:46:32Z","content_type":null,"content_length":"39173","record_id":"<urn:uuid:22bd01a1-9842-4246-9757-6b1120fa88db>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
University Calculus : Elements With Early Transcendentals (09 Edition)
1. Functions and Limits
1.1 Functions and Their Graphs
1.2 Combining Functions; Shifting and Scaling Graphs
1.3 Rates of Change and Tangents to Curves
1.4 Limit of a Function and Limit Laws
1.5 Precise Definition of a Limit
1.6 One-Sided Limits
1.7 Continuity
1.8 Limits Involving Infinity
Questions to Guide Your Review
Practice and Additional Exercises
2. Differentiation
2.1 Tangents and Derivatives at a Point
2.2 The Derivative as a Function
2.3 Differentiation Rules
2.4 The Derivative as a Rate of Change
2.5 Derivatives of Trigonometric Functions
2.6 Exponential Functions
2.7 The Chain Rule
2.8 Implicit Differentiation
2.9 Inverse Functions and Their Derivatives
2.10 Logarithmic Functions
2.11 Inverse Trigonometric Functions
2.12 Related Rates
2.13 Linearization and Differentials
Questions to Guide Your Review
Practice and Additional Exercises
3. Applications of Derivatives
3.1 Extreme Values of Functions
3.2 The Mean Value Theorem
3.3 Monotonic Functions and the First Derivative Test
3.4 Concavity and Curve Sketching
3.5 Parametrizations of Plane Curves
3.6 Applied Optimization
3.7 Indeterminate Forms and L'Hopital's Rule
3.8 Newton's Method
3.9 Hyperbolic Functions
Questions to Guide Your Review
Practice and Additional Exercises
4. Integration
4.1 Antiderivatives
4.2 Estimating with Finite Sums
4.3 Sigma Notation and Limits of Finite Sums
4.4 The Definite Integral
4.5 The Fundamental Theorem of Calculus
4.6 Indefinite Integrals and the Substitution Rule
4.7 Substitution and Area Between Curves
Questions to Guide Your Review
Practice and Additional Exercises
5. Techniques of Integration
5.1 Integration by Parts
5.2 Trigonometric Integrals
5.3 Trigonometric Substitutions
5.4 Integration of Rational Functions by Partial Fractions
5.5 Integral Tables and Computer Algebra Systems
5.6 Numerical Integration
5.7 Improper Integrals
Questions to Guide Your Review
Practice and Additional Exercises
6. Applications of Definite Integrals
6.1 Volumes by Slicing and Rotation About an Axis
6.2 Volumes by Cylindrical Shells
6.3 Lengths of Plane Curves
6.4 Exponential Change and Separable Differential Equations
6.5 Work and Fluid Forces
6.6 Moments and Centers of Mass
Questions to Guide Your Review
Practice and Additional Exercises
7. Infinite Sequences and Series
7.1 Sequences
7.2 Infinite Series
7.3 The Integral Test
7.4 Comparison Tests
7.5 The Ratio and Root Tests
7.6 Alternating Series, Absolute and Conditional Convergence
7.7 Power Series
7.8 Taylor and Maclaurin Series
7.9 Convergence of Taylor Series
7.10 The Binomial Series
Questions to Guide Your Review
Practice and Additional Exercises
8. Polar Coordinates and Conics
8.1 Polar Coordinates
8.2 Graphing in Polar Coordinates
8.3 Areas and Lengths in Polar Coordinates
8.4 Conics in Polar Coordinates
8.5 Conics and Parametric Equations; The Cycloid
Questions to Guide Your Review
Practice and Additional Exercises
9. Vectors and the Geometry of Space
9.1 Three-Dimensional Coordinate Systems
9.2 Vectors
9.3 The Dot Product
9.4 The Cross Product
9.5 Lines and Planes in Space
9.6 Cylinders and Quadric Surfaces
Questions to Guide Your Review
Practice and Additional Exercises
10. Vector-Valued Functions and Motion in Space
10.1 Vector Functions and Their Derivatives
10.2 Integrals of Vector Functions
10.3 Arc Length and the Unit Tangent Vector T
10.4 Curvature and the Unit Normal Vector N
10.5 Torsion and the Unit Binormal Vector B
10.6 Planetary Motion
Questions to Guide Your Review
Practice and Additional Exercises
11. Partial Derivatives
11.1 Functions of Several Variables
11.2 Limits and Continuity in Higher Dimensions
11.3 Partial Derivatives
11.4 The Chain Rule
11.5 Directional Derivatives and Gradient Vectors
11.6 Tangent Planes and Differentials
11.7 Extreme Values and Saddle Points
11.8 Lagrange Multipliers
Questions to Guide Your Review
Practice and Additional Exercises
12. Multiple Integrals
12.1 Double and Iterated Integrals over Rectangles
12.2 Double Integrals over General Regions
12.3 Area by Double Integration
12.4 Double Integrals in Polar Form
12.5 Triple Integrals in Rectangular Coordinates
12.6 Moments and Centers of Mass
12.7 Triple Integrals in Cylindrical and Spherical Coordinates
12.8 Substitutions in Multiple Integrals
Questions to Guide Your Review
Practice and Additional Exercises
13. Integration in Vector Fields
13.1 Line Integrals
13.2 Vector Fields, Work, Circulation, and Flux
13.3 Path Independence, Potential Functions, and Conservative Fields
13.4 Green's Theorem in the Plane
13.5 Surface Area and Surface Integrals
13.6 Parametrized Surfaces
13.7 Stokes' Theorem
13.8 The Divergence Theorem and a Unified Theory
Questions to Guide Your Review
Practice and Additional Exercises
1. Real Numbers and the Real Line
2. Mathematical Induction
3. Lines, Circles, and Parabolas
4. Trigonometric Functions
5. Basic Algebra and Geometry Formulas
6. Proofs of Limit Theorems and L'Hopital's Rule
7. Commonly Occurring Limits
8. Theory of the Real Numbers
9. Convergence of Power Series and Taylor's Theorem
10. The Distributive Law for Vector Cross Products
11. The Mixed Derivative Theorem and the Increment Theorem
12. Taylor's Formula for Two Variables
| {"url":"http://www.powells.com/biblio/65-9780321533487-0","timestamp":"2014-04-18T09:49:44Z","content_type":null,"content_length":"75643","record_id":"<urn:uuid:a34fbf37-266e-4147-8c3b-e7fd74fcc132>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
Testing a Set of Data for Normal Distribution
Date: 08/02/2008 at 18:27:49
From: Bugs
Subject: How to prove a set of data is under Gaussian Distribution?
I have a set of data which i have obtained through experiments. I
need to prove that the data belongs to Gaussian distribution. How do
I do that? I am not that great in probability, so I am not sure how
its done.
Date: 08/04/2008 at 20:40:02
From: Doctor Achilles
Subject: Re: How to prove a set of data is under Gaussian Distribution?
Hi Bugs,
Thanks for writing to Dr. Math.
I had to test for normality once and it took me a long, long time to
figure out how. I ended up finding out a lot of valuable information
in my statistics text: Biostatistical Analysis by J.H. Zar (4th
edition). I will summarize my findings for you. I should warn you,
none of the methods for calculating normality are easy.
Depending on why you need to do this test, some preliminary
information may be of value.
First, it is very hard to determine normality with small sample sizes.
Depending on how skewed, etc. your data are, it may just not be
possible to conclude either way. In general, the assumption is that
data are normally distributed unless concluded otherwise, however for
the purposes of statistical tests performed on the data, that
assumption is not necessarily inviolate (see next paragraph).
Second, you may be trying to determine whether to perform a parametric
statistical test (such as a t-test or ANOVA) on your data or instead
perform a non-parametric test (such as a Wilcoxon test). If that is
the case, you should know that parametric tests are more powerful than
non-parametric tests. In other words, non-parametric tests might miss
a statistically significant difference that a parametric test would
find. As a result of this fact, it is always okay to run a
non-parametric test (even on data that is normally distributed or on
data that might be normally distributed).
One common test for normality with which I am personally NOT familiar,
is the Kolmogorov-Smirnov test. The math behind it is very involved,
and I would suggest you refer to other resources such as this page
Wikipedia: Kolmogorov-Smirnov Test
if you want to learn more about this test.
There are 2 methods that I have some familiarity with for measuring
normality of a data set.
The first and easiest is the Chi-square test. The advantage here is
the ease. The disadvantage is that is is not very powerful. In other
words, you may be unable to reject the hypothesis that your data is
normally distributed when another, more powerful test would detect a
deviation from normality. It is also the only test that you can run
on small sample sizes.
Let's use this example data set:
1.2, 1.4, 1.9, 3.1, 3.3, 3.6, 3.8, 4.2, 4.4, 6.1
To run this test for normality, first calculate the mean and standard
deviation for your data set.
Mean = 3.3
StDev = 1.5
Then, put your data into a histogram.
Bin | Observed
0-1 | 0
1-2 | 3
2-3 | 0
3-4 | 4
4-5 | 2
5-6 | 0
6-7 | 1
7-8 | 0
Next, make an "ideal" histogram based only on the mean and standard
deviation. In other words, for a perfectly normally distributed data
set with a mean of 3.3 and a standard deviation of 1.5, what part of
the data would we expect to fall into each of the bins?
The function for this is the Gaussian Distribution, which is defined as:
f(x) = a*e^(-(x-m)^2/(2s^2))
Where "e" is the base of natural logarithms
e = 2.71828...
"x" is a given value we might observe, "m" is the mean of our
distribution, "s" is the standard deviation, and "a" is a scaling
factor equal to the sample size divided by s*sqrt(2*pi) (with s = 1.5
here, that works out to 0.266 times the size of our original data set).
Our original data set had 10 items in it, so a = 0.266*10 = 2.66, the
mean of our original data set was 3.3, so m = 3.3, and the StDev of
our original data set was 1.5, so s = 1.5.
So our function becomes:
f(x) = 2.66e^(-(x-3.3)^2/(2*1.5^2))
f(x) = 2.66e^(-(x-3.3)^2/4.5)
Now we use this to generate a new set of values. To do this, we take
the integral of the distribution over each range. So, the integral of
the function from x=0 to x=1 is 0.49. That means that if we took 10
samples from a normal distribution, we would expect 0.49 occurrences
of a value between 0 and 1.
The integral from 1 to 2 is 1.30. So we would expect 1.30 occurrences
of a value between 1 and 2 if we took 10 samples.
We can generate a table of the expected number of occurrences of each
bin from our histogram:
Bin | Expected
0-1 | 0.49
1-2 | 1.30
2-3 | 2.28
3-4 | 2.59
4-5 | 1.92
5-6 | 0.93
6-7 | 0.29
7-8 | 0.06
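These expected counts can be reproduced by integrating the normal density over each bin, i.e. from differences of the normal CDF. A sketch using Python's statistics module:

```python
from statistics import NormalDist

nd = NormalDist(mu=3.3, sigma=1.5)
for lo in range(8):
    expected = 10 * (nd.cdf(lo + 1) - nd.cdf(lo))  # 10 samples total
    print(f"{lo}-{lo + 1} | {expected:.2f}")
```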
Now, we run the Chi-square test. For more information on how this
test works, check out:
Chi-Square Test
Essentially, what we do is set up a table of expected measurements and
actual measurements for each bin:
Bin | Expected | Observed
0-1 | 0.49 | 0
1-2 | 1.30 | 3
2-3 | 2.28 | 0
3-4 | 2.59 | 4
4-5 | 1.92 | 2
5-6 | 0.93 | 0
6-7 | 0.29 | 1
7-8 | 0.06 | 0
Then we take (expected - observed)^2 / expected for each row. This is
the chi-square value:

Bin | Expected | Observed | Chi-square
0-1 | 0.49     | 0        | 0.4900
1-2 | 1.30     | 3        | 2.2231
2-3 | 2.28     | 0        | 2.2800
3-4 | 2.59     | 4        | 0.7676
4-5 | 1.92     | 2        | 0.0033
5-6 | 0.93     | 0        | 0.9300
6-7 | 0.29     | 1        | 1.7383
7-8 | 0.06     | 0        | 0.0600

We add those all up and that gives us our chi-square statistic. The
sum is 8.4923.

With 8 bins, and two parameters (the mean and the standard deviation)
estimated from the data, we have 8 - 1 - 2 = 5 degrees of freedom. A
chi-square value of 8.49 on 5 degrees of freedom gives a p-value
between 0.25 and 0.1.
Traditionally, in statistics, you need a p-value of less than 0.05 to
reject the null hypothesis. In this case, the null hypothesis was
normality. Because our p value is greater than 0.05 (actually, it's
greater than 0.10), we cannot reject the null hypothesis. Therefore,
we have not proven that this data set is different from normality.
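The Pearson form of the chi-square statistic divides each squared difference by the expected count before summing; a sketch of that computation:

```python
observed = [0, 3, 0, 4, 2, 0, 1, 0]
expected = [0.49, 1.30, 2.28, 2.59, 1.92, 0.93, 0.29, 0.06]
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 2))  # → 8.49
```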
Phew! Ok, that was the first way to test normality.
You may have noticed in doing this that the size we chose for our
bins was somewhat arbitrary. What would have happened if I chose bins
of twice that size? Or of half?
The other test of normality is the most powerful but also the most
math intensive. It uses two different parameters: skew and kurtosis.
The math requires n>20, and really you need n>50 or so to have any
power, so this doesn't work with small sample sizes.
A normal distribution is symmetric about the mean. Skew is a measure
of how much the bell-curve for your data set is heavy on one side.
A normal distribution also has a specific width for a given height.
If you double the height, the width scales proportionally. However,
you could imagine stretching a bell curve out in weird ways without
changing its symmetry. You could have a sharp, pointy distribution,
or a fat, boxy one. The pointy ones have positive "kurtosis" and the
boxy ones have negative "kurtosis". A good statistics program should
be able to calculate kurtosis for you.
If your data set is larger than 20, you can try testing for normality
using the D'Agostino-Pearson test. The basic idea is to normalize the
measure of the kurtosis and the skewness to a common value (based on
the sample size) and then add those normalized values together. This
can then be tested for significant deviations from normality.
You can read more about the D'Agostino-Pearson test and get a table
that can be used in Excel here:
Wikipedia: Normality Test
Finally, another test that is related to the D'Agostino-Pearson test
but is a little simpler is the Jarque-Bera Test. It seems a little
more common and straight-forward. Details can be found here:
Wikipedia: Jarque-Bera Test
One item of note: depending on how your stats program calculates
kurtosis, you may or may not need to subtract 3 from kurtosis. See:
Wikipedia Talk: Jarque-Bera Test
The D'Agostino-Pearson test assumed that kurtosis of a Normal
Distribution was 0, but some stats programs (for reasons that mystify
me) have kurtosis of a normal distribution set to 3. You should
figure out which way your stats program calculates kurtosis.
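As a sketch, the Jarque-Bera statistic combines the population-moment skewness S and kurtosis K as n/6 · (S² + (K − 3)²/4), using the convention just discussed in which a normal distribution has kurtosis 3:

```python
def jarque_bera(data):
    """JB statistic; assumes the kurtosis-of-a-normal-is-3 convention."""
    n = len(data)
    m = sum(data) / n
    m2 = sum((x - m) ** 2 for x in data) / n
    m3 = sum((x - m) ** 3 for x in data) / n
    m4 = sum((x - m) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

# A perfectly symmetric sample has zero skew, so its JB value reflects
# only how far its kurtosis sits from 3 (tiny n used purely to illustrate;
# the test itself requires large samples):
print(round(jarque_bera([1, 2, 3, 4, 5]), 4))  # → 0.3521
```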
I hope this has been helpful. If you want to talk about this some
more or if you still are having trouble figuring out if your data set
is normally distributed, let me know.
- Doctor Achilles, The Math Forum
Date: 08/06/2008 at 10:17:58
From: Bugs
Subject: Thank you (How to prove a set of data is under Gaussian
Dear Doctor Achilles,
You are just amazing. Your explanations are so clear. I am so
thankful to you. I actually have a large data set around 4000 samples
for each different case. In some cases the bell curve is skewed, and in
some it's not. Overall I need to prove that the distribution
is Gaussian. I am planning to use D'Agostino-Pearson test after
reading your mail. I will also try other tests you mentioned.
Thank you so much for all the trouble. | {"url":"http://mathforum.org/library/drmath/view/72065.html","timestamp":"2014-04-20T12:20:25Z","content_type":null,"content_length":"15239","record_id":"<urn:uuid:d9487054-5068-427e-8037-08354f3b0ff2>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Web Resources
Scooter Quest: Decimal Place Value
Help Jimmy make enough money to buy a scooter for his paper route. In the interactive math game, students will earn Jimmy money by identifying the correct decimal place value.
Virtual Manipulatives
Teacher-led app that compares fractions, decimals, & percents in a concrete way.
Math Makes a Connection: The Target Game
This is an interactive math game for practicing fractions and decimals. The game is part of the Connected Math Game site. The goal of the game is to create decimals or fractions to get as close to
the target or benchmark decimal or fraction as possible. This interactive game uses a 20-sided die with the digits 0-9 repeated twice.
Learning Activities
Comparing Values - Worksheet Creater
Create your own math facts worksheets (with answer sheets) for comparing values.
Snag a Spoon!
These are step-by-step instructions for playing a math review game where students gain an understanding of the basic equivalents (percents, fractions, and decimals) through a classic card game. This
may be altered to fit AMSTI.
| {"url":"http://alex.state.al.us/weblinks_category.php?stdID=53747","timestamp":"2014-04-19T14:30:44Z","content_type":null,"content_length":"30257","record_id":"<urn:uuid:fb335455-b061-428d-b54f-5cf11da32a1d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sample Real Life Problems Involving Quadratic Equation
Quadratic equation - wikipedia, the free encyclopedia, In the quadratic formula, the expression underneath the square root sign is called the discriminant of the quadratic equation, and is often
represented using an upper. Quadratic formula - wikipedia, the free encyclopedia, In basic algebra, the quadratic formula is the solution of the quadratic equation. there are other ways to solve the
quadratic equation instead of using the quadratic. Math forum - ask dr. math archives: quadratic equations, Quadratic equations, a selection of answers from the dr. math archives. quadratic equation
what is the formula for the quadratic equation? names of polynomials.
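The quadratic formula these snippets refer to is easy to state in code; a minimal sketch (my illustration, not from any of the linked pages; `cmath.sqrt` handles a negative discriminant by returning complex roots):

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    if a == 0:
        raise ValueError("a must be nonzero for a quadratic equation")
    d = b * b - 4 * a * c          # the discriminant
    return (-b + cmath.sqrt(d)) / (2 * a), (-b - cmath.sqrt(d)) / (2 * a)
```

For example, x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), and the function returns exactly those roots.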
Can you give some examples of real-life applications of a, Can you give some examples of real-life applications of a quadratic function using x or y? or a different form y = ax^2 + bx + c where y = final
displacement and x = time. Free calculus tutorials and problems, Interactive and analytical tutorials and problems with detailed solutions are presented. Grade 7 » geometry » solve real-life and
mathematical, Grade 7 » geometry » solve real-life and mathematical problems involving angle measure, area, surface area, and volume. » 6.
Ad alg top - west texas a&m university, College algebra tutorial 19: radical equations and equations involving rational exponents. Free online tutorials on functions and algebra, Free analytical
tutorials using step by step approach with examples and matched exercises are presented here. detailed solutions to the examples are also included.. Algebra homework help, algebra solvers, free math
tutors, Pre-algebra, algebra i, algebra ii, geometry: homework help by free math tutors, solvers, lessons. each section has solvers (calculators), lessons, and a place where.
| {"url":"http://www.myhometone.com/tag/sample-real-life-problems-involving-quadratic-equation","timestamp":"2014-04-21T09:48:41Z","content_type":null,"content_length":"17176","record_id":"<urn:uuid:cbfee210-90c9-46cf-847b-7448cd4a8419>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00636-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number theory prime divisibility problem
December 17th 2013, 07:42 PM
Number theory prime divisibility problem
Hello, as part of a proof I am working on, I need to find all $p$ s.t. $p \mid 3^n + 5^n$ and $p \mid n^2 - 1$, where $n\in\mathbb{Z^{+}}$ and $p$ is prime. This part of my proof has stumped me: I
have had no success in attacking it, and I am not sure whether I am just going down a blind alley here or whether there is a more efficient way. Any help/ideas/hints would be much appreciated.
December 17th 2013, 07:50 PM
Re: Number theory prime divisibility problem
Finding all such $p$ is not a proof. Proving that the set of all such $p$ is exactly the set you find is a proof. What do you have so far?
December 18th 2013, 05:57 AM
Re: Number theory prime divisibility problem
Thank you for taking the time to reply. :)
My apologies, that was poor wording in post #1. This is not the question itself that I am working on, but rather the route I have taken towards my solution of another question. Essentially, I
am trying to prove that $\gcd{(3^n + 5^n , n^2 - 1)} = 8, \ n \in \mathbb{Z^{+}}$ (what I was trying to convey in the first post was that I was ultimately trying to show that the only such $p$ that
exists is 2; I do not know whether this is true, I have only conjectured it). I haven't been able to make any progress on this part of the solution, so I was inquiring whether it is even true
to begin with (i.e. whether anyone can think of a counterexample), and, even if it is true, whether a proof of it is tractable (and whether anyone could give me a hint as to how to start proving it).
December 18th 2013, 07:23 AM
Re: Number theory prime divisibility problem
December 18th 2013, 08:58 AM
Re: Number theory prime divisibility problem
Apologies to bother you again but I am stuck. The problem (which I was trying to find a solution for) is to find all positive integers $n$ s.t. $\dfrac{3^n + 5^n}{n^2 -1} \in \mathbb{Z^{+}}$. My
initial attempt was that I showed solutions can only exist for odd $n$ and then showed that $3^n + 5^n \equiv n^2 - 1 \equiv 0 \pmod{8}$ for all odd $n\geq 3$. I conjectured that the only
solution is for $n=3$ (dunno if the conjecture is correct but $n=3$ is a solution) and tried to prove this by what I set out in the OP - I wished to show that the only prime which divides both is
2 and I would be done (as $3^n + 5^n = 8 \sum_{i=0}^{n-1} 5^i (-3)^{n-1-i}$ and $\sum_{i=0}^{n-1} 5^i (-3)^{n-1-i} \equiv 1 \not\equiv 0 \pmod{2}$ for odd $n$) but 2 is not the only such prime
and now I do not know how to proceed. Am I along the right lines for a solution or is there a better method? Thank you.
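As a quick numerical sanity check on the problem statement (my sketch, not from the thread): brute-force search for small $n$ with $n^2 - 1 \mid 3^n + 5^n$.

```python
def divides(n):
    """True when n**2 - 1 divides 3**n + 5**n (requires n > 1)."""
    return (3 ** n + 5 ** n) % (n * n - 1) == 0

# search the small cases; says nothing about larger n
small_solutions = [n for n in range(2, 200) if divides(n)]
```

For instance $n=3$ works, since $(3^3+5^3)/(3^2-1) = 152/8 = 19$; the search gives no information beyond the range it covers.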
December 18th 2013, 04:25 PM
Re: Number theory prime divisibility problem
^embarrassingly i have found yet another hole in my above argument in post #5. :( even $n$ can very well be a solution | {"url":"http://mathhelpforum.com/number-theory/225126-number-theory-prime-divisibility-problem-print.html","timestamp":"2014-04-23T20:10:12Z","content_type":null,"content_length":"13172","record_id":"<urn:uuid:e3491b38-7be2-41b1-94dd-fd849d19cd5f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - The Should I Become a Mathematician? Thread
this method works, modified, on any problem which can be factored into first order operators, and where one can solve first order problems. another example is the so called Eulers equation.
Similarly for Euler's equation, x^2y'' +(1-a-b)xy' + ab y = 0, with
indicial equation
(r-a)(r-b) = 0, just factor x^2y'' +(1-a-b)xy' + ab y = (xD-a)(xD-b)y = 0,
and solve (xD-a)z = 0, and then (xD-b)y = z.
As above, this proves existence and uniqueness simultaneously, and also
handles the equal roots cases at the same time, with no guessing.
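A quick numerical sanity check on the factorization (my sketch; the indicial roots a=2, b=5 are arbitrary): plugging the trial solution y = x^r into Euler's equation leaves a zero residual exactly when r equals one of the roots a or b.

```python
def euler_residual(a, b, r, x):
    """Value of x^2 y'' + (1-a-b) x y' + a*b*y at y = x**r."""
    y = x ** r
    yp = r * x ** (r - 1)            # y'
    ypp = r * (r - 1) * x ** (r - 2) # y''
    return x * x * ypp + (1 - a - b) * x * yp + a * b * y
```

This matches the indicial equation (r-a)(r-b) = 0: the residual is x^r * (r-a)(r-b), so it vanishes only at the two roots.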
Here you have to use, I guess, integrating factors to solve the first order cases, and be careful when "multiplying" the non constant coefficient operators (xD-a), since you must use the leibniz rule.
these are usually done by powers series methods, or just stating that the indicial equation should be used, again without proving there are no other solutions. of course the interval of the solution
must be specified, or else I believe the space of solutions is infinite dimensional. | {"url":"http://www.physicsforums.com/showpost.php?p=1013875&postcount=132","timestamp":"2014-04-21T07:12:11Z","content_type":null,"content_length":"8278","record_id":"<urn:uuid:791fd479-a0c2-4091-9a42-ade074716ab8>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
Event Study Tools
Event studies are generally the first step in a two-step analysis process that aims at identifying the determinants of stock market responses to distinct event types. They produce as an outcome
abnormal returns (ARs), which are cumulated over time to cumulative abnormal returns (CARs) and then 'averaged' - in the case of so called sample studies - over several observations of identical
events to AARs and CAARs - where the second 'A' stands for 'average'. These event study results are then oftentimes used as dependent variables in regression analyses.
Explaining abnormal returns by means of regression analysis, however, is only meaningful if the abnormal returns are significantly different from zero, and thus not the result of pure chance. This
assessment will be made by hypothesis testing. Following general principles of inferential statistics, the null hypothesis ($H_0$) maintains that there are no abnormal returns within the event
window, whereas the alternative hypothesis ($H_1$) suggests the presence of ARs within the event window. Formally, the testing framework reads as follows:
$$H_0: \mu = 0 \qquad (1)$$
$$H_1: \mu \neq 0 \qquad (2)$$
Event studies may imply a hierarchy of calculations, with ARs being compounded to CARs, which can again be 'averaged' to CAARs in cross-sectional studies (sometimes also called 'sample studies').
There is a need for significance testing at each of these levels. $\mu$ in the abovementioned equations may thus represent ARs, CARs, and CAARs. Let's briefly revisit these three different forms of
abnormal return calculations, as presented in the introduction:
$$AR_{i,t}=R_{i,t}-E(R_{i,t}) (3)$$
$$AAR_{t}= \frac{1}{N} \sum\limits_{i=1}^{N}AR_{i,t} (4)$$
$$CAR_{i}(t_1,t_2)=\sum\limits_{t=t_1}^{t_2} AR_{i,t} (5)$$
$$CAAR(t_1,t_2)=\frac{1}{N}\sum\limits_{i=1}^{N}CAR_{i}(t_1,t_2) (6)$$
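Equations (4) through (6) are mechanical once the firm-by-day abnormal returns are in hand; a minimal sketch (plain Python, rows = firms, columns = event-window days):

```python
def aar(ar, t):
    """Equation (4): average abnormal return across firms on day t."""
    return sum(row[t] for row in ar) / len(ar)

def car(ar, i, t1, t2):
    """Equation (5): firm i's abnormal returns cumulated over days t1..t2."""
    return sum(ar[i][t1:t2 + 1])

def caar(ar, t1, t2):
    """Equation (6): cross-sectional average of the firms' CARs."""
    return sum(car(ar, i, t1, t2) for i in range(len(ar))) / len(ar)
```

With two firms and a three-day event window, for example, `caar(ar, 0, 2)` averages the two firms' three-day CARs.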
The literature on event study test statistics is very rich, as is the range of significance tests. Generally, significance tests can be grouped in parametric and nonparametric tests (NPTs).
Parametric tests assume that individual firm's abnormal returns are normally distributed, whereas nonparametric tests do not rely on any such assumptions. In research, scholars commonly complement a
parametric test with a nonparametric tests to verify that the research findings are not due to eg. an outlier (see Schipper and Smith (1983) for an example). Table 1 provides an overview and links to
the formulas of the different test statistics.
Table 1: 'Recommended' Significance Tests per Test Level (note: work in progress/ draft)
N.B.: *-labeled test results are included in the output of EventStudyTools' abnormal return calculator
Parametric test statistics are grounded in the classic t-test. Yet, scholars have further developed the test to correct for the t-test's prediction error. The most widely used of these 'scaled' tests
are those developed by Patell (1976) and Boehmer, Musumeci and Poulsen (1991). Among the nonparametric tests, the rank test of Corrado (1989) and the sign-based test of Cowan (1992) are very popular. EST
provides these test statistics (soon) in its analysis results reports.
Why different test statistics are needed
The choice of test statistic should be informed by the research setting and the statistical issues the analyzed data holds. Specifically, event-date clustering poses a problem leading to (1)
cross-sectional correlation of abnormal returns, and (2) distortions from event-induced volatility changes. Cross-sectional correlation arises when sample studies focus on (an) event(s) which
happened for multiple firms at the same day(s). Event-induced volatility changes, instead, is a phenomenon common to many event types (e.g., M&A transactions) that becomes problematic when events are
clustered. As consequence, both issues introduce a downward bias in the standard deviation and thus overstate the t-statistic, leading to an over-rejection of the null hypothesis.
Comparison of test statistics
There have been several attempts to address these statistical issues. Patell (1976, 1979), for example, tried to overcome the t-test's proneness to event-induced volatility by standardizing the event
window's ARs. He used the dispersion of the estimation interval's ARs to limit the impact of stocks with high return standard deviations. Yet, the test too often rejects the true null hypothesis,
particularly when samples are characterized by non-normal returns, low prices or little liquidity. Also, the test has been found to be still affected by event-induced volatility changes (Campbell and
Wasley, 1993; Cowan and Sergeant, 1996; Maynes and Rumsey, 1993, Kolari and Pynnonen, 2010). Boehmer, Musumeci and Poulsen (1991) resolved this latter issue and developed a test statistic robust
against volatility-changing events.
The nonparametric rank test of Corrado and Zivney (1992) (RANK) applies re-standardized event window returns and has proven robust against induced volatility and cross-correlation. Sign tests are
another category of tests. One advantage the tests' authors stress over the common t-test is that they are apt to also identify small levels of abnormal returns. Moreover, scholars have recommended
the use of nonparametric sign and rank tests for applications that require robustness against non-normally distributed data. Past research (e.g. Fama, 1976) has argued that daily return distributions
are more fat-tailed (exhibit very large skewness or kurtosis) than normal distributions, which suggests the use of nonparametric tests.
Several authors have further advanced the sign and ranked tests pioneered by Cowan (1992) and Corrado and Zivney (1992). Campbell and Wasley (1993), for example, improved the RANK test by introducing
an incremental bias into the standard error for longer CARs, creating the Campbell-Wasley test statistic (CUM-RANK). Another NPT is the generalized rank test (GRANK), which uses a Student
t-distribution with T-2 degrees of freedom (T is the number of observations). It seems that GRANK is one of the most powerful instruments for both shorter and longer CAR-windows.
The Cowan (1992) sign test (SIGN) is also used for testing CARs by comparing the share of positive ARs close to an event to the proportion from a normal period. SIGN's null hypothesis includes the
possibility of asymmetric return distribution. Because this test considers only the sign of the difference between abnormal returns, associated volatility does not influence in any way its rejection
rates. Thus, in the presence of induced volatility scholars recommend the use of BMP, GRANK, SIGN.
Most studies have shown that if the focus is only on single day ARs, the means of all tests stick close to zero. In the case of longer event windows, however, the mean values deviate from zero.
Compared to their nonparametric counterparts, the Patell and the BMP-tests produce means that deviate quite fast from zero, whereas the standard deviations of all tests gravitate towards zero. For
longer event windows, academics recommend nonparametric over parametric tests.
Therefore, the main idea is that in case of longer event-windows, the conclusions on the tests power should be very carefully drawn because of the many over- or under-rejections of the null
hypothesis. Overall, comparing the different test statistics yields the following insights:
1. Parametric tests based on scaled abnormal returns perform better than those based on non-standardized returns
2. Generally, nonparametric tests tend to be more powerful than parametric tests
3. The generalized rank test (GRANK) is one of the most powerful test for both shorter CAR-windows and longer periods
Table 2 provides a short summary of the individual test statistics discussed above.
Table 2: Summary Overview of Main Test Statistics
1. t-test [ORDIN]. Type: P. Strengths: simplicity. Weaknesses: prone to cross-sectional correlation and volatility changes.
2. Standardized residual test [Patell-test], Patell (1976). Type: P. Antecedent: ORDIN. Strengths: immune to the way in which ARs are distributed across the (cumulated) event window.
3. Standardized cross-sectional test [BMP-test], Boehmer, Musumeci and Poulsen (1991). Type: P. Antecedent: Patell-test. Strengths: immune to the way in which ARs are distributed across the (cumulated) event window.
4. Adjusted BMP-test [J-test], Kolari and Pynnönen (2010). Type: P. Antecedent: BMP-test. Strengths: accounts for cross-correlation and event-induced volatility. Weaknesses: prone to changes in cross-correlation during event time.
5. Generalized sign test [SIGN], Cowan (1992). Type: NP. Strengths: accounts for skewness in security returns. Weaknesses: poor performance for longer event windows.
6. Rank test [RANK], Corrado and Zivney (1992). Type: NP. Weaknesses: loses power for longer CARs (e.g., [-10,10]).
7. Campbell-Wasley test statistic [CUM-RANK], Campbell and Wasley (1993). Type: NP. Antecedent: RANK. Weaknesses: loses power for longer CARs (e.g., [-10,10]).
8. Generalized rank test [GRANK], Kolari and Pynnonen (2010). Type: NP. Antecedent: RANK. Strengths: overcomes the issues extant nonparametric tests have with cumulative abnormal returns; immune to the way in which ARs are distributed across the (cumulated) event window.
9. Generalized sign test [GSIGN]. Type: NP. Antecedent: SIGN.
10. Wilcoxon signed-rank test, Wilcoxon (1945). Type: NP. Strengths: considers that both the sign and the magnitude of ARs are important.
Notes: P = parametric, NP = nonparametric; insights about strengths and weaknesses were compiled from Kolari and Pynnonen (2011)
Formulas, acronyms, and the decision rule applicable to all test statistics
$T= t_2- t_1+1$ (days in the event window), with $t_1$ denoting the 'earliest' day of the event window, and $t_2$ the 'latest' day of the event window; $N$ = sample size (i.e., number of events/
observations); $EW$ = Estimation Window, with $EW_{min}$ denoting the 'earliest' day of the estimation window, and $EW_{max}$ the 'latest' day of the estimation window; $\hat{\sigma}^2_{AR_i}$, resp.
$\hat{\sigma}_{AR_i}$ represent the variance, resp. the standard deviation as produced by the regression analysis over the estimation window according to the following formula.
$$\hat{\sigma}^2_{AR_i} = \frac{1}{M_i-dF} \sum\limits_{t=EW_{min}}^{EW_{max}}(AR_{i,t})^2$$
$M_{i}$ refers to the number of non-missing (i.e., matched) returns and $dF$ to the number of degrees of freedom consumed by the estimated parameters (for the market model, $dF = 2$); Please note: If you use the ARC of this website, the
'analysis report'-CSV provides you with $\hat{\sigma}_{AR_i}$ for each event/ observation.
The decision rule for all test statistics mandates the rejection of the null hypothesis with a confidence level of $1-\alpha$ when the test statistic is larger than the critical value from the
t-table (i.e., if $|t(AR_{i,t})|>t_c(\alpha)$).
t statistic:
- AR: $t_{AR_{i,t}}=\frac{AR_{i,t}}{\hat{\sigma}_{AR_i}}$
- CAR: $t_{CAR(t_1,t_2)}=\frac{CAR_i(t_1,t_2)}{\sqrt{T\,\hat{\sigma}^2_{AR_i}}}$
- CAAR: $t_{CAAR(t_1,t_2)}=\frac{CAAR(t_1,t_2)}{\hat{\sigma}_{CAAR(t_1,t_2)}}$

Standard deviation:
- AR: $\hat{\sigma}_{AR_i} = \sqrt{\frac{1}{M_i-dF} \sum\limits_{t=EW_{min}}^{EW_{max}}(AR_{i,t})^2}$
- CAR: $\hat{\sigma}_{CAR(t_1,t_2)} = \sqrt{T\,\hat{\sigma}^2_{AR_i}}$
- CAAR: $\hat{\sigma}_{CAAR(t_1,t_2)} = \sqrt{\frac{1}{N(N-dF)} \sum\limits_{i=1}^{N}(CAR_i(t_1,t_2)-CAAR(t_1,t_2))^2}$

Please note: There are alternative approaches to calculate the standard deviations for CARs and CAARs (see, for example, Campbell, Lo and MacKinlay (1997)).
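A sketch of the AR/CAR entries in the table (my illustration, assuming the market model's two estimated parameters, i.e. $dF = 2$):

```python
import math

def sigma_ar(residuals, dF=2):
    """Estimation-window standard deviation of the ARs, with dF estimated
    parameters (two for the market model) removed from the count."""
    return math.sqrt(sum(r * r for r in residuals) / (len(residuals) - dF))

def t_car(car_value, sigma, T):
    """t statistic for a CAR cumulated over T event-window days."""
    return car_value / math.sqrt(T * sigma ** 2)
```

The decision rule above then compares `abs(t_car(...))` against the critical value from the t-table.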
The cross-sectional t-test does not account for event-induced variance and thus overstates significance levels. Patell (1976, 1979) suggested to correct for this overstatement by first standardizing
each $AR_i$ before calculating the test statistic using the standardized $AR_i$.
$$SAR_{i,t} = \frac{AR_{i,t}}{S(AR_{i,t})}$$
As the event-window abnormal returns are out-of-sample predictions, Patell adjusts the standard error by the forecast-error:
$$S(AR_{i,t}) = \hat{\sigma}_{AR_i} \sqrt{1+\frac{1}{M_i}+\frac{(R_{m,t}-R_{m,EW})^2} {\sum\limits_{t=EW_{min}}^{EW_{max}}(R_{m,t}-R_m)^2}}$$
'Cumulating' these standardized abnormal returns over time gives us:
$$CSAR_{i}(t_1, t_2) = \sum\limits_{t=t_1}^{t_2} \frac{AR_{i,t}}{S(AR_{i,t})}$$
Assuming a Student's t-distribution with $M_i-d$ degrees of freedom (Campbell, Lo, MacKinlay (1997)), the expected value of $CSAR_i(t_1,t_2)$ is zero and the standard deviation assumes the following form:
$$\hat{\sigma}_{CSAR_i} = \sqrt{T\frac{M_i-d}{M_i-2d}}$$
The t-statistic reads as:
$$t_{Patell} = \frac{1}{\sqrt{N}}\sum\limits_{i=1}^{N}\frac{CSAR_i(t_1,t_2)}{\hat{\sigma}_{CSAR_i}}$$
Similarly, Boehmer, Musumeci and Poulsen (1991) proposed a standardized cross-sectional method which is robust to the variance induced by the event. It builds on the standardized residual test:
$$\overline{CSAR(t_1,t_2)} = \frac{1}{N}\sum\limits_{i=1}^{N}CSAR_i(t_1,t_2)$$
$$\hat{\sigma}(\overline{CSAR(t_1,t_2)}) = \sqrt{\frac{1}{N(N-1)}\sum\limits_{i=1}^{N}(CSAR_i(t_1,t_2)-\overline{CSAR(t_1,t_2)})^2}$$
$$t_{BMP}= \frac{\overline{CSAR(t_1,t_2)}}{\hat{\sigma}(\overline{CSAR(t_1,t_2)})}$$
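The BMP formulas condense into a few lines of code; a sketch (input: one standardized cumulative abnormal return per event):

```python
import math

def t_bmp(csars):
    """BMP statistic: cross-sectional t-test on the standardized CARs."""
    n = len(csars)
    mean = sum(csars) / n
    # variance of the cross-sectional mean, divisor n*(n-1)
    var_of_mean = sum((x - mean) ** 2 for x in csars) / (n * (n - 1))
    return mean / math.sqrt(var_of_mean)
```

Because each event's CSAR already carries its own scale, event-induced volatility inflates the cross-sectional spread as well as the mean, which is what makes the ratio robust.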
[4] J-test (adjusted BMP-test)
Kolari and Pynnönen (2010) propose a modification to the BMP-test to account for cross-correlation of the abnormal returns. Using the standardized abnormal returns ($SAR_{i,t}$) defined as in the
previous section, and defining $\bar r$ as the average of the sample cross-correlations of the estimation period residuals, the J-test can be written as:
$$t_{J}=\frac{\overline{SAR}_{i,0}\sqrt N}{\hat{\sigma}_{SAR} \sqrt{1+(N-1)\bar r}}$$
Where $\overline{SAR}_{i,0}$ is the mean of the $SAR$ at the event date, $N$ the number of firms, and the estimated standard deviation $\hat{\sigma}_{SAR}$ is the cross-sectional standard deviation of the event-date $SAR$s, $\hat{\sigma}_{SAR}=\sqrt{\frac{1}{N-1}\sum\limits_{i=1}^{N}(SAR_{i,0}-\overline{SAR}_{i,0})^2}$.
Assuming the square-root rule holds for the standard deviation of different return periods, this test can be used when considering Cumulated Abnormal Returns. While the average cross-correlation
remains unchanged, the $SAR_{i,0}$ should be replaced by $CSAR_{i}(t_1,t_2)$ in the estimation.
This sign test was proposed by Cowan (1992) and builds on the ratio of positive cumulative abnormal returns $p^{+}_0$ present in the event window. Under the null hypothesis, this ratio should
not significantly differ from 0.5.
$$t_{SIGN}= \frac{p^+_0-0.5}{\sqrt{0.5(1-0.5)/N}}$$
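In code, this statistic is just a one-sample proportion test; a minimal sketch:

```python
import math

def t_sign(cars):
    """Sign test: share of positive CARs against the 0.5 expected under H0."""
    n = len(cars)
    p_plus = sum(1 for c in cars if c > 0) / n
    return (p_plus - 0.5) / math.sqrt(0.5 * 0.5 / n)
```

Only the signs of the CARs enter the calculation, which is why event-induced volatility leaves the rejection rate untouched.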
In a first step, Corrado's (1989) rank test transforms abnormal returns into ranks. This ranking is done for each event and stock combination and for all abnormal returns of both the event and
the estimation window ('tied ranks').
$$K_{i, t}=rank(AR_{i, t})$$
Thereafter, the average rank is calculated as 0.5 plus half the number of returns observed in the event ($L_2$) and the estimation window ($L_1$).
$$AK_{i, L_1+L_2}=0.5+\frac{(L_1+L_2)}{2}$$
The t-statistic then denotes as:
$$T_{Corrado}=\frac{1}{\sqrt{N}}\sum\limits_{i=1}^{N}(K_{i, t}-AK_{i, L_1+L_2})/\hat{\sigma_U}$$
The standard deviation is calculated as follows. $l_{1b}$ denotes the first day of the estimation window and $l_{2e}$ the last day of the event window.
$$\hat{\sigma}_U=\sqrt{\frac{1}{L_1+L_2}\sum\limits_{t=l_{1b}}^{l_{2e}}\left(\frac{1}{\sqrt{N}}\sum\limits_{i=1}^{N}(K_{i,t}-AK_{i, L_1+L_2})\right)^2}$$
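The ranking-and-demeaning step is the core of the rank test; a sketch (my illustration; ties and missing returns are ignored here):

```python
def ranks(values):
    """Rank each value, 1 = smallest (no tie handling in this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        out[idx] = rank
    return out

def excess_ranks(ar_series):
    """Demean the ranks by the expected average rank 0.5 + (L1 + L2)/2."""
    expected = 0.5 + len(ar_series) / 2
    return [r - expected for r in ranks(ar_series)]
```

The cross-sectional averaging of these excess ranks and the scaling by the standard deviation then follow the formulas above.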
When analyzing multiday event periods, Campbell and Wasley (1993) extend the RANK test by considering the sum of the mean excess ranks for the event window as follows:
$$t_{CUM-RANK}=\frac{\sum\limits_{\tau=t}^{t+L_2-1}\overline{K}_\tau}{\sqrt{L_2\,\hat{\sigma}^2(\overline{K}_{\tau})}}$$
In this equation, $t$ is the starting date of the event period and $L_2$ is the number of days in the event window as before. $\overline{K}_\tau$ is the mean excess rank on day $\tau$, defined as $\overline{K}_\tau=\frac{1}{N}\sum\limits_{i=1}^{N}(K_{i,\tau}-AK_i)$, where $K_{i,\tau}$ is the rank of the abnormal return of firm $i$ on period $\tau$ and $AK_i$ is the average rank of $i$ as defined in the RANK-test. Finally, $\hat{\sigma}^2(\overline{K}_{\tau})$ is defined as the variance used in the RANK-test.
The GRANK test squeezes the whole event window into one observation, the so-called 'cumulative event day'. Thus, the demeaned standardized abnormal ranks of the generalized abnormal returns read as
below. For the definition of $L_1$, see the RANK test.
$$K_{i, t}=\frac{rank(GSAR_{i, t})}{L_1+1}-0.5$$
The generalized rank t-statistic is then defined as:
$$t_{GRANK}=Z\sqrt{\frac{T-2}{T-1-Z^2}}, \qquad Z=\frac{\overline{K}_0}{\sigma_{\overline{K}}}$$
where $\overline{K}_0$ is the mean demeaned rank on the cumulative event day, $T$ is the number of observations, and
$$\sigma_{\overline{K}}=\sqrt{\frac{1}{L_1}\sum\limits_{t \in CW}\frac{n_t}{n}\overline{K}_t^2}$$
with CW representing the combined window consisting of the estimation window and the cumulative event day, $n_t$ the number of non-missing returns on day $t$, and $n$ the number of firms.
Under the Null Hypothesis of no abnormal returns, the number of stocks with positive abnormal cumulative returns ($CAR$) is expected to be in line with the fraction ($\hat{p}_{EW}^{+}$) of positive
$CAR$ from the estimation period. When the number of positive $CAR$ is significantly higher than the number expected from the estimated fraction, it is suggested to reject the Null Hypothesis.
The fraction $\hat{p}_{EW}^{+}$ is estimated as $\hat{p}_{EW}^{+}=\frac{1}{N}\sum\limits_{i=1}^{N}\frac{1}{T_i}\sum\limits_{t=1}^{T_i}\varphi_{i,t}$, where $\varphi_{i,t}$ is $1$ if the sign is
positive and $0$ otherwise.
The Generalized sign test statistic is
$$t_{GSIGN}=\frac{W - N\hat{p}_{EW}^{+}}{\sqrt{N\hat{p}_{EW}^{+}(1-\hat{p}_{EW}^{+})}}$$
Where $W$ is the number of stocks with positive $CAR$ during the event period.
Comment: The GSIGN test is based on the traditional SIGN test where the null hypothesis assumes a binomial distribution with parameter $p=0.5$ for the sign of the $N$ cumulative abnormal returns.
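The comparison described above ($W$ positive-CAR firms against the share $\hat{p}_{EW}^{+}$ expected from the estimation period) is a one-line normal approximation to the binomial; a sketch:

```python
import math

def t_gsign(w, n, p_hat):
    """Generalized sign statistic: W positive-CAR firms vs. n * p_hat expected,
    where p_hat is the estimation-period fraction of positive ARs."""
    return (w - n * p_hat) / math.sqrt(n * p_hat * (1 - p_hat))
```

Setting `p_hat = 0.5` recovers the traditional SIGN test mentioned in the comment.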
References and further readings
Boehmer, E., Musumeci, J. and Poulsen, A. B. 1991. 'Event-study methodology under conditions of event-induced variance'. Journal of Financial Economics, 30(2): 253-272.
Campbell, C. J. and Wasley, C. E. 1993. 'Measuring security performance using daily NASDAQ returns'. Journal of Financial Economics, 33(1): 73-92.
Campbell, J., Lo, A., MacKinlay, A.C. 1997. 'The econometrics of financial markets'. Princeton: Princeton University Press.
Corrado, C. J. and Zivney, T. L. 1992. 'The specification and power of the sign test in event study hypothesis test using daily stock returns'. Journal of Financial and Quantitative Analysis, 27(3):
Cowan, A. R. 1992. 'Nonparametric event study tests'. Review of Quantitative Finance and Accounting, 2: 343-358.
Cowan, A. R. and Sergeant, A. M. A. 1996. 'Trading frequency and event study test specification'. Journal of Banking and Finance, 20(10): 1731-1757.
Fama, E. F. 1976. Foundations of Finance. New York: Basic Books.
Kolari, J. W. and Pynnonen, S. 2010. 'Event study testing with cross-sectional correlation of abnormal returns'. Review of Financial Studies, 23(11): 3996-4025.
Maynes, E. and Rumsey, J. 1993. 'Conducting event studies with thinly traded stocks'. Journal of Banking and Finance, 17(1): 145-157.
Patell, J. A. 1976. 'Corporate forecasts of earnings per share and stock price behavior: Empirical test'. Journal of Accounting Research, 14(2): 246-276.
Schipper, K. and Smith, A. 1983. 'Effects of recontracting on shareholder wealth: The case of voluntary spin-offs.' Journal of Financial Economics, 12(4): 437-467.
Wilcoxon, F. (1945). 'Individual comparison by ranking methods'. Biometrics Bulletin, 1(6): 80-83. | {"url":"http://www.eventstudytools.com/significance-tests","timestamp":"2014-04-16T08:07:32Z","content_type":null,"content_length":"55116","record_id":"<urn:uuid:0d48482a-f888-4404-b50a-10dac12ec161>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ecological and Nonludic Uncertainty
The Psychology of Ecological and Nonludic Uncertainty
We make the distinction between "ecological" uncertainty, i.e., the type of uncertainty we witness in the real world, and the "ludic" randomness, the one in games and in laboratory setups. A series
of experiments should reveal a variety of errors people make while dealing with the perception of unknowns, particularly the nature of non-textbook and high-impact uncertainty. Our experiments are
not aimed at general theorizing. They aim, simply, at uncovering and cataloguing consequential errors that enter real-world decision making. In other words, errors that matter for real-life.
Research program
1) Telescope blindness and decision making [IN PROGRESS]. We don't know what we are talking about when we talk about risks and opportunities
What causes severe mistakes is that, outside the special cases of casinos and lotteries, you almost never face a single probability with a single (and known) payoff. You may face, say, a 5%
probability of an earthquake of magnitude 3 or higher, a 2% probability of one of 4 or higher, etc. The same with wars: you have a risk of different levels of damage, each with a different
probability. "What is the probability of war?" is a meaningless question for risk assessment. We test, for a given class of events, whether agents make the distinction between probability and shortfall.
2) General intuition and between natural and nonnatural domains [IN PROGRESS]. We test whether agents understand the contribution of tail events in all domains / between thin-tailed (Mediocristan)
and fat-tailed random variables (Extremistan). We test if they get the idea of conditional expectation of a random variable given that it exceeded a certain level K, by varying K.
3) Errors of periodicity. Effects from temporal framing of probability, i.e., Confusion between "One in 10 years" and "10% probability."[IN PROGRESS]. Do agents mistake risks "one in thirty years"
for events that only happen after "thirty years"? In other words, mistake independent events for periodic ones. We have observed many professionals who think that exposure is "safe" if limited to
short periods.
4) Skepticism and domain dependence. Are religious people more skeptical and less pattern seeking than nonreligious people? [IN PROGRESS] We test if people who are skeptical in empirical domains
(economic matters) are gullible in the religious domain, and vice-versa.
5) Confusion between norms L1 and L2, Part 1- Expert problem among professionals talking about volatility [COMPLETED]. Latest experiment on fund managers making mistakes in defining volatility.
Abstract: Finance professionals, who are regularly exposed to notions of volatility, seem to confuse mean absolute deviation with standard deviation, causing an underestimation of 25% with
theoretical Gaussian variables. In some "fat tailed" markets the underestimation can be up to 90%. A lack of statistical knowledge does not appear to be the impediment, but rather a difficulty in
translating a nonlinear measure into a real-world application. The mental substitution of the two measures is consequential for decision making and the perception of variability.
Download: We Don't Quite Know What We are Talking About When We Talk About Volatility
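The 25% figure in the abstract follows from a property of the Gaussian: the mean absolute deviation (MAD) equals sigma*sqrt(2/pi), roughly 0.798*sigma, so the standard deviation is about 25% larger than the MAD. A quick simulation (our sketch, not code from the paper) confirms the ratio:

```python
import math
import random

# For a Gaussian, E|X - mu| = sigma * sqrt(2/pi) ~ 0.798 * sigma, so a reader
# who quotes mean absolute deviation (MAD) as "volatility" understates the
# standard deviation (SD) by roughly 25%.
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

mean = sum(xs) / len(xs)
mad = sum(abs(x - mean) for x in xs) / len(xs)
sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))

print(f"MAD/SD = {mad / sd:.3f}   (theory: {math.sqrt(2 / math.pi):.3f})")
print(f"SD/MAD = {sd / mad:.3f}   (the ~25% gap)")
```

With fat-tailed data the SD is inflated by extreme observations while the MAD is much less so, which is why the abstract reports gaps of up to 90% in some markets.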
6) Confusion between norms L1 and L2, Part 2. Test of visual minimization aptitude and comparison between minimum least-square and minimum absolute deviation [IN PROGRESS].
7) Intuitions of volatility/deviations "what is" volatility? [IN PROGRESS] We supply people with a variety of graphs of equal volatility and check if they tend to call "volatility" some classes of
8) Ecological uncertainty and academic education [IN PROGRESS] | {"url":"http://www.decisionresearchlab.com/mission.html","timestamp":"2014-04-21T02:09:17Z","content_type":null,"content_length":"5850","record_id":"<urn:uuid:264c55f8-d0b8-478b-9b58-43175f23dea7>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recursive algorithm for integration of complex functions
Integrals of the form $\int_{0}^{\pi} d\theta \sin\theta f(r(\theta)) j_{\ell_{1}} (a r(\theta))y_{\ell_{2}}(br(\theta))P_{\ell_{3}} ^{m} (\cos(\theta))P_{\ell_{4}} ^{m}(\cos(\theta))$, where $f$ is a $C^{\infty}$ complex function, $\ell_{i}$ $(i=1,\ldots,4)$ integers, $a$, $b$ complex constants, $j_{\ell_{1}}$ the spherical Bessel function (of the first kind), $y_{\ell_{2}}$ the spherical Neumann function (spherical Bessel function of the second kind), $P_{\ell} ^{m}(x)$ the Legendre function and $r$ a real function with $r(0) = r(\pi)$, appear in various boundary value problems. They are usually solved using numerical integration. However, there are cases in which the function to be integrated oscillates rapidly while the result is zero. Is there any analytic formula or recursive algorithm that does not require numerical handling?
EDIT: The function $r$ is an arbitrary continuous function, but is not differentiable at a finite number of discrete points (finitely many critical points). I need a method (if there is any) that is generally valid and can produce results accurate to within any arbitrarily small interval.
Have you tried stationary phase methods? – Alex R. Feb 3 '11 at 1:19
Residue theorem is always a good candidate (your function is $C^\infty$, so has a Fourier series, so...) – Igor Rivin Feb 3 '11 at 3:15
Results 1 - 10 of 11
- Journal of Machine Learning Research , 2001
"... Many discovery problems, e.g., subgroup or association rule discovery, can naturally be cast as n-best hypotheses problems where the goal is to find the n hypotheses from a given hypothesis space that score best according to a certain utility function. We present a sampling algorithm that solves this ..."
Cited by 24 (4 self)
Many discovery problems, e.g., subgroup or association rule discovery, can naturally be cast as n-best hypotheses problems where the goal is to find the n hypotheses from a given hypothesis space that score best according to a certain utility function. We present a sampling algorithm that solves this problem by issuing a small number of database queries while guaranteeing precise bounds on confidence and quality of solutions. Known sampling approaches have treated single hypothesis selection problems, assuming that the utility be the average (over the examples) of some function, which is not the case for many frequently used utility functions. We show that our algorithm works for all utilities that can be estimated with bounded error. We provide these error bounds and resulting worst-case sample bounds for some of the most frequently used utilities, and prove that there is no sampling algorithm for a popular class of utility functions that cannot be estimated with bounded error. The algorithm is sequential in the sense that it starts to return (or discard) hypotheses that already seem to be particularly good (or bad) after a few examples. Thus, the algorithm is almost always faster than its worst-case bounds.
, 2000
"... . In the last decade, one of the research topics that has received a great deal of attention from the machine learning and computational learning communities has been the so called boosting
techniques. In this paper, we further explore this topic by proposing a new boosting algorithm that mends some ..."
Cited by 19 (3 self)
. In the last decade, one of the research topics that has received a great deal of attention from the machine learning and computational learning communities has been the so called boosting
techniques. In this paper, we further explore this topic by proposing a new boosting algorithm that mends some of the problems that have been detected in the, so far most successful boosting
algorithm, AdaBoost due to Freund and Schapire [FS97]. These problems are: (1) AdaBoost cannot be used in the boosting by filtering framework, and (2) AdaBoost does not seem to be noise resistant. In
order to solve them, we propose a new boosting algorithm MadaBoost by modifying the weighting system of AdaBoost. We first prove that one version of MadaBoost is in fact a boosting algorithm. Second,
we show how our algorithm can be used and analyze its performance in detail. Finally, we show that our new boosting algorithm can be cast in the statistical query learning model [Kea93] and thus,
it is robust to ra...
- In Proceedings of the Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining , 2000
"... In this paper we present an experimental evaluation of a boosting based learning system and show that it can be run efficiently over a large dataset. The system uses as base learner decision stumps, single-attribute decision trees with only two terminal nodes. To select the best decision stump at each it ..."
Cited by 11 (5 self)
In this paper we present an experimental evaluation of a boosting based learning system and show that it can be run efficiently over a large dataset. The system uses as base learner decision stumps, single-attribute decision trees with only two terminal nodes. To select the best decision stump at each iteration we use an adaptive sampling method. As a boosting algorithm, we use a modification of
AdaBoost that is suitable to be combined with a base learner that does not use all the dataset. We provide experimental evidence that our method is as accurate as the equivalent algorithm that uses
all the dataset but much faster.
- In Proceedings of the International Conference on Knowledge Discovery and Data Mining , 2000
"... Many discovery problems, e.g., subgroup or association rule discovery, can naturally be cast as n-best hypothesis problems where the goal is to find the n hypotheses from a given hypothesis space that score best according to a given utility function. We present a sampling algorithm that solves this pr ..."
Cited by 8 (1 self)
Many discovery problems, e.g., subgroup or association rule discovery, can naturally be cast as n-best hypothesis problems where the goal is to find the n hypotheses from a given hypothesis space that score best according to a given utility function. We present a sampling algorithm that solves this problem by issuing a small number of database queries while guaranteeing precise bounds on confidence
and quality of solutions. Known sampling algorithms assume that the utility be the average (over the examples) of some function, which is not the case for many frequently used utility functions. We
show that our algorithm works for all utilities that can be estimated with bounded error. We provide such error bounds and resulting worst-case sample bounds for some of the most frequently used
utilities, and prove that there is no sampling algorithm for another popular class of utility functions. The algorithm is sequential in the sense that it starts to return (or discard) hypotheses that
, 2001
"... Sequential sampling algorithms have recently attracted interest as a way to design scalable algorithms for Data mining and KDD processes. In this paper, we identify an elementary sequential
sampling task (estimation from examples), from which one can derive many other tasks appearing in practice. We ..."
Cited by 6 (0 self)
Sequential sampling algorithms have recently attracted interest as a way to design scalable algorithms for Data mining and KDD processes. In this paper, we identify an elementary sequential sampling
task (estimation from examples), from which one can derive many other tasks appearing in practice. We present a generic algorithm to solve this task and an analysis of its correctness and running
time that is simpler and more intuitive than those existing in the literature. For two specific tasks, frequency and advantage estimation, we derive lower bounds on running time in addition to the
general upper bounds.
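For intuition about the "worst-case sample bounds" these abstracts refer to: the standard Hoeffding-style bound says that estimating a frequency in [0, 1] to additive error epsilon with confidence 1 − delta needs n ≥ ln(2/delta)/(2·epsilon²) examples. A minimal sketch (a generic textbook bound, not code from any of the papers):

```python
import math

# Worst-case (non-sequential) Hoeffding sample bound for estimating a
# frequency in [0, 1] to additive error epsilon with confidence 1 - delta.
def hoeffding_samples(epsilon: float, delta: float) -> int:
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

print(hoeffding_samples(0.10, 0.05))  # coarse estimate: 185 examples
print(hoeffding_samples(0.01, 0.05))  # fine estimate: 18445 examples
```

Sequential algorithms like the ones above improve on this in practice by stopping early once a hypothesis already looks clearly good or clearly bad.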
, 1999
"... . Machine learning has been one of the important subjects of AI that is motivated by many real world applications. In theoretical computer science, researchers also have introduced mathematical
frameworks for investigating machine learning, and in these frameworks, many interesting results have been ..."
Cited by 5 (3 self)
. Machine learning has been one of the important subjects of AI that is motivated by many real world applications. In theoretical computer science, researchers also have introduced mathematical
frameworks for investigating machine learning, and in these frameworks, many interesting results have been obtained. Now we are proceeding to a new stage to study how to apply these fruitful
theoretical results to real problems. We point out in this paper that "adaptivity" is one of the important issues when we consider applications of learning techniques, and we propose one learning algorithm with this feature. 1 Introduction Discovery science 1 is a new area of computer science that aims at (i) developing efficient computational methods which enable automatic discoveries of scientific knowledge and decision making rules and (ii) understanding all the issues concerned with this goal. Of course, discovery science involves many areas, from practical to theoretical, of
computer science. For exampl...
- DATA MINING AND KNOWLEDGE DISCOVERY , 2003
"... Clustering is a widely used knowledge discovery technique. It helps uncovering structures in data that were not previously known. The clustering of large data sets has received a lot of
attention in recent years; however, clustering is still a challenging task since many published algorithms fail ..."
Cited by 4 (1 self)
Clustering is a widely used knowledge discovery technique. It helps uncovering structures in data that were not previously known. The clustering of large data sets has received a lot of attention in
recent years; however, clustering is still a challenging task since many published algorithms fail to do well in scaling with the size of the data set and the number of dimensions that describe the points, or in finding arbitrary shapes of clusters, or dealing effectively with the presence of noise. In this paper, we present a new clustering algorithm, based on self-similarity properties of the data sets. Self-similarity is the property of being invariant with respect to the scale used to look at the data set. While fractals are self-similar at every scale used to look at them, many data sets exhibit self-similarity over a range of scales. Self-similarity can be measured using the fractal dimension. The new algorithm, which we call Fractal Clustering (FC), places points incrementally in
the cluster for which the change in the fractal dimension after adding the point is the least. This is a very natural way of clustering points, since points in the same cluster have a great degree of
self-similarity among them (and much less self-similarity with respect to points in other clusters). FC requires one scan of the data, is suspendable at will, providing the best answer possible at
that point, and is incremental. We show via experiments that FC effectively deals with large data sets, high-dimensionality and noise and is capable of recognizing clusters of arbitrary shape.
"... As organizations accumulate data over time, the problem of tracking how patterns evolve becomes important. In this paper, we present an algorithm to track the evolution of cluster models in a
stream of data. Our algorithm is based on the application of bounds derived using Chernoff's inequality ..."
Cited by 2 (1 self)
As organizations accumulate data over time, the problem of tracking how patterns evolve becomes important. In this paper, we present an algorithm to track the evolution of cluster models in a stream
of data. Our algorithm is based on the application of bounds derived using Chernoff's inequality and makes use of a clustering algorithm that was previously developed by us, namely Fractal Clustering, which uses self-similarity as the property to group points together. Experiments show that our tracking algorithm is efficient and effective in finding changes in the patterns.
"... Self-similarity is the property of being invariant with respect to the scale used to look at the data set. While fractals are self-similar at every scale used to look at them, many data sets
exhibit self-similarity over a range of scales. Self-similarity can be measured using the fractal dimension. ..."
Cited by 1 (0 self)
Self-similarity is the property of being invariant with respect to the scale used to look at the data set. While fractals are self-similar at every scale used to look at them, many data sets exhibit
self-similarity over a range of scales. Self-similarity can be measured using the fractal dimension. Fractal dimension is an important characteristic of many complex systems and can serve as a
powerful representation technique. In this chapter, we present a new clustering algorithm, based on self-similarity properties of the data sets, and also its applications to other fields in data
mining, such as projected clustering and trend analysis. Clustering is a widely used knowledge discovery technique. It helps uncovering structures in data that were not previously known. The
clustering of large data sets has received a lot of attention in recent years; however, clustering is still a challenging task since many published algorithms fail to do well in scaling with the size of the data set and the number of dimensions that describe the points, or in finding arbitrary shapes of clusters, or dealing effectively with the presence of noise. The new algorithm, which we call Fractal Clustering (FC), places points incrementally in the cluster for which the change in the fractal dimension after adding the point is the least. This is a very natural way of clustering points, since points in the same cluster have a great degree of self-similarity among them (and much less self-similarity with respect to points in other clusters). FC requires one scan of the data,
is suspendable at will, providing the best answer possible at that point, and is incremental. We show via experiments that FC effectively deals with large data sets, high-dimensionality and noise and
is capable of recognizing clusters of arbitrary shape. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1846688","timestamp":"2014-04-17T15:42:02Z","content_type":null,"content_length":"39086","record_id":"<urn:uuid:7bc0eb88-e766-498e-8a41-54f271500243>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00542-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why are houses wired with parallel circuits?
Answers (1)
Houses are generally wired in parallel rather than series circuits for a couple of reasons. Think of the series circuits on old Christmas tree lights. If one light bulb doesn't work, none of the lights will come on, because all the electricity has to flow through each light bulb in sequence. A broken filament in one bulb creates an open circuit and the electricity can't flow.
Another problem with series wiring is that as we extend the circuit, adding more lights, each light we add makes the other lights dimmer. That's because we're increasing the total linear resistance
in the circuit. The voltage is fixed, so as the resistance increases, the current flow must decrease.
Neither of these is a desirable situation and, therefore, our houses are wired in parallel. Electricity has several paths it can follow from the energy source to ground. Even with several light fixtures controlled by one switch, the light fixtures are in parallel. If one light bulb burns
out, electricity still flows through the other bulbs.
The other feature of parallel circuits is that adding another light or resistor of any kind will not cause the others that are already working to get dimmer or draw less current. If you think of a
simple circuit with a 60-watt light bulb, a 120-volt power supply seeing a 60-watt light bulb will have a resultant current of 1/2 amp (I=P/V=60/120=1/2). Any place in this circuit where we measure the current, we have 1/2 amp flowing. If we add a second 60-watt light bulb in parallel, the circuit has a second branch. In each leg of the branch, the current flow would be 1/2 amp. Before the
branch splits, and after it comes back together, the current would be 1 amp. However, when the second light is added, the first light still sees the 1/2 amp current flow and does not change in
brightness. If this seems like magic to you, you'll just have to accept that this is the way electricity works. Incidentally, you can extend this picture. If you put a third branch in with another
60-watt light bulb, it too, would draw 1/2 amp, and the total current draw in the common parts of the circuit would be 1 1/2 amps. There are three paths, each carrying 1/2 amp.
You can see that if you put in thirty 60-watt light bulbs, you are going to draw 15 amps (I=P/V=30x60/120=15). Fifteen amps flowing through a conventional household wire is close to the point where you'll blow the fuse or trip the breaker. This is the threshold of an overload situation. A general design limitation is to restrict a 15-amp circuit to 80% of its rated capacity. This limits the circuit to 12 amps, maximum.
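The arithmetic in the answer is easy to check; here is a small sketch (our annotation, using the answer's own values):

```python
# N identical 60 W bulbs in parallel on a 120 V supply: each branch draws
# I = P / V = 0.5 A, and branch currents simply add in the common wiring.
VOLTS = 120.0
WATTS = 60.0

def total_current(n_bulbs: int) -> float:
    """Current (amps) in the common part of the circuit."""
    return n_bulbs * (WATTS / VOLTS)

print(total_current(1))   # 0.5  A per bulb
print(total_current(30))  # 15.0 A -- the full rating of a 15 A circuit
print(total_current(24))  # 12.0 A -- the 80% design limit of a 15 A circuit
```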
Examples and importance of Embedding (and Non-Embedding) Theorems
An embedding is an injective map into a universal, simpler model object. Many embedding theorems are without obstruction, in the sense that every object which you wish to embed can be embedded.
Examples of such theorems are the Yoneda lemma, the algebraic closure of fields, and the Nash embedding theorem for Riemannian manifolds.
I'm interested in embedding theorems with obstruction. Do you have examples of theorems that give an obstruction to embedding? In the case where there is an obstruction, would you consider the
obstruction to be local or intrinsic? Most embedding problems are possible locally, but there is often a local-global obstruction.
The example that led me to this question is Kodaira's embedding theorem, which gives an obstruction for a complex manifold to be a submanifold of complex projective space. Here the obstruction is that
the manifold must carry a positive line bundle. Positivity of curvature is a local criterion.
PS. Sorry, but I really don't know how to tag this question.
I would say that although the word "embedding" has a fairly universal meaning of being an injective map, the specific reasons why you want an embedding can vary according to the subject. There is clearly a (not necessarily correct) philosophy that if you take an object you want to understand better and embed it somehow into a bigger but simpler (even canonical) object, then this will make it easier to analyze the original object. But I believe the specific criteria for the bigger object, or the conditions you want for the embedding, depend quite strongly on the specific circumstances. – Deane Yang Nov 13 '10 at 16:29
Hi Deane, in instances where we cannot embed into the canonical object, what does the obstruction to embedding teach us? – Colin Tan Nov 14 '10 at 2:19
@Colin, at that level of generality, nothing. – Mariano Suárez-Alvarez Nov 15 '10 at 6:21
I don't view the target of embedding and non-embedding theorems as being a canonical object (which usually has a universal property that implies an embedding theorem) but merely a "simplest" object. I also think embedding theorems were more popular in the past, when people were less comfortable working with things like manifolds and algebraic varieties unless they were embedded in a more familiar space. Today, we know that the embedded object is often a lot messier than the intrinsic one. Embeddings of functional spaces (see Serre below) are, however, crucial to nonlinear PDE's. – Deane Yang Nov 15 '10 at 13:59
community wiki? – Dylan Wilson Nov 15 '10 at 20:07
I'm not sure whether you look for such an answer, because it comes from analysis. Analysts use various functional spaces, especially the Sobolev spaces. $W^{s,p}(\Omega;\mathbb R)$ is,
roughly speaking, the set of functions with $s$ derivatives in $L^p(\Omega)$ (but $s\ge0$ need not be an integer).
Sobolev embedding. If $\Omega$ is an open subset with a smooth boundary, and if $\frac1q=\frac1p-\frac{s}{n}$ with $1\le p< q<\infty$, then $W^{s,p}(\Omega;\mathbb R)$ embeds into $L^q
(\Omega)$. If instead $sp>n$, then $W^{s,p}(\Omega;\mathbb R)$ embeds into ${\mathcal C}^\alpha(\bar\Omega)$ where $\alpha:=s-\frac{n}{p}$, unless this exponent is an integer.
When the target $\mathbb R$ is replaced by a manifold, the situation may not be so nice. Embedding theorems are related to norm inequalities, which are usually proved first for ${\mathcal C}^\infty$-fields, then extended by means of density of ${\mathcal C}^\infty$ in $W^{s,p}$.
Obstruction (Bethuel 1991). Assume that $p< n$, and let $N$ be a compact manifold of dimension $k$. Then ${\mathcal C}^\infty(\Omega,N)$ is dense in $W^{1,p}(\Omega;N)$ if and only if
$\pi_{[p]}(N)=0$, where $[p]$ is the largest integer $\le p$.
The consequence of this is that in some situations, there is a discrepancy between $W^{s,p}$ and the closure of ${\mathcal C}^\infty$ under the $W^{s,p}$-norm.
Does $\Omega$ refer to some subset of $N$? May I know heuristically why a homotopy group features in the obstruction, and why the obstruction involves only $N$ and not $\Omega$? – Colin Tan Nov 13 '10 at 13:59
@Colin. The obstruction is of local nature. Therefore, as long as $\Omega$ is a smooth manifold, only the topology of $N$ matters. A typical example, related to the mathematics of liquid crystals, is that of $N=S^2$, the unit sphere in $\mathbb R^3$, whereas $\Omega$ is an open subset of $\mathbb R^3$. Then a degree can be defined over $W^{1,p}$ and is non-trivial. It would be trivial if ${\mathcal C}^\infty$ had been dense. – Denis Serre Nov 13 '10 at 14:41
ICSE Board Class 10 Math Sample Papers 2007
sample paper-2007
Subject – Mathematics
Class – 10th
SECTION – A
Answer all questions from this section.
(a) If x/a = y/b = z/c, prove that each ratio is equal to [(2x³ − 3y³ + 5z³)/(2a³ − 3b³ + 5c³)]^(1/3).
(b) If sin 54° cosec (90° − θ) = 1, find the value of θ, 0° < θ < 90°.
(c) Construct a rhombus ABCD of side 4.5 cm and ∠BAD = 60°, by using ruler and compasses only. Draw the lines of symmetry. Hence prove that the diagonals are perpendicular to each other.
(a) Find the amount and compound interest on Rs14000 for 2 years at 5%.
(b) Solve the quadratic equation: x² − 16 = 0
(c) Two chords AB and CD of a circle intersect internally at a point P. If AB = 12 cm, AP = 2 cm, PC = 5 cm, find PD.
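A quick numeric check of the intersecting-chords question above (our annotation, not part of the paper): when two chords meet at P, the intersecting-chords relation gives AP · PB = PC · PD.

```python
# Intersecting chords: AP * PB = PC * PD (power of the point P).
AB, AP, PC = 12.0, 2.0, 5.0
PB = AB - AP            # 12 - 2 = 10 cm
PD = AP * PB / PC       # 2 * 10 / 5 = 4 cm
print(PD)  # 4.0
```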
(a) If (a² + c²)(b² + d²) = (ab + cd)², prove that a, b, c, d are in proportion.
(b) If x ∈ R, solve: 2x − 3 ≥ x + (1 − x)/3 > (2/5)x. Also represent the solution on the number line.
(c) A shopkeeper buys a mobile phone at a discount of 10% from the wholesaler, the printed price of the mobile phone being Rs4600 and the rate of sales tax is 5%. The shopkeeper sells it to buyer at
the printed price and charges tax at the same rate. Find :
(i) the price at which the mobile phone can be bought.
(ii) the VAT paid by the shopkeeper.
4. (a) Find the mean, median and mode of the following distribution :
3, 8, 10, 8, 10, 7, 6, 10, 6, 13, 10.
(b) List the elements of the solution set of the inequation −3 < x − 2 ≤ 9 − 2x, x ∈ N.
(c) Without using set square or protractor construct a rhombus ABCD with sides of length 4 cm and one diagonal AC of length 5 cm. Draw its lines of symmetry. Also mark its point of symmetry.
SECTION – B
Answer any four questions from this section.
(a) Let A = {1, 2, 3} and a relation on A be R = {(1, 1), (1, 2), (2, 2), (2, 3), (3, 1), (3, 3)}. Prove that the relation R is: (i) reflexive, (ii) symmetric, (iii) transitive.
(b) Solve the equation : 1/(x + 1) + 2/(x + 2) = 4/(x + 4).
(c) Eliminate θ between the equations: x = a cos θ + b sin θ, y = a sin θ − b cos θ.
(a) Calculate the ratio in which the line joining A(6, 5) and B(4, −3) is divided by the line y = 2.
(b) Two persons standing on the same side of a tower in a straight line with it measure the angles of elevation of the top of the tower as 30° and 60° respectively. If the height of the tower is 70 m, find the distance between the two persons.
(c) A line passes through the point P(3, 2) and cuts off positive intercepts on the x-axis and the y-axis in the ratio 3 : 4. Find the equation of the line.
(a) AB is a diameter of a circle with centre O. CD is a chord equal to the radius of the circle. AC and BD produced meet at P. Prove that ∠APB = 60°.
(b) A and B are the points (−2, 0) and (0, 5). Find the co-ordinates of two points C and D such that ABCD is a square and calculate the length of the diagonal AC.
(c) Plot the points A(2, −3), B(−1, 2) and C(0, −2) on the graph paper. Draw the triangle formed by reflecting these points in the x-axis. Are the two triangles congruent?
(a) A man invests Rs3960 in shares of a company which pays 15% dividend at a time when a Rs25 share costs Rs33. Find :
(i) the number of shares he bought
(ii) the annual income from his shares
(iii) the rate of interest which he gets on his investment.
(b) If (a³ + 3ab²)/(3a²b + b³) = (x³ + 3xy²)/(3x²y + y³), prove that x/a = y/b.
(c) From the top of a cliff 90 m high, the angles of depression of the top and bottom of a tower are observed to be 30° and 60° respectively. Find the height of the tower.
(a) Shabana has a cumulative time deposit account in State Bank of India. She deposits Rs500 per month for a period of 4 years. If at the time of maturity she gets Rs28410, find the rate of (simple)
(b) If x = a sin θ, y = b tan θ, prove that: a²/x² − b²/y² = 1.
(c) Mr Sharma has 60 shares of nominal value Rs100 and he decides to sell them when they are at a premium of 60%. He invests the proceeds in shares of nominal value of Rs50 quoted at 4% discount,
paying 18% dividend annually. Calculate :
(i) the sale proceeds.
(ii) the number of shares he buys.
(iii) his annual dividend from these shares.
10. (a) Draw a circle of radius 3 cm and inscribe a square in it. Measure and record the length of one side of the square drawn.
(b) A spherical shell of iron whose internal radius is 9 cm is melted into a conical solid of 28 cm in diameter and 4 3/7 cm height. Find the inner diameter of the shell. [Take π = 22/7]
(c) Draw a line segment AB of length 12 cm. Mark M, the mid-point of AB. Draw and describe the locus of a point which is :
(i) at a distance of 3 cm from AB.
(ii) at a distance of 5 cm from the point M.
How to test if one set of (unique) integers belongs to another set, efficiently?
I'm writing a program where I'm having to test if one set of unique integers A belongs to another set of unique numbers B. However, this operation might be done several hundred times per second, so
I'm looking for an efficient algorithm to do it.
For example, if A = [1 2 3] and B = [1 2 3 4], it is true, but if B = [1 2 4 5 6], it's false.
I'm not sure how efficient it is to just sort and compare, so I'm wondering if there are any more efficient algorithms.
One idea I came up with was to give each number n its corresponding n-th prime: that is, 1 = 2, 2 = 3, 3 = 5, 4 = 7, etc. Then I could calculate the product of A, and if that product is a factor of the similar product of B, we could say that A is a subset of B with certainty. For example, if A = [1 2 3], B = [1 2 3 4] the primes are [2 3 5] and [2 3 5 7] and the products 2*3*5=30 and 2*3*5*7=210. Since 210%30=0, A is a subset of B. I'm expecting the largest integer to be a couple of million at most, so I think it's doable.
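The prime-encoding idea can be sketched in a few lines (our illustration, with hypothetical helper names; practical only for very small sets, since the products grow explosively):

```python
from math import prod

def first_primes(k):
    """Trial-division generator of the first k primes (fine for small k)."""
    primes, n = [], 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

PRIMES = first_primes(10)  # enough to encode the elements 1..10

def encode(s):
    """Map each element n to the n-th prime and multiply."""
    return prod(PRIMES[x - 1] for x in s)

A, B1, B2 = [1, 2, 3], [1, 2, 3, 4], [1, 2, 4, 5, 6]
print(encode(B1) % encode(A) == 0)  # True  -> A is a subset of B1
print(encode(B2) % encode(A) == 0)  # False -> A is not a subset of B2
```

With elements up to a couple of million, the products would run to millions of bits, so a hash-based membership test remains the practical approach.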
Are there any more efficient algorithms?
What do you mean by "their corresponding prime"? Also, how big are these sets expected to be? (And how are the sets represented when you get them?) – jalf Aug 1 '11 at 4:10
@jalf I added an example to remove the ambiguousness, thanks for pointing it out. I'd assume the largest integer would be in the order of hundreds of thousands, a million or two at best. And the sets... in the order of a few thousand at largest, I'd guess. They can be represented in any way, array, tree etc. – tsiki Aug 1 '11 at 4:23
Are the sets limited somehow (like, your sets will contain only positive integers between 1 and 10), or unlimited (0..MAXINT, or even more if you use longs or arbitrarily big (limited to your pc's memory) if you use big ints)? If they are not very constrained, you can simply forget about your prime numbers, the algorithm will be painfully slow, and unless you use some "bigint" you will get integer overflows and it won't work. – Bruno Reis Aug 1 '11 at 4:24
Your edit tells us the range of the elements of your set. What about the cardinality (size) of the sets? Do you have any information? – Bruno Reis Aug 1 '11 at 4:29
@Bruno Reis I think the size of the sets will be in the order of thousands, at largest. – tsiki Aug 1 '11 at 4:33
4 Answers
The asymptotically fastest approach would be to just put each set in a hash table and query each element, which is O(N) time. You cannot do better (since it will take that much time to read
the data).
Most set datastructures already support expected and/or amortized O(1) query time. Some languages even support this operation. For example in python, you could just do
A < B
Of course the picture changes drastically depending on what you mean by "this operation is repeated". If you have the ability to do precalculations on the data as you add it to the set (which presumably you do), this will allow you to subsume the minimal O(N) time into other operations such as constructing the set. But we can't advise without knowing much more about the problem.
Assuming you had full control of the set datastructure, your approach to keep a running product (whenever you add an element, you do a single O(1) multiplication) is a very good idea IF there exists a divisibility test that is faster than O(N)... in fact your solution is really smart, because we can just do a single ALU division and hope we're within float tolerance. Do note however this will only allow you roughly a speedup factor of 20x max I think, since 21! > 2^64. There might be tricks to play with congruence-modulo-an-integer, but I can't think of any. I have a slight hunch though that there is no divisibility test that is faster than O(#primes), though I'd like to be proved wrong!
If you are doing this repeatedly on duplicates, you may benefit from caching depending on what exactly you are doing; give each set a unique ID (though since this makes updates hard, you may ironically wish to do something exactly like your scheme to make fingerprints, but mod max_int_size with collision detection). To manage memory, you can pin extremely expensive set comparisons (e.g. checking if a giant set is part of itself) into the cache, while otherwise using a most-recent policy if you run into memory issues. The nice thing about this is it synergizes with an element-by-element rejection test. That is, you will be throwing out sets quickly if they don't have many overlapping elements, but if they have many overlapping elements the calculations will take a long time, and if you repeat these calculations, caching could come in handy.
The only problem of this solution is that either the hash table must be big enough to guarantee that almost always each bucket will contain at most 1 element (that's why I asked for the probability distribution of the values, if the OP happens to know) therefore using more memory -- it would be necessary to allocate hundreds of such hash tables each second -- or it would not be O(N) in the average case, since there would be many buckets with more than one element. – Bruno Reis Aug 1 '11 at 5:29
I disagree with the "must be big enough to guarantee that almost always each bucket...[etc]"; the implementation of the hash table does not matter. As I mention, you can consider it subsumed into building the set datastructure. The notion of building "hundreds of such hash tables each second" is not a scary one if they're only a few bytes in size, depending on the size of sets one is working with. – ninjagecko Aug 1 '11 at 5:47
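For reference, the built-in set machinery the accepted answer alludes to looks like this in Python (note that `<=` is the subset test, while `<` is the proper-subset test, so `A < B` is false when A equals B):

```python
a = {1, 2, 3}
b1 = {1, 2, 3, 4}
b2 = {1, 2, 4, 5, 6}

print(a <= b1)         # True: every element of a is in b1
print(a <= b2)         # False: 3 is not in b2
print(a.issubset(b1))  # same test, spelled as a method
```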
Let A and B be two sets, and you want to check if A is a subset of B. The first idea that pops into my mind is to sort both sets and then simply check if every element of A is contained in B, as follows:
Let n_A and n_B be the cardinality of A and B, respectively. Let i_A = 1, i_B = 1. Then the following algorithm (that is O(n_A + n_B)) will solve the problem:
// A and B assumed to be sorted
i_A = 1;
i_B = 1;
n_A = size(A);
n_B = size(B);
while (i_A <= n_A) {
    while (A[i_A] > B[i_B]) {
        i_B = i_B + 1;
        if (i_B > n_B) return false;
    }
    if (A[i_A] != B[i_B]) return false;
    i_A = i_A + 1;
    i_B = i_B + 1;
}
return true;
The same thing, but in a more functional, recursive way (some will find the previous easier to understand, others might find this one easier to understand):
// A and B assumed to be sorted
function subset(A, B)
    n_A = size(A)
    n_B = size(B)
    function subset0(i_A, i_B)
        if (i_A > n_A) true
        else if (i_B > n_B) false
        else if (A[i_A] <= B[i_B]) return (A[i_A] == B[i_B]) && subset0(i_A + 1, i_B + 1);
        else return subset0(i_A, i_B + 1);
    subset0(1, 1)
In this last example, notice that subset0 is tail recursive, since if (A[i_A] == B[i_B]) is false then there will be no recursive call; otherwise, if (A[i_A] == B[i_B]) is true, then there's no need to keep this information, since the result of true && subset0(...) is exactly the same as subset0(...). So, any smart compiler will be able to transform this into a loop, avoiding stack overflows or any performance hits caused by function calls.
This will certainly work, but we might be able to optimize it a lot in the average case if you have and provide more information about your sets, such as the probability distribution of
the values in the sets, if you somehow expect the answer to be biased (ie, it will more often be true, or more often be false), etc.
Also, have you already written any code to actually measure its performance? Or are you trying to pre-optimize?
You should start by writing the simplest and most straightforward solution that works, and measure its performance. If it's not already satisfactory, only then you should start trying to
optimize it.
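The same two-pointer scan, as a hypothetical Python translation (0-based indexing instead of the 1-based pseudocode above):

```python
def is_subset_sorted(a, b):
    # a and b assumed to be sorted; runs in O(len(a) + len(b))
    j = 0
    for x in a:
        # advance through b until we reach a value >= x
        while j < len(b) and b[j] < x:
            j += 1
        if j == len(b) or b[j] != x:
            return False
        j += 1
    return True

print(is_subset_sorted([1, 2, 3], [1, 2, 3, 4]))     # True
print(is_subset_sorted([1, 2, 3], [1, 2, 4, 5, 6]))  # False
```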
I'll present an O(m+n) time-per-test algorithm. But first, two notes regarding the problem statement:
Note 1 - Your edits say that set sizes may be a few thousand, and numbers may range up to a million or two. In the following, let m, n denote the sizes of sets A, B and let R denote the size of the largest numbers allowed in sets.
Note 2 - The multiplication method you proposed is quite inefficient. Although it uses O(m+n) multiplies, it is not an O(m+n) method because the product lengths are worse than O(m) and O(n), so it would take more than O(m^2 + n^2) time, which is worse than the O(m ln(m) + n ln(n)) time required for sorting-based methods, which in turn is worse than the O(m+n) time of the following method.
For the presentation below, I suppose that sets A, B can completely change between tests, which you say can occur several hundred times per second. If there are partial changes, and you
know which p elements change in A from one test to next, and which q change in B, then the method can be revised to run in O(p+q) time per test.
Step 0. (Performed one time only, at outset.) Clear an array F, containing R bits or bytes, as you prefer.
Step 1. (Initial step of per-test code.) For i from 0 to n-1, set F[B[i]], where B[i] denotes the i'th element of set B. This is O(n).
Step 2. For i from 0 to m-1, { test F[A[i]]. If it is clear, report that A is not a subset of B, and go to step 4; else continue }. This is O(m).
Step 3. Report that A is a subset of B.
Step 4. (Clear used bits) For i from 0 to n-1, clear F[B[i]]. This is O(n).
The initial step (clearing array F) is O(R) but steps 1-4 amount to O(m+n) time.
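A sketch of steps 0-4 in Python, using a `bytearray` as the flag array F (the function name and the bound R are mine, chosen for illustration):

```python
R = 2_000_000          # step 0: largest value allowed in any set; done once
F = bytearray(R + 1)

def is_subset(a, b):
    for x in b:        # step 1: mark the members of B
        F[x] = 1
    try:
        # steps 2-3: every member of A must be marked
        return all(F[x] for x in a)
    finally:
        for x in b:    # step 4: clear only the entries we set
            F[x] = 0

print(is_subset([1, 2, 3], [1, 2, 3, 4]))     # True
print(is_subset([1, 2, 3], [1, 2, 4, 5, 6]))  # False
```

Because step 4 clears only the entries touched by B, each test after the one-time setup costs O(m+n), as the answer states.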
Given the limit on the size of the integers, if the set of B sets is small and changes seldom, consider representing the B sets as bitsets (bit arrays indexed by integer set member).
This doesn't require sorting, and the test for each element is very fast.
If the A members are sorted and tend to be clustered together, then get another speedup by testing all the elements in one word of the bitset at a time.
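In Python the bitset idea can be mimicked with an arbitrary-precision integer as the bit array (a sketch; a real implementation would use fixed-width machine words, which is where the word-at-a-time speedup comes from):

```python
def to_bitset(values):
    mask = 0
    for v in values:
        mask |= 1 << v   # bit v is set iff v is a member
    return mask

a_mask = to_bitset([1, 2, 3])
b_mask = to_bitset([1, 2, 3, 4])
# A is a subset of B exactly when A has no bit set outside B
print(a_mask & ~b_mask == 0)                                    # True
print(to_bitset([1, 2, 3]) & ~to_bitset([1, 2, 4, 5, 6]) == 0)  # False
```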
Re: [SI-LIST] : Differential Pair Theory
From: Josh Nickel (jgn103@nickel.ece.uiuc.edu)
Date: Wed Nov 17 1999 - 18:42:55 PST
On Mon, 15 Nov 1999, Mike Jenkins wrote:
> Chris,
> I'm sure you will get multiple responses to your question. Here's
> a theoretical view. (Apologies to the math-phobics out there.)
> Looking onto a single-ended line is a one-port network. Differential
> turns this into a two-port. The dif'l voltage is defined typically
> as V(port1) - V(port2) and the common mode as [V(port1) + V(port2)]/2.
> Some simplifying assumptions (which may or may not be true and should
> be checked for each application) are:
> 1) The dif'l pair is symmetric (i.e., port1 and port2 can be
> interchanged and the 2-port looks the same).
> 2) The signal is dif'l only (i.e., V(port1) = - V(port2)). Your
> example would probably fail this assumption badly, as ground
> loops would induce common mode signals.
> Back to the 2-port thing....The input impedance is now a 2x2 matrix
> rather than a single number:
> | V(port1) | | Z11 Z12 | | I(port1) |
> | | = | | * | |
> | V(port2) | | Z21 Z22 | | I(port2) |
> Z12 = Z21 for passive networks like this dif'l line. The dif'l
> impedance is Z11+Z22-2*Z12. If the pair is symmetric, Z11 = Z22,
> so the dif'l impedance is 2(Z11-Z12). On your board, if the two
> traces are separate, Z12 is negligible, so Z11=50 ohms (100 ohms dif'l).
> On your shield-less cable, Z11 and Z12 are large, but Z11-Z12=50 ohms.
> These two structures (PCB and twisted pair) cannot be matched both
> for dif'l and common mode signals.
> Hope that gives you a framework to start with.
> Regards,
> Mike
(*** switch to fixed font mode ***)
I wanted to clarify your last comment; I was not sure whether you
meant "both modes cannot be matched" or "the two structures cannot both be matched."
In the case of two-coupled microstrip lines over a ground plane, such a
termination may be found which matches both modes. If we consider the
above equation, the Z matrix may be diagonalized (under certain conditions
- I'm thinking of microstrip in particular) to yield a modal
characteristic impedance matrix. Then the Ohm's law state equation
above may be transformed to the modal Ohm's law equation :
| V(mode1) |   | Z_char_mode1 0 |   | I(mode1) |
| | = | | * | |
| V(mode2) |   | 0 Z_char_mode2 |   | I(mode2) |
If both lines are terminated to ground with Z_char_mode1, the "even" mode,
then we will match mode 1, meaning that any modal voltage content of an
arbitrary signal incident upon the termination will experience no
reflection. Same holds for mode 2, the "odd" mode.
However, both modes may be matched if we can find a termination that
equals the characteristic impedance matrix (Z_char) from Ohm's state
equation. Such a termination would necessarily be a 3-element network,
realized by the following procedure:
Y_termination = Inverse(Z_char) = | Z22 -Z12 |
| | / (Z11*Z22-Z12*Z21)
| -Z21 Z11 |
Thus, the matching network may be realized by:
terminating line 1 to ground with (Z11*Z22-Z12*Z21)/Z22 ohms
terminating line 2 to ground with (Z11*Z22-Z12*Z21)/Z11 ohms
interconnecting lines 1 and 2 with (Z11*Z22-Z12*Z21)/Z21 ohms
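A numerical sketch of that three-element recipe (the function name and the 50/10-ohm matrix below are made up purely for illustration):

```python
def termination_network(z11, z12, z21, z22):
    # three-element matching network from the characteristic impedance
    # matrix, following the formulas given above
    det = z11 * z22 - z12 * z21
    r1_gnd = det / z22   # line 1 to ground
    r2_gnd = det / z11   # line 2 to ground
    r_12 = det / z21     # interconnecting lines 1 and 2
    return r1_gnd, r2_gnd, r_12

# symmetric coupled pair: Z11 = Z22 = 50 ohms, Z12 = Z21 = 10 ohms
print(termination_network(50.0, 10.0, 10.0, 50.0))  # (48.0, 48.0, 240.0)
```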
Note that the off-diagonal elements of the admittance matrix are negative,
by virtue of simple network theory. They are also equal by reciprocity.
This matching theory may be generalized to an arbitrary number of lines,
n, which are characterized by (n x n) distributed impedance and admittance
matrices or (n x n) characteristic impedance/admittance matrices. For n >
2, however, exact matching obviously becomes somewhat impractical, but if
termination interconnections are restricted to nearest neighbors the
approximation is usually pretty good (especially in microstrip). We've
done some research on this topic, so I felt it necessary to point out the
possibilities of "multimode matching".
Josh G. Nickel
Graduate Research & Teaching Assistant
University of Illinois at Urbana-Champaign
**** To unsubscribe from si-list: send e-mail to majordomo@silab.eng.sun.com. In the BODY of message put: UNSUBSCRIBE si-list, for more help, put HELP. si-list archives are accessible at http://www.qsl.net/wb6tpu/si-list ****
This archive was generated by hypermail 2b29 : Tue Feb 29 2000 - 11:38:58 PST
The Height H (in Ft) Of A Rocket As A Function ... | Chegg.com
the height h (in ft) of a rocket as a function of time t (in s) of a flight is given by the following equation. determine when the rocket is at ground level
h = 40 + 280t - 16t^2
the rocket hits the ground at t?
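Ground level means h(t) = 0, i.e. solving the quadratic -16t^2 + 280t + 40 = 0; a quick numerical check via the quadratic formula (the negative root is discarded as unphysical):

```python
import math

# h(t) = 40 + 280*t - 16*t**2; ground level is h(t) = 0
a, b, c = -16.0, 280.0, 40.0
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1.0, -1.0)]
t_ground = max(roots)   # the positive root, roughly 17.6 s
print(round(t_ground, 2))
```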
Math Test Problem Disagreement
"At least" how many cookies are left if " they (the cookies) all disappear"? If all the cookies are gone? The answer's 0. If the cookies are "all" gone, there aren't any left.
What a stupid question...
Mister C
orbitz... Wait until you take higher math like abstract algebra... You will probably be all over your instructor then...
Agree trick question..
5. Since "at least one" student took "at least" 5 cookies, 5 is the minimum for that student, and there's no requirement that any other student took any.
0 would only be correct if "left on the table" at the end of the problem statement meant "left after they were all taken". However, "left on the table" is stated at the beginning of the problem,
giving greater weight to the probability that that's what is meant by "left on the table". This is the only ambiguity.
Ironically, the reason my teacher is sticking to the answer being 69 is because of the 'must'. He says if at least one person MUST take 5, then that means we are in a situation where everyone else MUST take 4. BUT, unless he provides a solid definition of 'must', then we cannot be sure what definition he is using, and 'must' has a few definitions, which change the result of the answer.
This is ironic because my teacher is hanging onto a grammatical point, and completely ignoring all the other grammatical mistakes in the answer.
I think the question was made with good intentions, but is a horrible question, and a trick.
By the way, this question is taken from a Discrete Math 2 test. I mostly enjoy the class because it seems like a lot of the questions try to make you think in a more logical way than other math classes I have had. The problem comes into play with questions where the teacher is asking a question in a vague way because they are afraid they are going to give away too much of the answer, but pay the price in the end because their question is actually too vague to answer. And then they cannot see the other interpretation because they are so focused on what they said. It happens to all of us at some time or another.
dP munky
how the hell do you get everyone else HAS to eat 4 cookies? im not seeing it?
this is the absolute worst question, any answer (technically) would be correct because you have to assume on everything
I gotta go with 3. It sounds like a trick question, the first part says.
Some cookies are left on a table in a room with 17 unsupervised Math students, and they (the cookies) all disappear.
They all disappear, so whether one student must take 5 or not is irrelevant imo. There are no cookies left because they all disappear.
Your teacher is wrong. "..at least one student..." means only one student is required in the problem.
This is a basic Pigeonhole Principle problem. There is no trick in this question. The only ambiguity is if "how many cookies were left" meant after they were eaten or before they were eaten. Considering this is a discrete math class and I'm sure you guys recently studied the Pigeonhole Principle, it makes the most sense that the question meant before the cookies were eaten.
The correct answer is #1. The question states that the instructor was "absolutely certain that at least one of those students must have taken 5 or more of the cookies", the only way he could be absolutely certain is if there were enough cookies such that in every possible distribution of the cookies, someone would have at least 5 of them. The amount of cookies required is 4*17+1 to be sure of this.
Think about it this way, say there were 10 cookies. Can you be sure that at least one person had 5 cookies? Of course not. 10 ppl could have had 1. Now consider 20. Still no, since 14 ppl could
have had 1 and 3 could have had 2. Continuing on in this pattern you get to 69 and see that no matter how the cookies are arranged, someone MUST have 5.
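That pigeonhole bound can be checked mechanically: with 17 students each capped at 4 cookies, at most 68 cookies can disappear, so 69 is the smallest count that forces someone to take 5 or more:

```python
students, cap = 17, 4
max_without_a_five = students * cap      # 68: everyone takes exactly 4
forcing_count = max_without_a_five + 1   # 69: now someone must exceed 4
print(forcing_count)                     # 69
```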
dP munky
>>"..at least one student..."
i think that says it all, 5 dude, 5
69 doesnt make sense, what if every kid only had 3? how do you know? you dont, NO FURTHER INFORMATION
>> 69 doesnt make sense, what if every kid only had 3? how do you know? you dont, NO FURTHER INFORMATION
If every kid took 3 the instructor could not have been sure. To be sure that one kid took 5 minimum, there must have been 4 for each kid plus one. So even if each kid took only four, to
completely disappear it takes one kid to take five ( or more if someone else takes less ).
dP munky
>>one kid took 5 minimum, there must have been 4 for each kid plus one
this is i guess what i dont get then if one kid took 5 minimum, why do any of the other kids have to take any at all
I am not debating the Pigeon Hole Principle thinking, I am debating the grammar of the question and the possible interpretations.
Yes, we had recently discussed PHP, but I do not think that recent discussion of the PHP is any excuse for asking a badly phrased question. As mathematicians we cannot assume any prior knowledge as to what the question might be asking but need to answer what the question is asking. What a question is asking is based on how it can be interpreted. With a word problem, we are forced to either A) Be as explicit as possible, leaving as little room for other interpretations as possible, or B) Understand that other interpretations are possible and make room to allow these other interpretations to be correct answers.
I do not see how the asker of this question did A; the question is somewhat ambiguous in its meaning.
If the asker does B, they have to understand that the language is full of these ambiguities. Also, in a college environment, some people are not as familiar with the language due to diversity, and these interpretations should be taken into consideration. I understand how the PHP thinking process comes about and what question is being answered, but what if we take the sentence:
at least one student must have taken 5 or more cookies.
If we use one possible definition of the word must: To be determined to; have as a fixed resolve.
Does the question not read:
at least one student had determined to take 5 or more cookies.
Does this meaning not give the answer of 5?
Perhaps I am over-analysing this whole thing. But I just think the wording of the question is too loose and other interpretations need to be acknowledged as correct.
(4 * 17) + 1
Perhaps the students were a bunch of nice people and split the last cookie between them?
Mmm, 1/17th of a cookie :p
>Also, in a college environment, some people are not as familiar with the language due to diversity, and these interpretations should be taken in to consideration
I totally agree. Me saying that #2 didn't make sense was based on my understanding of the word "must" in that context. Of course, someone else with less (or more) experience with English may not have seen it the way I did, and the professor should definitely take that into consideration.
dp munky: read my previous post again, I think my explanation is pretty clear.
punch your teacher in the face
and then POOP all over him (or her)
Elastic fuzzy logic system - Patent # 5751915 - PatentGenius
Elastic fuzzy logic system
Inventor: Werbos
Date Issued: May 12, 1998
Application: 08/115,198
Filed: August 31, 1993
Inventors: Werbos; Paul J. (College Park, MD)
Primary Examiner: Downs; Robert W.
Assistant Examiner: Katbab; A.
Attorney Or Agent: Oblon, Spivak, McClelland, Maier & Neustadt, P.C.
U.S. Class: 706/2; 706/4; 706/52
Field Of Search: 395/3; 395/61; 395/23; 395/24
U.S. Patent Documents: 5179624; 5228113
Other References: Tseng, H. C., "Medical System with Elastic Fuzzy Logic," Fuzzy Systems, Int.'l Conference 1994, pp. 2067-2071.
Handbook of Intelligent Control: Neural, Fuzzy and Adaptive Approaches, Eds. D. White and D. Sofge, Van Nostrand, 1992, Chs. 3, 10, and 13.
Abstract: An artificial intelligence system is provided which makes use of a dual subroutine to adapt weights. Elastic Fuzzy Logic ("ELF") System is provided in which classical neural network learning techniques are combined with fuzzy logic techniques in order to accomplish artificial intelligence tasks such as pattern recognition, expert cloning and trajectory control. The system may be implemented in a computer provided with multiplier means and storage means for storing a vector of weights to be used as multiplier factors in an apparatus for fuzzy control.
Claim: I claim:
1. An apparatus for fuzzy control, comprising:
a membership memory for storing a plurality of membership functions for fuzzy control using at least one input variable and output variable;
a rule memory for storing a plurality of if-then rules in a form
where each .mu..sub.i is one of the plurality of membership functions, where f and g are differentiable, and for which there exists a .gamma..sub.0 which causes R to equal zero, and for which there exists a base value for .gamma..sub.i which causes clause i to effectively be removed from the rule;
an input device for entering data comprising at least one input value;
a processing unit for receiving said at least one input value from said input device, for retrieving at least one membership function of said plurality of membership functions from said membership memory, for retrieving at least one rule of said plurality of if-then rules from said rule memory and for producing an output representing a degree to which the at least one rule applies to said at least one input value using said at least one membership function, said at least one input value, and said at least one rule.
2. The apparatus according to claim 1 wherein functions f and g are chosen such that said rule memory stores and said processing unit processes said plurality of if-then rules according
where at least one of (.gamma..sub.i0 through .gamma..sub.i,m) is not 1.0.
3. The apparatus according to claim 1, further comprising a defuzzification device for defuzzifying the output of the processing unit.
4. The apparatus according to claim 2, further comprising a defuzzification device for defuzzifying the output of the processing unit.
5. The apparatus according to claim 2, wherein said data entered by said input device further comprises corresponding output values for said at least one input value, and
wherein the rule memory comprises:
means for initially setting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m to
corresponding initial values;
means for adapting .gamma..sub.i.sbsb.o through .gamma..sub.i,m using a learning process based on said at least one input value and said corresponding output values.
6. The apparatus according to claim 5, wherein the means for initially setting comprises a means for setting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m to 1.0.
7. The apparatus according to claim 5, wherein the means for initially setting comprises a means for setting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m randomly.
8. The apparatus according to claim 5, wherein the means for adapting comprises means for updating .gamma..sub.i.sbsb.0 through .gamma..sub.i,m using a neural network learning process.
9. The apparatus according to claim 5, wherein the means for adapting comprises means for updating .gamma..sub.i.sbsb.0 through .gamma..sub.i,m using back propagation.
10. The apparatus according to claim 5, wherein the means for adapting comprises means for updating .gamma..sub.i.sbsb.0 through .gamma..sub.i,m using a dual subroutine.
11. A method for operating a fuzzy controller having a membership memory, a multiplier factor memory, a rule memory and a processing unit, comprising the steps of:
storing in said membership memory a plurality of membership functions for fuzzy control based on at least one input variable and an output variable;
storing in said rule memory a plurality of if-then rules in a form
where each .mu..sub.i is one of the plurality of membership functions, where f and g are differentiable, and for which there exists a .gamma..sub.0 which causes R to equal zero, and for which there exists a base value for .gamma..sub.i which causes clause i to effectively be removed from the rule;
inputting data for said at least one input variable to said processing unit with an input device, said data comprising input values;
selecting at least one membership function from the plurality of membership functions;
selecting at least one if-then rule from the plurality of if-then rules; and
outputting an output corresponding to a degree to which the at least one rule applies to the at least one input variable by using the at least one membership function, and the at least
one rule.
12. A computer program product as in claim 11, wherein the second computer code device is configured to select functions f and g such that said plurality of rules are stored in a form
where at least one of (.gamma..sub.i0 through .gamma..sub.i,m) is not 1.0.
13. The method according to claim 11, further comprising the step of defuzzifying the output of the outputting step.
14. A method as in claim 11, the step of storing said plurality of if-then rules comprises choosing functions f and g such that said plurality of rules are stored in a form
where at least one of (.gamma..sub.i0 through .gamma..sub.i,m) is not 1.0.
15. The method according to claim 14, further comprising the step of defuzzifying the output of the outputting step.
16. The method according to claim 11, further comprising the steps of:
inputting corresponding output values for the input values received in the step of inputting data;
setting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m to initial values initially;
adapting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m using a learning process based on said input values and said corresponding output values.
17. The method of claim 16, wherein the step of setting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m initially comprises setting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m to 1.0.
18. The method of claim 16, wherein the step of setting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m initially comprises setting .gamma..sub.i.sbsb.0 through .gamma..sub.i,m randomly.
19. The method according to claim 16, wherein the step of adapting comprises updating .gamma..sub.i.sbsb.0 through .gamma..sub.i,m using a neural network learning process.
20. The method according to claim 16, wherein the step of adapting comprises updating .gamma..sub.i.sbsb.0 through .gamma..sub.i,m using back propagation.
21. The method according to claim 16, wherein the step of adapting comprises updating .gamma..sub.i.sbsb.0 through .gamma..sub.i,m using a dual subroutine.
22. The method according to claim 16, further comprising the steps of:
reporting to a user the adapted values of .gamma..sub.i.sbsb.0 through .gamma..sub.i,m.sbsb.0, and
updating at least one of the plurality of membership functions based on the updated values of .gamma..sub.i.sbsb.0 through .gamma..sub.i,m.
23. In an apparatus for fuzzy control, including a membership memory for storing a plurality of membership functions for fuzzy control using at least one input variable and output variable; a rule memory for storing a plurality of if-then rules; an input device for entering data comprising at least one input value; a processing unit for receiving said at least one input value from said input device, for retrieving at least one membership function of said plurality of membership functions from said membership memory, for retrieving at least one rule of said plurality of if-then rules from said rule memory and for producing an output representing a degree to which the at least one rule applies to said at least one input value using said at least one membership function, said at least one input value, and said at least one rule, the improvement comprising:
storing said if-then rules in a form
where each .mu..sub.i is one of the plurality of membership functions, where f and g are differentiable, and for which there exists a .gamma..sub.0 which causes R to equal zero, and for which there exists a base value for .gamma..sub.i which causes clause i to effectively be removed from the rule.
24. The improvement of claim 23, wherein said if-then rules are stored in a form
25. A computer program product comprising:
a computer storage medium and a computer program code mechanism embedded in the computer storage medium for causing a computer to implement a fuzzy controller having a membership
memory, a multiplier factor memory and a rule memory, the computer program code mechanism comprising:
a first computer code device configured to store in said membership memory a plurality of membership functions for fuzzy control based on at least one input variable and an output
a second computer code device configured to store in said rule memory a plurality of if-then rules in a form
where each .mu..sub.i is one of the plurality of membership functions, where f and g are differentiable, and for which there exists a .gamma..sub.0 which causes R to equal zero, and for which there exists a base value for .gamma..sub.i which causes clause i to effectively be removed from the rule;
a third computer code device configured to input data for said at least one input variable, said data comprising input values;
a fourth computer code device configured to select at least one membership function from the plurality of membership functions;
a fifth computer code device configured to select at least one if-then rule from the plurality of if-then rules; and
a sixth computer code device configured to output an output corresponding to a degree to which the at least one if-then rule applies to the at least one input variable by using the at
least one membership function, and the at least one if-then rule.
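The literal rule equations are elided from the claims above (they were equation images in the original patent text), but one simple form consistent with the conditions stated in claim 1 is R = gamma_0 * prod_i mu_i(x)**gamma_i: it is differentiable, gamma_0 = 0 forces R = 0, and gamma_i = 0 makes clause i drop out since mu**0 = 1. The sketch below is illustrative only, not the patent's literal formula:

```python
def elastic_rule(mu_values, gammas, gamma0):
    # R = gamma0 * product(mu_i ** gamma_i); mu_values taken in (0, 1]
    r = gamma0
    for mu, g in zip(mu_values, gammas):
        r *= mu ** g
    return r

mus = [0.5, 0.8]
print(elastic_rule(mus, [1.0, 1.0], 1.0))  # 0.4: ordinary product rule
print(elastic_rule(mus, [1.0, 1.0], 0.0))  # 0.0: gamma0 zeroes the rule
print(elastic_rule(mus, [0.0, 1.0], 1.0))  # 0.8: clause 1 effectively removed
```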
Description: FIELD AND BACKGROUND OF THE INVENTION
The present invention relates in general to artificial intelligence systems and in particular to a new and useful device which combines artificial neural network ("ANN") learning
techniques with fuzzy logic techniques.
Both neural network learning techniques and fuzzy logic techniques are known. In fact, prior combinations of the two techniques are known as well, as for example U.S. Pat. No. 5,179,624
issued Jan. 12, 1993 to Amano ("Speech recognition apparatus using neural network and fuzzy logic"), which is incorporated herein by reference.
Both techniques attempt to replicate or improve upon a human expert's ability to provide a response to a set of inputs. ANNs extract knowledge from empirical databases used as training
sets, and fuzzy logic usually extracts rules from human experts.
In very brief summary, neural network techniques are based on observation of what an expert does in response to a set of inputs, while fuzzy logic techniques are based on eliciting what
an expert says he will do in response to a set of inputs. Many authors, including Applicant, have recognized the potential value of combining the capabilities of the two techniques.
Applicant is the author of Chapters 3, 10 and 13 of D. White & D. Sofge, Handbook of Intelligent Control: Neural, Fuzzy and Adaptive Approaches, Van Nostrand, 1992, ("HIC"), which was
published no earlier than Sep. 1, 1992 and which contains disclosure of a number of novel inventions which will be summarized and claimed herein. The entirety of those chapters is incorporated herein by reference.
The invention described and claimed herein comprises an Elastic Fuzzy Logic ("ELF") System in which classical neural network learning techniques are combined with fuzzy logic techniques
in order to accomplish artificial intelligence tasks such as pattern recognition, expert cloning and trajectory control. The ELF system may be implemented in a computer provided with multiplier means and storage means for storing a vector of weights to be used as multiplier factors in an apparatus for fuzzy control. The invention further comprises novel techniques and apparatus for adapting ELF Systems and other nonlinear differentiable systems and a novel gradient-based technique and apparatus for matching both predicted outputs and derivatives to actual outputs and derivatives of a system.
NEURAL NETWORKS
Artificial Neural Networks ("ANNs") are well known, and are described in general in U.S. Pat. No. 4,912,654 issued Mar. 27, 1990 to Wood ("Neural networks learning method") and in U.S.
Pat. No. 5,222,194 issued Jun. 22, 1993 to Nishimura ("Neural network with modification of neuron weights and reaction coefficient"), both of which are incorporated herein by reference.
ANNs typically are used to learn static mappings from an "input vector," X, to a "target vector," Y. The first task is to provide a training set--a database--that consists of sensor
inputs (X) and desired actions (y or u). The training set may, for example, be built by asking a human expert to perform the desired task and recording what the human sees (X) and what the human does (y). Once this training set is available, there are many neural network designs and learning rules (like basic backpropagation) that can learn the mapping from X to y.
Given a training set made up of pairs of X and y, the network can "learn" the mapping by adjusting its weights so as to perform well on the training set. This kind of learning is called "supervised learning" or "supervised control". Advanced practitioners of supervised control no longer think of supervised control as a simple matter of mapping X(t), at time t,
onto y(t). Instead, they use past information as well to predict y(t).
Broadly speaking, neural networks have been used in control applications:
1. As subsystems used for pattern recognition, diagnostics, sensor fusion, dynamic system identification, and the like;
2. As "clones" which learn to imitate human or artificial experts by copying what the expert does;
3. As "tracking" systems, which learn strategies of action which try to make an external environment adhere to a pre-selected reference model.
4. As systems for maximizing or minimizing a performance measure over time.
For true dynamic optimization problems, there are two methods of real use: (1) the backpropagation of utility (which may be combined with random search methods); (2) adaptive critics or approximate dynamic programming. The backpropagation of utility is easier and more exact, but it is less powerful and less able to handle noise.
Basic backpropagation is simply a unique implementation of least squares estimation. In basic backpropagation, one uses a special, efficient technique to calculate the derivatives of square error with respect to all the weights or parameters in an ANN; then, one adjusts the weights in proportion to these derivatives, iteratively, until the derivatives go to zero. The components of X and Y may be 1's and 0's, or they may be continuous variables in some finite range.
There are three versions of backpropagating utility: (1) backpropagating utility by backpropagation through time, which is highly efficient even for large problems but is not a true real-time learning method; (2) the forward perturbation method, which runs in real time but requires too much computing power as the size of the system grows; (3) the truncation method, which fails to account for essential dynamics, and is useful only in those simple tracking applications where the resulting loss in performance is acceptable.
D. White & D. Sofge, Handbook of Intelligent Control: Neural, Fuzzy and Adaptive Approaches, Van Nostrand, 1992, ("HIC") describes these methods in detail and gives pseudocode for "main programs" which can be used to adapt any network or system for which the dual subroutine is known. The pseudocode for the ELF and F.sub.-- ELF subroutines provided below may be incorporated into those main programs (though the F.sub.-- X derivatives need to be added in some cases).
Backpropagation cannot be used to adapt the weights in the more conventional, Boolean logic network. However, since fuzzy logic rules are differentiable, fuzzy logic and backpropagation
are more compatible. Strictly speaking, it is not necessary that a function be everywhere differentiable to use backpropagation; it is enough that it be continuous and be differentiable almost everywhere. Still, one might expect better results from using backpropagation with modified fuzzy logics, which avoid rigid sharp corners like those of the minimization operator.
One widely used neural network (a multi-layer perceptron) includes a plurality of processing elements called neural units arranged in layers. Interconnections are made between units of
successive layers. A network has an input layer, an output layer, and one or more "hidden" layers in between. The hidden layer is necessary to allow solutions of nonlinear problems. Each unit is capable of generating an output signal which is determined by the weighted sum of input signals it receives and a threshold specific to that unit. A unit is provided with inputs (either from outside the network or from other units) and uses these to compute a linear or non-linear output. The unit's output goes either to other units in subsequent layers or to outside the network. The input signals to each unit are weighted either positively or negatively, by factors derived in a learning process.
When the weight and threshold factors have been set to correct levels, a complex stimulus pattern at the input layer successively propagates between hidden layers, to result in an output pattern. The network is "taught" by feeding it a succession of input patterns and corresponding expected output patterns; the network "learns" by measuring the difference (at each output unit) between the expected output pattern and the pattern that it just produced. Having done this, the internal weights and thresholds are modified by a learning algorithm to provide an output pattern which more closely approximates the expected output pattern, while minimizing the error over the spectrum of input patterns. Neural network learning is an iterative process, involving multiple "lessons".
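As a concrete illustration of the unit computation just described, here is a minimal Python sketch; the layer sizes, weights, thresholds, and the logistic squashing function are illustrative assumptions, not values from this disclosure:

```python
import math

def unit_output(inputs, weights, threshold):
    # A unit's output: the weighted sum of its inputs, offset by the
    # unit's threshold, passed through a squashing (sigmoid) nonlinearity.
    s = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-s))

# A tiny network: 2 inputs -> 2 hidden units -> 1 output unit.
hidden_weights = [[0.5, -0.3], [0.8, 0.1]]
hidden_thresholds = [0.0, 0.2]
output_weights = [1.0, -1.0]
output_threshold = 0.1

def forward(x):
    # Propagate the stimulus through the hidden layer to the output unit.
    hidden = [unit_output(x, w, t)
              for w, t in zip(hidden_weights, hidden_thresholds)]
    return unit_output(hidden, output_weights, output_threshold)

print(forward([1.0, 0.0]))   # a value strictly between 0 and 1
```

Training would then adjust the weights and thresholds iteratively to shrink the difference between produced and expected outputs.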
In contrast, some other approaches to artificial intelligence, i.e., expert systems, use a tree of decision rules to produce the desired outputs. These decision rules, and the tree that
the set of rules constitutes, must be devised for the particular application. Expert systems are programmed, and generally cannot be trained easily. Because it is easier to construct
examples than to devise rules, a neural network is simpler and faster to apply to new tasks than an expert system.
FUZZY CONTROL
Fuzzy logic or fuzzy control is also known and is described in general in U.S. Pat. No. 5,189,728 issued Feb. 23, 1993 to Yamakawa ("Rule generating and verifying apparatus for fuzzy
control"), which is incorporated herein by reference.
In conventional fuzzy control, an expert provides a set of rules--expressed in words--and some information about what the words in the rules mean. Fuzzy control then is used to
translate information from the words of an expert into a simple network with two hidden layers, as described in detail in Yasuhiko Dote, "Fuzzy and Neural Network Controllers", in Proceedings of the Second Workshop on Neural Networks, Society for Computer Simulation, 1991. Briefly, the expert knows about an input vector or sensor vector, X. He knows about a control vector u. He uses words ("semantic variables") from the set of words A.sub.i through A.sub.m when describing X. He uses words from the set Y.sub.l through Y.sub.n when describing u. He then provides a list of rules which dictate what actions to take, depending on X. A generic rule number i would take the form:
To make these rules meaningful, he specifies membership functions .mu.(x) and .mu.(u) which represent the degree to which the vectors X and u have the properties indicated by the words
A.sub.i and Y.sub.j. Typically, a given word A.sub.i appears in several different rules. This information from the expert is translated into a two-hidden-layer network as follows.
The set of input words across the entire system are put into an ordered list. The first word may be called A.sub.1, the second A.sub.2, and so on, up to the last word, A.sub.n. The
rules also form a list, from rule number 1 to rule number R. For each rule, the rule number j, one must look up each input word on the overall list of words A.sub.1 ; thus if "B" is the second word in rule number j, then word B should appear as A.sub.k on the overall list, for some value of k. One may define "i.sub.j,2" as that value of k. More generally, one may define i.sub.j,n as that value of k such that A.sub.k matches the nth input word in the rule number j. Using this notation, rule number j may be expressed as:
where nj is the number of input words in the rule number j, and where u'(j) refers to u'(D) for the verb D of rule number j.
The first hidden layer is the membership layer:
The next hidden layer is the layer of rule-activation, which calculates the degree to which rule number j applies to situation X:
The output layer is the simple "defuzzification" rule used in most practical applications, and described in Yasuhiko Dote, supra: ##EQU1##
None of these equations contains any adjustable weights or parameters; therefore, there is no way to use the methods of neurocontrol on such a system directly.
Equations 3 through 6 can be expressed in pseudocode:
______________________________________
SUBROUTINE FUZZ(u,X);
REAL u(n), X(m), x(na), R(r), RSIGMA, uprime(n,r),
     running_product, running_sum;
REAL FUNCTION MU(i,X);
INTEGER j,k,l,nj(r),i(r,na)
/* First implement equation 3. Use k instead of i for the computer. */
FOR k=1 TO na;
  x(k) = MU(k,X);
end;
/* Next implement equation 4. */
FOR j=1 TO r;
  running_product=1;
  FOR k=1 TO nj(j);
    running_product=running_product*x(i(j,k));
  end;
  R(j)=running_product;
end;
/* Next implement equation 6 */
running_sum=0;
FOR j=1 TO r;
  running_sum=running_sum + R(j);
end;
RSIGMA=1/running_sum;
/* Next implement equation 5 */
FOR k=1 TO n;
  running_sum=0;
  FOR j=1 TO r;
    running_sum=running_sum+R(j)*uprime(k,j);
  end;
  u(k)=running_sum*RSIGMA;
end;
end;
______________________________________
The subroutine above inputs the sensor array X and outputs the control array u. The arrays uprime and i and the function MU represent u'(j), i.sub.j,k and the set of membership
functions, respectively; they need to be generated in additional, supplementary computer code.
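For illustration, the FUZZ subroutine above can be rendered in Python; the membership functions, rule table, and u' vectors below are hypothetical placeholders, not values from the disclosure:

```python
def fuzz(X, mu, rules, uprime):
    """Conventional fuzzy inference, following equations 3 through 6.

    mu     -- list of membership functions, mu[k](X) in [0, 1]   (eq. 3)
    rules  -- rules[j] is the list of word indices k used by rule j
    uprime -- uprime[j] is the recommended control vector of rule j
    """
    x = [m(X) for m in mu]                    # eq. 3: membership layer
    R = []                                    # eq. 4: rule activations
    for words in rules:
        r = 1.0
        for k in words:
            r *= x[k]
        R.append(r)
    total = sum(R)                            # eq. 6: normalizer
    n = len(uprime[0])
    # eq. 5: defuzzified output, a weighted average of rule recommendations
    return [sum(R[j] * uprime[j][k] for j in range(len(R))) / total
            for k in range(n)]

# Example with two rules over one input variable:
mu = [lambda X: max(0.0, 1.0 - X[0]),   # "low"
      lambda X: min(1.0, X[0])]         # "high"
rules = [[0], [1]]
uprime = [[0.0], [1.0]]
print(fuzz([0.25], mu, rules, uprime))  # -> [0.25]
```

As the text notes, nothing in this computation contains an adjustable weight, which is why neurocontrol methods cannot adapt it directly.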
In addition to adapting weights, the neural network literature also includes methods, described in detail in HIC, for adding and deleting connections in a network. Applied to fuzzy
systems, these methods would translate into methods for changing rules by adding or deleting words, or even adding new rules. However, those methods generally assume the presence of
adaptable weights.
Nevertheless, equations 3 through 6 can be differentiated, in most cases; therefore, it is still possible to backpropagate through the network, using the methods given in HIC. This
makes it possible to use conventional fuzzy systems as part of a neurocontrol scheme; however, neurocontrol cannot be used to adapt the fuzzy part itself.
While useful, this technique has limitations. It does not work well for tasks which require that an expert develop a sense of dynamics over time, based on an understanding of phenomena which are not directly observed. A design which is based on static mapping from X(t) to u(t) cannot adequately capture the behavior of the human expert in that kind of application.
Furthermore, the most common version of adaptable fuzzy logic is based on putting parameters into the membership functions rather than the rules. This has two disadvantages.
First, changing the membership function changes the definition of the word A. Thus the system is no longer defining words in the same way as the expert. This could reduce the ability to explain to the expert what the adapted version of the controller is doing, or even what was changed in adaptation.
Second, changing the membership functions does not allow changing the rules themselves; thus the scope for adaptation is very limited.
PRIOR ATTEMPTS TO COMBINE NEURAL NETWORKS WITH FUZZY LOGIC
There are many ways to combine neural network techniques and fuzzy logic for control applications, described in detail in Paul Werbos, "Neurocontrol and Fuzzy Logic; Connections and
Designs," International Journal on Approximate Reasoning, Vol. 6, No. 2, February 1992, p. 185. For example, one can use fuzzy logic to provide an interface between the statements of human experts and a controller; neural network techniques can adapt that same controller to better reflect what the experts actually do or to improve performance beyond that of the experts.
In the current literature, many people are using fuzzy logic as a kind of organizing framework, to help them subdivide a mapping from X to Y into simpler partial mappings. Each one of
the simple mappings is associated with a fuzzy "rule" or "membership function." ANNs or neural network learning rules are used to actually learn all of these mappings. There are a large number of papers on this approach, reviewed by Takagi (Takagi, H., Fusion technology of fuzzy theory and neural networks, Proc. Fuzzy Logic and Neural Networks, Iizuka, Japan, 1990).
However, since the ANNs only minimize error in learning the individual rules, there is no guarantee that they will minimize error in making the overall inference from X to Y. This approach also requires the availability of data in the training set for all of the intermediate variables (little R) used in the partial mappings.
A paper submitted to The Journal of Intelligent and Fuzzy Systems by Applicant (Elastic Fuzzy Logic: A Better Fit With Neurocontrol), and awaiting publication shows how a modified form
of fuzzy logic--elastic fuzzy logic--should make this hybrid approach much more powerful, allowing the full use of the many methods now available in neurocontrol. A copy of the paper is
incorporated herein by reference and is attached as FIG. 4.
The basic idea is to use fuzzy logic as a kind of translation technology, to go back and forth between the words of a human expert and the equations of a controller, classifier, or
other useful system. One can then use neural network methods to adapt that system, so as to improve performance.
Other researchers have proposed something like ELF, but without the .gamma..sub.ij exponents. These exponents play a crucial role in adapting the content of each rule; therefore, they
are crucial in providing more complete adaptability.
An advantage of ELF is the ability to explain the adapted controller back to the expert. The .gamma..sub.j,0 parameters can be reported back as the "strength" or "degree of validity" of
each rule. The parameters .gamma..sub.j,k can be described as the "importance" of each condition (input word) to the applicability of the rule. In fact, if the parameters .gamma..sub.j,k are thought of as the "elasticities" used by economists, the whole apparatus used by economists to explain the idea of "elasticity" can be used here as well.
Another advantage of ELF is the possibility of adaptive adding and pruning of rules, and of words within rules. When .gamma. parameters are near zero, then the corresponding word or rule can be removed. This is really just a special case of the general procedure of pruning connections and neurons in neural networks--a well-established technique. Likewise, new connections or rules could be tested out safely, by inserting them with .gamma.'s initialized to zero, and made effective only as adaptation makes them different from zero. In summary, neural network techniques can be used with ELF nets to adapt the very structure of the controller.
Other authors have suggested putting weights into the membership functions, but this does not provide as much flexibility as one needs for true adaptation, in most applications. In most
applications, one needs to find a way to modify the rules themselves. (Modifying the membership functions is sometimes desirable, but it is not the same as modifying the rules,
because--for example--a given word usually appears in several rules; each rule needs to be modifiable independently.)
SUMMARY OF THE INVENTION
An object of the present invention is to provide a new and useful apparatus which can provide more powerful methods for artificial intelligence applications.
A further object of the invention is to provide a tool for artificial intelligence applications which allows weighting the importance of various factors without weighting the membership functions.
A further object of the invention is to provide a means which is a framework for communication between an expert and a computer model which retains a format and vocabulary readily
understandable by a human expert.
A further object of the invention is to provide a means for providing the flexibility to introduce factors at the outset of analysis, without knowing whether they will turn out to be
relevant or not, in a manner which permits deleting them without undue complication should they turn out to be non-relevant.
A further object of the invention is to provide an intuitive means for communicating to a human expert the importance which a computer model attaches to a particular rule.
These and other objects may be accomplished by means of a central processing unit incorporating dual subroutines. These and other objects may also be accomplished by introducing a
weighting means of a multiplicative form, which may be conceptualized mathematically by replacing equation 2 above by:
and defining the weights in the network as the combined sets of parameters .gamma. and vectors u'. This has the advantage of allowing the translation of the words of an expert into a network as before, simply by initializing all the .gamma. parameters to one. A feature of the system is the resultant natural way to report the results. The modified u' vectors can be reported out directly and reported in terms of their fit to the words Y.sub.i. The .gamma. coefficients can be described as "elasticities," as measures of the degree of importance of the semantic variable to the applicability of the rule. Elasticity coefficients have been widely used in economics, and can be understood very easily intuitively, by people with limited knowledge of mathematics. Thus, while elastic fuzzy logic makes it easy, as before, to translate back and forth between a human expert and a network, unlike the conventional logic, it also makes it possible to carry out truly major adaptations of the network using neural network methods. This kind of adaptation makes it easy as well to modify rules as part of the adaptation; for example, words with an elasticity near zero can be deleted from a rule, and new words can be added to a rule in a safe way by initializing their elasticity to zero.
The various features of novelty which characterize the invention are pointed out with particularity in the claims annexed to and forming a part of this disclosure. For a better
understanding of the invention, its advantages and objects, reference is made to the accompanying drawings and descriptive matter in which a preferred embodiment of the invention is illustrated.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and still other objects of this invention will become apparent, along with various advantages and features of novelty residing in the present embodiments, from study of
the following drawings, in which:
FIG. 1 is an overview schematic of an Apparatus for Fuzzy Control using ELF.
FIG. 2 is a flow chart for operating a Fuzzy Controller according to the ELF process.
FIG. 3 is an overview of Operating Characteristics.
FIG. 4 is a flow chart illustrating a Stochastic/Encoder/Decoder/Predictor.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Elastic Fuzzy Logic (to be abbreviated ELF) provides a very broad range of new capabilities for fields such as control engineering, automatic pattern recognition, financial trading
systems, and so on. Generically, the types of application now envisioned involve "control" (automatic systems which output desired actions, such as motor controls, stock trades, or settings for simpler controllers like thermostats), "mapping" (such as systems to input a picture and output a desired classification, or systems to input chemical sensor data and output a prediction of actual chemical concentrations), "system identification" (such as systems to input past transactions data and predict future stock prices or systems to simulate future prices according to an implicit probability distribution), data compression, and applications where knowledge-based systems or expert systems are now used.
ELF may be used in all these classical applications of fuzzy logic, because it too provides a way of translating from rules expressed in words into mathematical equations; however it
translates rules into a different class of mathematical equations, which permit a later adaptation of the equations to improve performance in real applications. This adaptation can be done using extensions of adaptation techniques developed initially for artificial neural networks (ANNs). In a 1990 NASA conference paper, reprinted in IJAR 1992, Applicant proposed a three-step process--translating rules into equations, adapting the equations, and then translating the results back into something the human expert can understand; however, the mathematical equations used in conventional fuzzy logic do not provide great scope for adaptation. Many, many researchers have tried to follow up on the 1990 suggestion, but the schemes they came up with all included very little scope for adaptation, or use of ANN components which turned the system into a "black box" (with the inherent lack of communication back to the expert), or both. The class of functions used in ELF overcomes these limitations.
More precisely, ELF refers to a particular technique for translating back and forth between rules expressed in words (and in simple numbers understandable to the expert) and the corresponding mathematical equations to be implemented in computer hardware and software. It also subsumes the class of mathematical equations to be implemented in hardware and software,
for such applications, and the techniques used to adapt these equations automatically.
In addition to the fuzzy logic applications, the techniques used in ELF may be used as a new kind of artificial neural network (ANN) as shown in FIG. 3, which illustrates the process by
which a non-linear controller using ELF and Dual ELF subroutine (32) interacts with an expert through words (31) and with an environment to be controlled (33) through the intermediary of a neural net using ANN adaptive techniques (34). In other words, one may use the capabilities to be illustrated below, but without the human expert.
Referring to the drawings, the invention is an Elastic Fuzzy Logic ("ELF") System in which classical neural network learning techniques are combined with fuzzy logic techniques in order
to accomplish artificial intelligence tasks such as pattern recognition, expert cloning and trajectory control, shown in overview in FIG. 1.
In elastic fuzzy logic, the words coming from the expert can be translated initially into equations which are absolutely equivalent to those above. However, additional parameters are
inserted into the system for use in later adaptation. Equation 4 is replaced by the equation:
where the gamma parameters are all set to one initially.
A new subroutine ELF(u,X,gamma,uprime) is similar to the subroutine FUZZ above, except that space is allocated for the array gamma by:
and the block which implemented equation 4 is replaced by:
______________________________________
/* Implement equation 8 */
FOR j=1 TO r;
  running_product = gamma(j,0);
  FOR k=1 TO nj(j);
    running_product=running_product*(x(i(j,k))**gamma(j,k));
  end;
  R(j)=running_product;
end;
______________________________________
Referring to FIG. 1, in general, the invention may be implemented in an apparatus for fuzzy control, comprising:
1) a membership memory for (1) storing a plurality of membership functions for fuzzy control concerning one or more input variables and output variables;
2) a multiplier memory device (2) for storing a plurality of multiplier factors associated with each said membership function;
3) an input device (3) for entering data comprising one or more input values and associated output values;
4) a processing unit (4) for receiving said input values from said input device and for retrieving at least one of said plurality of functions from said membership memory and for
retrieving at least one of said plurality of multiplier factors from said multiplier memory; and
5) an output device (5) for producing an output comprising said membership functions, said input data and said multiplier factors. The output device may feed data to a human expert or
to another computer program, such as another fuzzy logic device or an artificial neural network. The processing unit may also perform a fuzzy inference by applying said membership
functions and said multiplier factors to said input and outputting an output value as an end result.
6) The processing unit preferably will perform a calculation of the form
Referring to FIG. 2, in general, a fuzzy controller having a membership memory, a multiplier factor memory and a processing unit, may be operated according to the following steps:
1) store in said membership memory a plurality of predetermined membership functions for fuzzy control concerning an input variable and a plurality of predetermined membership functions
for fuzzy control concerning an output variable (21);
2) store in said multiplier factor memory a plurality of multiplier factors (22);
3) input to said processing unit with an input device data comprising input values and an output value to be later obtained by a fuzzy inference when an input value is given (23);
4) select a membership function from membership function memory (24);
5) select the multiplier factors associated with said membership function (which may initially be set to zero or to some other value) (25);
6) output the selected membership functions and multiplier factors concerning the input and output variables to an output device. The device may be another fuzzy controller, a neural
network, or a human-readable printout or screen display, for example (26).
7) Preferably, the processing unit will perform a calculation of the form
The above disclosure describes how to adapt ELF networks so as to make the actual outputs, Y hat, match desired outputs, Y, over a training set. Similar techniques, called supervised
learning, are in the public domain for many classes of artificial neural network.
Not in the literature--for ELF or any other differentiable system--is a gradient-based technique designed to make the actual outputs Y hat match both the desired outputs Y and the
derivatives of the outputs Y. A technique which can match both is useful in applications where there are target derivatives as well as target quantities Y, for example in an aerospace application, where one may wish to adapt a simple network to approximate a very complex fluid-dynamics program. Using the techniques given in HIC, one may write and debug a dual
subroutine for the fluid dynamics program, which then makes its derivatives available at a low cost (derivatives of a few key outputs) across all the inputs to the code, at a relatively
low computational cost. One may then adapt an ELF network or any other twice-differentiable system to approximate BOTH the raw, basic function and the derivatives of interest.
Mathematically, the approach is as follows. Suppose that we have an existing computer code C which we are trying to emulate, and a relatively small vector of key results V (which might
represent factors like total turbulence or heating or engine speed output from the code). We may represent this as:
where X represents inputs to the code, Y represents outputs, V the figures of merit, and where there is some weight A.sub.i for the importance of V.sub.i and some weight B.sub.k for the
importance of each input X.sub.k. (An obvious choice is the variance of X.sub.k, or a value-based weight of some kind). We may define the error function: ##EQU2## (or we may use some other power besides the second power). As an initial stage, we may calculate the derivatives of V with respect to X by using the dual function for the computer code C; those are then treated as constants in the adaptation stage. V hat represents the results from applying the known function V to the outputs of the ELF network (or other nonlinear system). Using the techniques of chapter 10 of HIC, we may simply write out the (dual) equations required to compute this error function. Using those techniques on the resulting forwards system, we then derive the equations of a doubly-dual system to give us the required gradient of error with respect to the weights in the ELF network (or alternate). We may then use these derivatives in
adapting those weights.
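Read term by term, this error function combines a weighted squared mismatch on the key results V with a weighted squared mismatch on the derivatives of V with respect to the inputs X. A hedged Python sketch follows; the exact weighting of the derivative terms is an assumption, since the text specifies only the weights A.sub.i and B.sub.k and a squared error:

```python
def combined_error(V, V_hat, dV_dX, dV_hat_dX, A, B):
    """E = sum_i A[i]*(V[i] - V_hat[i])**2
         + sum_i sum_k A[i]*B[k]*(dV_dX[i][k] - dV_hat_dX[i][k])**2

    The product weighting A[i]*B[k] on the derivative terms is an
    illustrative assumption, not a formula taken from the disclosure.
    """
    # Squared mismatch on the key results themselves.
    E = sum(a * (v - vh) ** 2 for a, v, vh in zip(A, V, V_hat))
    # Squared mismatch on the derivatives with respect to each input.
    for i, a in enumerate(A):
        for k, b in enumerate(B):
            E += a * b * (dV_dX[i][k] - dV_hat_dX[i][k]) ** 2
    return E

E = combined_error(V=[1.0], V_hat=[0.9],
                   dV_dX=[[2.0, 0.0]], dV_hat_dX=[[1.5, 0.0]],
                   A=[1.0], B=[1.0, 1.0])
print(E)   # approximately 0.26 = 0.01 + 0.25
```

In the patent's scheme, the target derivatives dV_dX would come from the dual subroutine of the code C, computed once and then held constant during adaptation.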
The following examples will further illustrate the invention.
EXAMPLE I
Adapting ELF by Supervised Control
In supervised control, the user first creates a database of "correct" control actions. For each example, example number t, the user provides a vector of sensor inputs X(t) and a vector
of desired or correct control actions, u'(t). The weights in the system are usually adapted so as to minimize: ##EQU3## In ELF, the weights may be defined as the combination of the gamma
parameters and the u' vectors.
To minimize E.sub.tot as a function of the weights, the conventional neural-network technique of backpropagation, which is described in detail in P. Werbos, The Roots of
Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting, Wiley, 1993, may be used. This can be described as an iterative approach. On the first iteration, initialize the gamma parameters to one, and initialize uprime to the values given by the expert. On each subsequent iteration, take the following steps:
1. Initialize the arrays of derivatives
F.sub.-- gamma.sub.-- total(j,k) and
F.sub.-- uprime.sub.-- total(j,k) to zero
2. For each example t do:
2a. CALL ELF(u, X(t), gamma, uprime)
2b. Calculate the vector of derivatives of E(t) with respect to the components of u(t): F.sub.-- u(k)=2*(u(k)-ustar(k,t))
2c. Using backpropagation--the chain rule for ordered derivatives--work the derivatives back to calculate F.sub.-- gamma(t) and F.sub.-- uprime(t), the derivatives of E(t) with regard
to the gamma and uprime parameters.
2d. Update the array F.sub.-- gamma.sub.-- total to F.sub.-- gamma.sub.-- total plus F.sub.-- gamma(t), and likewise for F.sub.-- uprime.sub.-- total.
3. Update the arrays of parameters:
gamma(j,k)=gamma(j,k)-LR1*F.sub.-- gamma.sub.-- total(j,k)
uprime(k,j)=uprime(k,j)-LR2*F.sub.-- uprime.sub.-- total(k,j)
where LR1 and LR2 are positive scalar "learning rates" chosen for convenience.
This procedure could be implemented through the following pseudocode:
______________________________________
INTEGER iter, t, T, k, j, nj(r)
REAL gamma(r,0:na), uprime(n,r),
     F_gamma_total(r,0:na), F_uprime_total(n,r),
     F_gamma(r,0:na), F_uprime(n,r),
     X(m,T), ustar(n,T), u(n), F_u(n), lr1, lr2
DO iter=1 TO maximum_iterations;
   /* First implement step 1 */
   FOR j=1 TO r;
      FOR k=0 TO nj(j); F_gamma_total(j,k)=0; end;
      FOR k=1 TO n; F_uprime_total(k,j)=0; end;
   end;
   /* Next implement step 2, starting with 2a */
   FOR t=1 TO T;
      CALL ELF(u, X(*,t), gamma, uprime);
      /* Next implement step 2b */
      FOR k=1 TO n; F_u(k)=2*(u(k) - ustar(k,t)); end;
      /* Express step 2c as a subroutine */
      CALL F_ELF(F_gamma, F_uprime, F_u);
      /* Implement step 2d */
      FOR j=1 TO r;
         FOR k=0 TO nj(j);
            F_gamma_total(j,k)=F_gamma_total(j,k)+F_gamma(j,k);
         end;
         FOR k=1 TO n;
            F_uprime_total(k,j)=F_uprime_total(k,j)+F_uprime(k,j);
         end;
      end;
   end;
   /* Finally, step 3 */
   FOR j=1 TO r;
      FOR k=0 TO nj(j); gamma(j,k)=gamma(j,k)-lr1*F_gamma_total(j,k); end;
      FOR k=1 TO n; uprime(k,j)=uprime(k,j)-lr2*F_uprime_total(k,j); end;
   end;
end;
______________________________________
The key challenge remaining is to program the dual subroutine, F.sub.-- ELF, which inputs the derivatives in the array F.sub.-- u and outputs the derivatives in the arrays F.sub.-- gamma and F.sub.-- uprime.
In order to calculate the derivatives efficiently, starting from knowledge of F.sub.-- u, one can use the chain rule for ordered derivatives, described in detail in HIC, to derive the
following equations:
F.sub.-- uprime(k,j)=F.sub.-- u(k)*R(j)*RSIGMA        (10)
F.sub.-- R(j)=RSIGMA*((F.sub.-- u.multidot.uprime(j))-(F.sub.-- u.multidot.u))        (11)
F.sub.-- gamma(j,k)=F.sub.-- R(j)*R(j)*log(x(i(j,k)))        (12)
F.sub.-- gamma(j,0)=F.sub.-- R(j)*R(j)/gamma(j,0)        (13)
where RSIGMA is the reciprocal of the sum of the R(j) over all rules, where the center dot represents a vector dot product, and where F.sub.-- u and uprime(j) are vectors. These equations can be implemented through the following subroutine:
______________________________________
SUBROUTINE F_ELF(F_gamma, F_uprime, F_u);
REAL u(n), X(m), gamma(r,0:na), uprime(n,r), base,
     F_gamma(r,0:na), F_uprime(n,r), running_sum,
     F_u(n), R(r), RSIGMA, F_R(r);
INTEGER nj(r), k, j, i(r,na);
/* R and RSIGMA are assumed available from the preceding call to ELF. */
/* First calculate F_u dot u, the scalar "base". */
running_sum=0;
FOR k=1 TO n; running_sum=running_sum + F_u(k)*u(k); end;
base=running_sum;
/* Next, implement equations 10 through 13 for each rule j */
FOR j=1 TO r;
   /* Equation 10 */
   FOR k=1 TO n; F_uprime(k,j)=F_u(k)*R(j)*RSIGMA; end;
   /* Equation 11 */
   running_sum=0;
   FOR k=1 TO n; running_sum=running_sum+F_u(k)*uprime(k,j); end;
   F_R(j)=RSIGMA*(running_sum - base);
   /* Equation 12 */
   FOR k=1 TO nj(j); F_gamma(j,k)=F_R(j)*R(j)*log(x(i(j,k))); end;
   /* Equation 13 */
   F_gamma(j,0)=F_R(j)*R(j)/gamma(j,0);
end;
______________________________________
This dual subroutine could be expanded further, so as to output F.sub.-- X, the derivatives of E with respect to the inputs X(t); however, that would require knowledge of the membership
functions (or another dual subroutine, F.sub.-- MU).
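For readers who prefer a modern vectorized statement, the forward ELF computation and this dual pass can be sketched in NumPy. This is an illustrative reconstruction, not code from the patent: it assumes the membership values .mu. are supplied directly as an array, uses R(j) = gamma0(j) times the product over k of mu(j,k)^gamma(j,k), and uses the standard defuzzification u = (sum of R(j)*uprime(:,j))/(sum of R(j)).

```python
import numpy as np

def elf_forward(mu, gamma0, gamma, uprime):
    """Elastic fuzzy rules: R_j = gamma0_j * prod_k mu_jk ** gamma_jk,
    then the standard defuzzification u = sum_j R_j u'_j / sum_j R_j."""
    R = gamma0 * np.prod(mu ** gamma, axis=1)   # rule activations, shape (r,)
    u = uprime @ R / R.sum()                    # blended action, shape (n,)
    return u, R

def elf_dual(F_u, mu, gamma0, uprime, u, R):
    """Gradient (dual) pass mirroring equations 10 through 13 of F_ELF above."""
    Rsum = R.sum()
    F_uprime = np.outer(F_u, R) / Rsum          # eq. 10
    base = F_u @ u                              # "base" = F_u dot u
    F_R = (F_u @ uprime - base) / Rsum          # eq. 11
    F_gamma = (F_R * R)[:, None] * np.log(mu)   # eq. 12
    F_gamma0 = F_R * R / gamma0                 # eq. 13
    return F_gamma0, F_gamma, F_uprime
```

With F_u set to 2*(u - u*), the returned arrays are the gradients of the squared tracking error with respect to the gamma and uprime parameters.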
EXAMPLE II
Developing A New Controller
One way of using ELF might be in developing a controller for a new class of airplanes, developing as much of the control as possible before the airplane takes off, and building in
learning capabilities so as to minimize the cost of redesign during the early test flights of the vehicles. One might develop a controller as follows, using ELF capabilities:
1. In the design stage, build a simplified (i.e. fast) simulator of the airplane. The simulator would specify the available controls (e.g. tension pulling the right front aileron, etc.)
and sensor data (e.g. altitude, air speed, etc.).
2. Run the simulator at reduced speed, so that a human expert can learn to control the airplane (i.e. handle low-level steering controls so that the plane stays on some desired course
set up as an example). If the human asks for additional sensor input or controls, iterate until a compromise is reached between the expert and the designer.
3. After the human does an adequate job with the slow simulator, or makes a thorough effort to do so, ask the human to provide a list of if-then rules describing how to control the
airplane. Try this for several human experts.
4. Using an ELF computer program, input the rules. The ELF program would then output either: (1) a file of rules to be used by an ELF interpreter; (2) a compiled computer program to
implement the corresponding mathematical equations, when the rules are translated into equations using the ELF conventions; the program would be set to run on an ordinary workstation;
(3) a compiled program to run on a specially equipped workstation, using ELF chips to accelerate the calculations. The ELF program should preferably also implement a "dual subroutine" or
an equivalent hardware module for these rules. The implemented version of the rules will be referred to as the "ELF Action Network."
5. Direct inverse control (DIC) can then be used to adapt the ELF Action Network. More precisely, EACH ELF Action Network coming from EACH expert can be used as the starting point in a
DIC adaptation process. Then, the performance of each can be compared. The DIC process simply tries to keep the airplane on the desired course; performance is measured by comparing
actual to desired locations along the flight path.
6. If none of the humans or ELF nets did an adequate job, notify the design team. Report the rules developed by the human experts, AND the adapted versions. Report the resulting
performance. Also use backpropagation to report out the sensitivity of tracking error to the rule parameters at each time; for example, one might graph the implied derivatives versus the
time of impact of the parameter (using techniques parallel to those used in P. Werbos, "A Generalization of Backpropagation with Application to a Recurrent Gas Market Model," Neural
Networks, October 1990). This information will help the redesign of the plane. If the humans performed acceptably, but the ELF nets did not, then the ELF nets can be trained using
Supervised Control (SC) training methods to imitate the humans. If this fails, then the results should be explained back to the human experts, and new attempts made until the system works.
7. In any case, move on to a higher level adaptation, to improve performance of the controllers developed in stage 5. Define a performance measure which combines both tracking error and
fuel consumption and stresses which might tend to age the vehicle. Using techniques described by Applicant in Chapter 10 of HIC, develop a "dual subroutine" for the airplane simulator.
Then use that simulator, its dual, the performance measure, and its dual to adapt the various ELF nets, using the technique of "backpropagation of utility." The resulting ELF Action nets
can be reported back to the human expert and design team, to see if the humans can come up with new ideas and starting points.
8. After the design passes these basic nominal performance standards, a stochastic design phase would begin. First, there would be an assessment of uncertainty and noise by the design
team. This would involve identifying uncertain parameters such as possible nonlinear couplings and novel wind effects above Mach 10. It would also involve identifying true noise
time-series such as the intensity and temperature of local wind effects buffeting the airplane. A stochastic simulator could be built, in the form of a subroutine which inputs controls
and state information at time t, along with possible values for the noise parameters and for noise time-series at time t. A matching random-number generator would actually produce
possible values for these random factors.
9. Construct a dual subroutine for the revised simulator.
10. Test the ability of human experts to fly the noisy simulator. Again ask them to provide rules. These rules might refer to the uncertain parameters, either directly or in terms of
words describing how the airplane "feels"; if so, additional rules for how to guess these parameters will also be needed.
11. Ask the humans to provide "Value Rules". These would be something like: "If (angle is less than desired angle) then one needs more (angle)." In other words, the "if" clause of a
value rule is just like the usual "if" clause, but the value clause specifies a state variable which needs to be greater or lesser, to improve control of the vehicle, in the view of the expert.
12. Use the ELF program to translate the value rules from step 11 into a derivative-style "Critic Network." In other words, the program creates a new network which inputs state
variables and sensor data, and outputs an estimate of the derivative of future performance with respect to the important state variables.
13. Use the DHP procedure, described in detail in chapters 3 and 13 of HIC to adapt both the Critic networks and ELF networks. Also, for comparison, try other forms of ANN.
14. After all these design tests are complete, if performance is adequate, embed the resulting networks into chips to put into the actual airplane.
15. Steps 1-13 could, of course, be used for several alternative designs, so as to locate the design which is likely to have optimal performance.
Step 4 contains the key innovation.
Techniques already exist to query an expert about the meaning of phrases like "angle is less than desired angle", given a measurement of "angle" and "desired angle". Those techniques
can be used to get the required "membership functions". However, if the resulting membership functions are not adequate, an ELF computer expert could provide examples of possible
situations to the expert. In this example, the expert might provide examples of possible value pairs of "angle" and "desired angle". (In fact, the ELF computer program might do this,
after some initial testing to develop reasonable examples). The expert would be asked to indicate the degree to which the clause applies to each example. The ELF program might then
adapt a conventional ANN, such as a multilayer perceptron, to serve as a revised membership function in such cases. This does not destroy the "white box" character of the rules,
because the conventional ANN is used to develop an understanding of the clause used by the expert.
The key step lies in how the rules are translated into equations. As a basic starting point, an ELF program might translate a rule of the form:
"If A and B and . . . C then do D" (where A . . . D are clauses) into:
R=.gamma..sub.o *(.mu.A).sup..gamma.A *(.mu.B).sup..gamma.B * . . . *(.mu.C).sup..gamma.C
where the .gamma. parameters are all initially set to 1, where .mu.A, etc. refer to the values of the various membership functions for the clauses A, B, . . . , C, where R refers to the
degree to which this particular rule is "invoked", and where the asterisk represents multiplication. In conventional fuzzy logic, a similar translation is made, but there are no gamma
parameters. The .gamma. exponents are "elasticities," understandable parameters which will be used later to explain the results back to the human expert.
In implementation, the .mu. functions are normally allowed to vary between 0 and 1 in fuzzy logic. Thus taking exponents should be no problem in hardware implementation. However, one
can easily modify this to damp out actual values greater than 1 (if adapted membership functions allow such transient effects). In general, one can use any differentiable f and g in: ##
EQU4## where a base value (like zero) for .gamma..sub.o has the effect of setting R to zero, and where a base value for .gamma..sub.i has the effect of removing clause number i from
the rule.
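A minimal sketch of the basic product-of-powers translation (illustrative Python; the helper name is not from the patent) makes the two claims above concrete: a zero .gamma..sub.o forces R to zero, and a zero .gamma..sub.i removes clause i from the rule.

```python
def rule_strength(mu, gamma, gamma0=1.0):
    """Elastic rule strength: R = gamma0 * product of mu_i ** gamma_i."""
    R = gamma0
    for m, g in zip(mu, gamma):
        R *= m ** g  # a zero exponent turns this factor into 1 (clause removed)
    return R
```

For example, rule_strength([0.5, 0.8], [1.0, 0.0]) equals 0.5, exactly as if the second clause had been deleted from the rule.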
Finally, to complete the definition of the mathematical equations, one needs to use a "defuzzification" procedure. There are many standard versions of this, ranging from
center-of-gravity calculations through to ANNs. By way of simple example, a procedure which uses as inputs only the R for each rule and the desired action vector (based on the clause
"D") for each rule might be used; in that way, the desired action vector can be adapted, along with the gamma parameters and parameters (if any) in the membership functions. The standard
defuzzification rule might be used: ##EQU5##
Given these equations, one has a variety of choices for how to implement them in computer software and/or hardware, implementations of which are known to those skilled in the art.
Given the mathematical equations or network in step 4, there are nontrivial issues in how to adapt them (by SC, DIC, BU, DHP, etc.). Since the publication of Applicant's 1990 paper,
hundreds of experts with a great interest in neural networks have applied ANNs to control, without being able to replicate the most important capabilities.
Among the important capabilities are those which become available after programming (directly or indirectly, in hardware or software) a "dual subroutine" for ELF. Chapter 10 of HIC
describes how to construct dual subroutines. The specific equations for the dual subroutine for ELF nets appear for the first time in P. Werbos, Neurocontrol and Elastic Fuzzy Logic,
IEEE Trans. Industrial Electronics, cover date April 1993 (Delayed printing).
Given a dual subroutine, chapters 3 and 13 of HIC, give pseudocode for how to adapt the resulting action net, and any associated critic network. (They also describe how to adapt nets to
perform system identification, which would include rules to estimate hidden uncertain parameters or the equivalent). Pseudocode was also given in Miller, Sutton and Werbos, eds., Neural
Networks for Control, MIT Press, 1990, for the DHP system, but there was a missing term crucial for accurate results; when calculating the target for the critic network, an additional
term is needed to account for the derivative in the action network. That missing term appears in Chapter 13 of HIC, a chapter authored by Applicant.
A correction to the published techniques is also necessary in order to improve the performance of Globalized Dual Heuristic Programming (GDHP), a general technique for use in adapting
ELF systems, or other nonlinear differentiable systems. In the notation of the Handbook of Intelligent Control (HIC), the correct adaptation procedure is as follows:
1. Obtain R(t), u(t) and R(t+1), for example by the methods discussed in HIC.
2. Calculate:
3. Adapt the weights W by exploiting the gradient F.sub.-- W. For example, use the update:
These equations assume a scalar critic, J hat, which may be used to adapt the Action component as with any other scalar critic, as shown in the inventor's chapters in HIC. The constant
A.sub.o and the vector of weights A may be any vector of weights; for example, they may all be chosen as 1, or they may be based on time averages of the vector lambda (giving greater
weight to components which have a bigger effect on J), etc. HIC describes how to program the dual functions shown here. To create the dual subroutine G.sub.-- F.sub.-- J, simply write
out the equations of F.sub.-- J (using the methods of HIC), ADD an equation for a final result equal to: ##EQU6## and then use the procedures of chapter 10 of HIC to create the dual
subroutine for the resulting ordered system.
At a low level, there are many procedures which can be used to adapt a controller when the requisite derivatives are available. Among these is the adaptive learning rate rule:
LR(new)=LR(old)*(a+b*(new grad DOT old grad)/(old grad DOT old grad))
where LR refers to the learning rate used for some block of parameters or weights (such as the gamma parameters or obvious subsets of them), where a and b are arbitrary parameters,
where DOT refers to a vector dot product, and where "grad" refers to the currently available gradient, the set of derivatives of the error (or whatever is being minimized) with respect
to the set of weights under consideration.
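The rule as reconstructed above can be sketched as follows (illustrative Python; the defaults for a and b follow the "arbitrary" 0.2 and 0.9 suggested in Example IV):

```python
import numpy as np

def adapt_lr(lr, grad_new, grad_old, a=0.2, b=0.9):
    """Adaptive learning rate: grow LR when successive gradients agree,
    shrink it toward a*LR when they are orthogonal or opposed."""
    denom = grad_old @ grad_old
    if denom == 0.0:
        return lr  # no information from a zero gradient; leave LR unchanged
    return lr * (a + b * (grad_new @ grad_old) / denom)
```

When grad_new equals grad_old the rate is multiplied by a + b = 1.1; orthogonal gradients shrink it to a times its old value.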
EXAMPLE III
Given the correct statement of GDHP provided in Example II, and the HDP, ADHDP, DHP and ADDHP techniques known to those skilled in the art (as described, for example, in HIC), it is
straightforward to modify GDHP to create an Action-Dependent version, ADGDHP.
EXAMPLE IV
Adapting ELF by Backpropagating Utility
This example will describe the method for adapting a fuzzy controller which inputs X(t) and outputs u(t), starting from a fixed initial state X(0). It is easy to deal with the more
general case, as in Paul Werbos and Andras Pellionisz, "Neurocontrol and Neurobiology: New Developments and Connections", in Proceedings of the IJCNN (Baltimore), IEEE, 1992, but one
fixed starting value will be used for clarity of illustration. The object is to minimize: ##EQU7## for a known utility function U. Again, for clarity, suppose that X(t+1) depends only on
X(t) and u(t), without noise.
To use the backpropagation of utility, it is first necessary to develop an explicit model of the system. For example, using the techniques in P. Werbos, Backpropagation through time:
what it does and how to do it, Proc. of the IEEE, October 1990 issue, or in chapter 10 of HIC, adapt an artificial neural network which inputs X(t) and u(t) and outputs a prediction of X
(t+1). Program that network into a computer subroutine, MODEL(Xold,u,Xnew). For the most common neural network models, HIC describes how to program the dual subroutine for such a model,
F.sub.-- MODEL(F.sub.-- Xold,F.sub.-- u,F.sub.-- Xnew); that subroutine inputs F.sub.-- Xnew and outputs F.sub.-- u and F.sub.-- Xold. Only one dual subroutine is needed for any network,
regardless of whether it is being used to calculate the derivatives of error, the derivatives of utility, or anything else.
To adapt the ELF controller, iterate over the following steps:
1. Initialize F.sub.-- gamma.sub.-- total,F.sub.-- uprime.sub.-- total, and F.sub.-- X(T+1) to zeroes.
2. For each time, t=1 to time T, calculate X(t) and U(X(t)) by calling three subroutines in order:
CALL ELF(u(t-1),X(t-1),gamma,uprime) (to calculate u(t-1))
CALL MODEL(X(t-1),u(t-1),X(t)) (to calculate X(t))
CALL U(X(t))
3. For each time, starting from t=T and working back to t=0, perform the following calculations in order:
4. Adapt gamma and uprime:
The assignment statements in this algorithm all represent the addition or subtraction of arrays, rather than scalars.
The algorithm above should be very straightforward to implement. If desired, one can actually start out by using possible values for X(T-1) as a starting point, instead of X(0); one can
gradually work one's way back in time. Also, one must pay careful attention to the quality of the model (perhaps by testing for performance in simulations where the model generating the
simulations is known). Convergence can be sped up by using adaptive learning rates; for example, as in HIC, one could use the update rule: ##EQU8## for some "arbitrary" a and b (such as
0.2 and 0.9).
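The backward time loop of steps 2-3 can be illustrated on a deliberately tiny closed loop (everything here is a hypothetical stand-in, not from the patent): a scalar model x(t+1) = a*x(t) + b*u(t), a one-weight controller u(t) = w*x(t), and a cost (the negative of utility) equal to the sum of x(t)^2. The gradient with respect to w is accumulated by working backwards in time, and can be checked against a finite difference.

```python
def total_cost(w, x0=1.0, a=0.9, b=0.5, T=10):
    """Roll the closed loop forward; return the cost and the state history."""
    x, cost, xs = x0, 0.0, []
    for _ in range(T):
        xs.append(x)
        cost += x * x
        x = a * x + b * (w * x)  # x(t+1) = a*x(t) + b*u(t), with u(t) = w*x(t)
    return cost, xs

def grad_w(w, x0=1.0, a=0.9, b=0.5, T=10):
    """Backpropagation through time: F_x holds dCost/dx(t+1) at each step."""
    _, xs = total_cost(w, x0, a, b, T)
    F_x, g = 0.0, 0.0
    for t in range(T - 1, -1, -1):
        g += F_x * b * xs[t]                    # direct effect of w at step t
        F_x = F_x * (a + b * w) + 2.0 * xs[t]   # chain back through x(t)
    return g
```

The backward pass visits each time step once, so the full gradient costs about the same as one forward simulation.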
EXAMPLE V
General Method For "Elasticizing" A Fuzzy System
In general, one could "elasticize" a fuzzy system by using the alternative "AND" operator described above. ("OR" operators follow trivially from AND, if one defines "NOT" as one minus the
original truth value.) That, in turn, permits one to use neural network learning methods to adapt any kind of AI system, including systems used for complex reasoning and planning.
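Concretely, the elastic AND can be written as a product of memberships raised to their elasticities, NOT as one minus the truth value, and OR obtained from AND by De Morgan's law (an illustrative sketch; the function names are not from the patent):

```python
def elastic_and(mus, gammas):
    """Elastic fuzzy AND: product of mu_i ** gamma_i."""
    out = 1.0
    for m, g in zip(mus, gammas):
        out *= m ** g
    return out

def fuzzy_not(mu):
    """NOT is one minus the original truth value."""
    return 1.0 - mu

def elastic_or(mus, gammas):
    """De Morgan: OR(a, b, ...) = NOT(AND(NOT a, NOT b, ...))."""
    return fuzzy_not(elastic_and([fuzzy_not(m) for m in mus], gammas))
```

With all elasticities equal to 1, elastic_or([0.5, 0.5], [1.0, 1.0]) gives 0.75, the familiar product-form OR of two half-true clauses.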
For example, one can build fuzzy Action networks, which input a vector of sensor inputs X(t) (or an expanded state vector R(t)) and output a control vector, u(t). Most fuzzy controllers
in the real world are fuzzy Action networks. One could also build fuzzy models of the system to be controlled, models which input R(t-1) and u(t-1) and output a prediction of R(t) and X
(t). One could even build a fuzzy Critic network, which inputs R(t) (and maybe u(t)), and outputs an evaluation of how desirable the state R(t) is, as described in more detail in Paul
Werbos, "Neurocontrol and Fuzzy Logic: Connections and Designs," International Journal on Approximate Reasoning, Vol. 6, No. 2, February 1992, p. 185.
There are many ways to exploit this approach in practical applications. For example, one can begin by asking a human expert how to control a system. Using fuzzy logic, one can translate
his words into a fuzzy Action network. Then one can use "cloning" methods to adapt that Action network, to make it represent what the expert actually does. (Bart Kosko, Neural
Networks and Fuzzy Systems, Prentice-Hall, 1991, offers only one of the many methods which can be used to do this.) In a similar way, one can adapt a model of the system to be
controlled. One can also ask the human to offer evaluation rules. Then one can use adaptive critic methods to adapt the Action network and the Critic network further, to yield a system
which performs better than the human. If these are still fuzzy networks, one can use fuzzy logic to explain to the human what the computer is now doing; the human can change the utility
function or performance measure, or suggest a new starting point, and start over.
To make this kind of hybrid approach possible, one needs two things: (1) one needs an easy way to translate a fuzzy system into a simple network Y=f(X,W), so that one can use the
designs in HIC; (2) the fuzzy system must have sufficient degrees of freedom (weights W) so that adapting W will really provide enough flexibility for significant learning.
The simple kinds of fuzzy logic used in practical applications do not have these degrees of freedom. Also, they do not provide a true reasoning capability.
EXAMPLE VI
ELF Used In Place Of The Usual 0-1 Logic Used In Conventional Knowledge-Based Or Expert Systems
As an example, there has been widespread use in recent years of a system called the Real-Time Control (RCS) system, due to work by Albus and others. In at least some formulations, RCS
consists of a set of systems to make inquiries from experts, which are translated into the following sort of rules. The actual implemented controller consists of several blocks of
"if-then" rules, each operating in parallel, each invoked independently on every time cycle. Each block is a set of simple if-then rules, as described above, except that the user is not
restricted to input words and output words which describe external sensor input and actions. The user may also input from or output to a common shared memory. To use ELF to upgrade this
system, one needs only translate the original IF-THEN rules in each block to the corresponding ELF equations, using the translation procedure above. This results in an adaptable ELF system.
In order to adapt this whole system, one needs to construct a dual module for the entire system. To do this, one first constructs dual programs or modules for each of the IF-THEN
blocks. The dual module for the whole system is simply a module which exercises each of these component dual modules in parallel. The resulting system is technically a time-lagged
recurrent network (TLRN), which can then be adapted by any of the several methods for adapting TLRNs given in the Handbook of Intelligent Control, described relative to any network
(including RCS networks) for which a primary and dual module are available.
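A toy sketch of the RCS-style arrangement (illustrative Python; the rules, block names, and shared-memory keys are invented for the example, and the blocks are stepped sequentially as a stand-in for parallel invocation):

```python
def block_a(x, mem):
    """Rule block A: "if x is high then raise the flag", with a soft membership."""
    mu_high = min(max((x - 0.5) / 0.5, 0.0), 1.0)  # ramps from 0 at x=0.5 to 1 at x=1
    mem['flag'] = mu_high          # write to the common shared memory
    return {}

def block_b(x, mem):
    """Rule block B: "if flag then brake", reading the shared memory."""
    return {'brake': mem.get('flag', 0.0)}

def step(x, mem):
    """One time cycle: every block is invoked on every cycle."""
    out = {}
    for block in (block_a, block_b):
        out.update(block(x, mem))
    return out
```

Because each block only reads sensor input and shared memory, translating its if-then rules into ELF equations leaves this overall structure unchanged.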
EXAMPLE VII
A Stochastic Encoder/Decoder/Predictor
FIG. 5 is a flow chart illustrating a Stochastic Encoder/Decoder/Predictor. Information from time t-1 is input using input means (10) to a Predictor Network (20). The Predictor Network
(20) calculates R(t). Encoder Network (30) receives input x(t) (40) and outputs a vector, R (50). Random numbers are added to R to produce output R' (60) as a function of .differential.
(which may be estimated, for example, by the observed root mean square average of the difference between predicted and observed values). Signal R' and information from time (t-1) (70)
are input to Decoder Network (80) in order to generate a predicted value, X (90). Each of the networks has associated weights.
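The data flow of FIG. 5 can be sketched with linear stand-ins for the three networks (an assumption for illustration; the patent leaves the network forms open, and sigma here plays the role of the noise scale estimated from the RMS prediction error):

```python
import numpy as np

def sedp_forward(x_t, info_prev, W_pred, W_enc, W_dec, sigma, rng):
    """One forward pass of the Stochastic Encoder/Decoder/Predictor sketch."""
    R_pred = W_pred @ info_prev                      # Predictor Network: R(t)
    R = W_enc @ x_t                                  # Encoder Network output R
    R_noisy = R + sigma * rng.normal(size=R.shape)   # R' = R plus scaled noise
    x_hat = W_dec @ np.concatenate([R_noisy, info_prev])  # Decoder prediction
    return x_hat, R, R_pred
```

With sigma = 0 the decoder sees the exact encoder output; training would adapt the three weight matrices against the reconstruction and prediction errors.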
As can be seen from the above description, it is possible to implement the invention in a computer using the subroutines described herein and others that could be adapted by those
skilled in the art.
Thus, there has been described an Elastic Fuzzy Logic ("ELF") system in which neural network learning techniques are combined with fuzzy logic techniques in order to accomplish
artificial intelligence tasks such as pattern recognition, expert cloning and trajectory control, a system that has a number of novel features, and a manner of making and using the
invention. The features involve the use of multiplier memory and multiplier means associated with each rule. The advantage of the invention is the resultant flexibility, power and
intuitive interface between a human expert and a computer system.
While specific embodiments of the invention have been shown and described in detail to illustrate the application of the principles of the invention, it will be understood that the
invention may be embodied otherwise without departing from such principles and that various modifications, alternate constructions, and equivalents will occur to those skilled in the art
given the benefit of this disclosure. Thus, the invention is not limited to the specific embodiments described herein, but is defined by the appended claims.
* * * * * | {"url":"http://www.patentgenius.com/patent/5751915.html","timestamp":"2014-04-16T07:19:55Z","content_type":null,"content_length":"87410","record_id":"<urn:uuid:d3f024c8-d5d0-4dd6-a8a9-4d9fed6fcead>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Weekly Problem 34 - 2008
Copyright © University of Cambridge. All rights reserved.
The dots are one unit apart. What is the area of the region common to both the triangle and the square (in square units)?
If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas.
This problem is taken from the UKMT Mathematical Challenges.
The Aarau Question and the Light Complex
There is an interesting connection between Einstein's earliest thoughts about special relativity and his 1905 paper. Between October 1895 and early fall of 1896 he attended school in Aarau,
Switzerland, preparing for the entrance exam at the ETH in Zurich. In his last autobiographical notes, published in 1956, he recalled:
During that year in Aarau the question came to me: If one runs after a light wave with light velocity, then one would encounter a time-independent wavefield. However, something like that does not
seem to exist! This was the first juvenile thought experiment which has to do with the special theory of relativity.
Ten years later, in Section 8 of his 1905 paper, he essentially presents this same thought experiment in his derivation of the formula for the transformation of the energy of a "light complex". He
considers a plane wave of light, and imagines the part of this wave enclosed within a spherical region of constant radius R whose center is moving at the speed of light in the same direction as the
wave. Letting k[x], k[y], k[z] denote the cosines of the angles between the wave normal and the x, y, and z axes respectively, the equation of the surface of this moving sphere is

(x - k[x]ct)^2 + (y - k[y]ct)^2 + (z - k[z]ct)^2 = R^2        (1)
This is depicted in the figure below, showing the sphereβs position at four different times as it moves along with the plane wave.
Now, just as he did in 1895, Einstein considers the properties of the light complex contained within the co-moving sphere. Obviously no energy crosses the surface, because the surface is moving at
the speed of light in the same direction as the plane wave. Thus the energy contained within this moving spherical surface is constant, equal to the energy density of the wave multiplied by the
volume S = (4/3)pR^3 of the sphere.
In the special case when the wave normal is parallel to the x axis we have k[x] = 1, k[y] = 0, and k[z] = 0, so the equation of the spherical surface reduces to

(x - ct)^2 + y^2 + z^2 = R^2        (2)
In this special case the sphere moves along the x axis as shown below.
Now consider this same moving surface in terms of inertial coordinates X,Y,Z,T moving in the positive x direction with speed v. According to the Lorentz transformation we have

x = (X + vT)/sqrt(1 - v^2/c^2),   y = Y,   z = Z,   t = (T + vX/c^2)/sqrt(1 - v^2/c^2)

Making these substitutions into (2), we get the equation for the moving surface in terms of the transformed coordinates

[(1 - v/c)/(1 + v/c)](X - cT)^2 + Y^2 + Z^2 = R^2
This represents an ellipsoid moving in the positive X direction at the speed of light. It is elongated in the X direction by the reciprocal of the square root of the leading factor, as can be seen
from the fact that at Y = Z = T = 0 the value of X satisfies

[(1 - v/c)/(1 + v/c)] X^2 = R^2

and therefore

X = R sqrt[(1 + v/c)/(1 - v/c)]
The volume of the enclosed region is increased by the same factor as the X extent, so the ratio of the volume of this ellipsoid to the volume of the original sphere is

S'/S = sqrt[(1 + v/c)/(1 - v/c)] = sqrt(1 - v^2/c^2)/(1 - v/c)        (3)
Now, Einstein had already shown in Section 7 of his paper that the ratio of the energy densities relative to the two frames of reference, for the special case when the plane wave is moving parallel
to the x axis, is equal to the ratio of the squared amplitudes of the waves, which in this special case is

(A'/A)^2 = (1 - v/c)^2/(1 - v^2/c^2) = (1 - v/c)/(1 + v/c)        (4)

Therefore, the ratio of the total energy of the light complex contained within the surface relative to the two frames of reference in this special case is

E'/E = [(A'/A)^2][S'/S] = (1 - v/c)/sqrt(1 - v^2/c^2)
More generally, if we return to equation (1) for a plane wave propagating in an arbitrary direction characterized by the direction cosines k[x], k[y], and k[z], the equation of the spherical surface
in terms of the X,Y,Z,T coordinates moving with speed v in the x direction is

[(1 - k[x]v/c)X - (k[x]c - v)T]^2/(1 - v^2/c^2) + [Y - k[y](vX/c + cT)/sqrt(1 - v^2/c^2)]^2 + [Z - k[z](vX/c + cT)/sqrt(1 - v^2/c^2)]^2 = R^2

Hence the surface locus at T = 0 has the equation

(1 - k[x]v/c)^2 X^2/(1 - v^2/c^2) + [Y - k[y](v/c)X/sqrt(1 - v^2/c^2)]^2 + [Z - k[z](v/c)X/sqrt(1 - v^2/c^2)]^2 = R^2
To evaluate the spatial volume enclosed within this surface (in terms of the X,Y,Z,T coordinates), it's useful to notice that a space coordinate transformation of the "shear" form

X' = X,   Y' = Y + aX,   Z' = Z + bX

for any constants a and b preserves volume, as is evident from the fact that the determinant of the transformation matrix is unity. (For a simpler example of this, consider a two-dimensional shear
transformation X' = X, Y' = Y + aX, which preserves the areas of plane figures, since the area of a triangle depends only on the altitude and the base, not on the lateral location of the apex.)
Therefore, the volume within the surface described by the above equation equals the volume within the surface described in terms of X',Y',Z' by

(1 - k[x]v/c)^2 X'^2/(1 - v^2/c^2) + Y'^2 + Z'^2 = R^2

By definition k[x] = cos(φ) where φ is the angle between the wave normal and the x axis, so the ratio of the volume of this ellipsoid to the volume of a sphere of radius R is

S'/S = sqrt(1 - v^2/c^2)/(1 - (v/c)cos(φ))
Of course, if φ = 0 this reduces to equation (3). Now, in the previous section of his 1905 paper, Einstein had derived the general form of the ratio of energy densities as

(A'/A)^2 = (1 - (v/c)cos(φ))^2/(1 - v^2/c^2)

which of course reduces to (4) when φ = 0. Combining this with the ratio of volumes, we get Einstein's general ratio between the energies of the light complex in terms of the two systems of reference

E'/E = (1 - (v/c)cos(φ))/sqrt(1 - v^2/c^2)
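As a quick numerical sanity check (a sketch added for illustration, not part of the original discussion), multiplying the general density ratio by the volume ratio does indeed reproduce the single Doppler-style factor (1 - (v/c)cos φ)/sqrt(1 - v^2/c^2):

```python
import math

def energy_ratio(beta, phi):
    """E'/E for the light complex: (A'/A)^2 times S'/S, with beta = v/c."""
    density = (1 - beta * math.cos(phi)) ** 2 / (1 - beta ** 2)     # (A'/A)^2
    volume = math.sqrt(1 - beta ** 2) / (1 - beta * math.cos(phi))  # S'/S
    return density * volume
```

For any beta = v/c and any angle phi this product equals (1 - beta*cos(phi))/sqrt(1 - beta^2).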
This of course is exactly the same as the ratio of the frequencies ν'/ν of the light complex in terms of the two systems of reference. Einstein called this coincidence "remarkable", although he
didn't explicitly link it to the fundamental relation E = hν that he had proposed for light quanta just weeks before in his 1905 paper on the photo-electric effect. Obviously that relation would have
been incompatible with relativity if energy and frequency did not transform in exact proportion to each other. From this point of view, establishing the identity between the laws of transformation
for the frequency and the energy of a "light complex" was one of the most significant results of Einstein's relativity paper. It's interesting that it emerged from consideration of the Aarau
question, i.e., from examining the characteristics of a light wave within a volume moving along with the wave at the speed of light. In his Autobiographical Notes written in 1949 he wrote about his
search for a universal formal principle that he had become convinced was needed (in 1905) to lead us to assured results.
After ten years of reflection such a principle resulted from a paradox upon which I had already hit at the age of sixteen: If I pursue a beam of light with velocity c (velocity of light in a vacuum),
I should observe such a beam of light as an electromagnetic field at rest though spatially oscillating. There seems to be no such thing, however, neither on the basis of experience nor according to
Maxwell's equations... One sees that in this paradox the germ of the special relativity theory is already contained. Today everyone knows, of course, that all attempts to clarify this paradox
satisfactorily were condemned to failure as long as the axiom of the absolute character of time, or of simultaneity, was rooted unrecognized in the unconscious. To recognize clearly this axiom and
its arbitrary character already implies the essentials of the solution of the problem.
As far as I know, the Autobiographical Notes of 1949 was the first time Einstein mentioned this early thought experiment from 1895, and he repeated essentially the same account to Selig (quoted at
the beginning of this article) in 1954. It's interesting that he never mentioned it in any of his many previous recollections describing his path to special relativity. This is strangely reminiscent of Newton's "falling apple" story, which Newton never mentioned prior to the last couple of years of his life, when he apparently described it to several people, including his niece Catherine Barton,
who passed the story along to Voltaire. The most authoritative source is Stukeley, who dined with Newton in 1726 and reported on their after-dinner conversation:
The weather being warm, we went into the garden and drank tea, under shade of some apple trees, only he and myself. Amidst other discourses, he told me, he was just in the same situation, as when
formerly, the notion of gravitation came into his mind. It was occasion'd by the fall of an apple, as he sat in contemplative mood. Why should that apple always descend perpendicularly to the ground,
thought he to himself. Why should it not go sideways or upwards, but constantly to the earth's center.
Conduitt, who married Catherine Barton, gave a similar account, placing the incident in a garden near Newton's mother's house in Lincolnshire in 1666.
Einstein might have mentioned his youthful paradox in the introduction of his 1905 paper, describing his motivations, but apparently by that time he was more impressed by the "asymmetries not inherent in the phenomena". Nevertheless, Section 8 of the paper clearly shows the outcome of a thought process trying to grasp the attributes of light by considering a "light complex" contained within a surface co-moving with the light.
Tell whether the percent change is an increase or a decrease. Then find the percent change, rounded to the nearest percent. Original amount: 45; new amount: 60.
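For reference, percent change is (new − original)/original, so this one works out as:

```python
original, new = 45, 60
change = (new - original) / original * 100   # percent change
direction = "increase" if change > 0 else "decrease"
print(direction, round(change))              # -> increase 33
```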
Physics Forums - View Single Post - what is zero
I think I'll start a new thread for this, but to answer some specific questions.
1. 'omthe' is a typo that should read 'other'
2. No, 0 is defined in C already, you just can't divide by it - check the definition of a field: F is a field if it is an abelian group with identity 0 under the operation +, F omitting 0 is also an abelian group under the operation *, and * distributes over +.
3. In some structure (a ring, usually) we say x divides y (is a divisor of y) if there is some z with x*z=y. So in the ring of integers mod 8, 2 divides 0 in a non-trivial way (obviously x*0=0 is a trivial statement); that is what we mean by zero-divisors (the non-trivial is implicit).
4. When I say it is not a good place to do arithmetic, I mean things like finding roots of ax+b=0 is not as easy as it ought to be, because usually we would say x= -b/a. However, when non-trivial
zero divisors exist this isn't true, as we can no longer divide by a. I mean the multiplicative inverse for 2 does not exist in mod 8 arithmetic.
To convince yourself of what's going on, let's do mod 3 arithmetic: what is 1/2? It is by definition the thing that when multiplied by two gives 1, agreed? So we are seeking a y such that 2y=1 (mod 3). By inspection 2*2=4=1 (mod 3), so 1/2 = 2! Really we ought not to write 1/2 as it is too suggestive, but instead write 2^{-1}.
In cases where x*y=0 for non-zero x and y we cannot say 0/x = y and vice versa - or at least whilst you may write it, it is not valid as a mathematical statement. To see why, consider mod 16 arithmetic: 4*4=0 and 4*8=0, so you cannot define 0/4 - there are two possibilities.
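A brute-force sketch of these examples (the helper name is mine, just for illustration):

```python
def inverse_mod(x, n):
    """Return y with x*y = 1 (mod n), or None if no inverse exists."""
    for y in range(1, n):
        if (x * y) % n == 1:
            return y
    return None

print(inverse_mod(2, 3))    # -> 2, since 2*2 = 4 = 1 (mod 3)
print(inverse_mod(2, 8))    # -> None: 2 is a zero divisor mod 8 (2*4 = 0)
print([(4 * y) % 16 for y in (4, 8)])  # -> [0, 0]: two ways to get 4*y = 0 mod 16
```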
Look out for a new posting.
Posts by Ross
Total # Posts: 74
You have prepared a 400 mL of a .210 M acetate buffer solution with a pH of 4.44. 1. Determine the concentration of both the acetate and acetic acid in the solution. 2. If you made this solution
using solid sodium acetate (MW = 136 g/mol) and liquid acetic acid (17.6 M, referr...
the larger of the two numbers is 8 more than twice the smaller. The sum of the numbers is 10 less than three times the larger. Find the numbers
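For reference, the system this post describes can be solved directly (the symbol names are my own):

```python
from sympy import symbols, Eq, solve

L, S = symbols('L S')
sol = solve([Eq(L, 2*S + 8),          # larger is 8 more than twice the smaller
             Eq(L + S, 3*L - 10)],    # sum is 10 less than three times the larger
            [L, S])
print(sol[L], sol[S])                 # -> 4 -2
```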
a ball tossed to a height of 4 meters rebounds 40% of its previous height. the total distance travelled by the ball by the time it comes to rest is?
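Taking the wording at face value (the initial 4 m ascent is counted; drop that first term if the ball is instead simply dropped from 4 m), the distances form a geometric series:

```python
h, r = 4.0, 0.4
total = h            # upward travel of the initial toss (counted here)
fall = h
while fall > 1e-9:
    total += fall    # ball falls from the current height
    fall *= r        # rebounds to 40% of the previous height
    total += fall    # and rises that far again
print(total)         # -> about 13.33 m, i.e. h + h*(1 + r)/(1 - r) = 40/3
```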
Where do you substitute Kc into the expression?
what type of a drink did not floated or sank?
solve 4x/3x+1 = 3?x+5
i read this book named abduction and my home work is to write a poem about it but the problem is i dont know how to write poem ?
a modified U-tube: the right arm is shorter than the left arm. The open end of the right arm is height d = 10.0 cm above the laboratory bench. The radius throughout the tube is 1.60 cm. Water is
gradually poured into the open end of the left arm until the water begins to flow ...
p = 160/21~7.62
An object is dropped onto the moon (gm = 5 ft/s2). How long does it take to fall from an elevation of 250 ft.?
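This one follows from d = (1/2) g t^2, using the given units of feet:

```python
g, d = 5.0, 250.0          # ft/s^2 and ft, as given
t = (2 * d / g) ** 0.5     # solve d = (1/2)*g*t^2 for t
print(t)                   # -> 10.0 seconds
```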
Which would be most appropriate to measure the mass of my pet panda in? a. kilograms b. centigrams c. Gigagrams d. milligrams its kilograms??????
9th grade
Name the set(s) of numbers to which 1.68 belongs. a. rational numbers b. natural numbers, whole numbers, integers, rational numbers c. rational numbers, irrational numbers d. none of the above my
answer a)rational numbers
Simplify the expression. 17 - 6 × 10 ÷ 2 + 12 a. 67 b. 59 c. -1 my answer: -1
Evaluate the following expression when r = 2 and t = 5. (2r)^t - 2 a. 1022 b. 16 c. 64 d. 62
Evaluate the following expression when r = 2 and t = 5. (2r)^t - 2 a. 1022 b. 16 c. 64 d. 62
10th grade
1,500 workers
10th grade
A computer manufacturer assembled 21,000 computers in October. For the month of November, they expect to double production. If a worker can assemble one computer per day, how many workers will be
needed to complete the job in 28 days? a. 1,500 workers b. 750 workers c. 1,429 w...
It takes 28 minutes for a certain bacteria population to double. If there are 4,241,763 bacteria in this population at 1:00 p.m., which of the following is closest to the number of bacteria in
millions at 4:30 pm on the same day?
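From 1:00 pm to 4:30 pm is 210 minutes, i.e. 210/28 = 7.5 doubling periods, so a quick check gives:

```python
n0 = 4_241_763
minutes = 3.5 * 60              # 1:00 pm to 4:30 pm is 210 minutes
doublings = minutes / 28        # -> 7.5 doubling periods
n = n0 * 2 ** doublings
print(round(n / 1e6))           # -> 768 (million), approximately
```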
The Omega pharmaceutical firm has five salespersons, whom the firm wants to assign to five sales regions. Given their various previous contacts, the salespersons are able to cover the regions in
different amounts of time. The amount of time (days) required by each salesperson ...
1/2 Please tell me if this is correct
A number cube, numbered 1 through 6, is rolled once. What is the probability of rolling an even number that is not prime? 1/6 1/3 1/2 2/3
Please tell me if this is correct 1/2
7. A bag contains 2 red cubes, 3 blue cubes, and 5 green cubes. If a cube is removed and replaced in the bag and another is drawn, what is the probability that both are green? 1/4 3/8 2/5 1/2
Please tell me if this is correct 7/20
Help please
1/16 Please tell me if this is correct
that will be 5 so you are saying that my answer is 5? I though that because it was positive my answers was going to be differents. For example I had x=-1 4x^3+8x-3x-2 and after doing all the problems
my answer was 13, but the x was a negative one that was the reason I though t...
Please help me I have been trying to get the answer and for some reason is not coming up correct can someone please help. evaluate the polynomial for x=1 3x^2-4x+6
Wpw, way to post Ross problems on here.
Planes A and B are both perpendicular to line M. What is the relationship between planes A and B?
Hint: Pay attention to the units of measure. You may have to convert from feet to miles several times in this assignment. You can use 1 mile = 5,280 feet for your conversions. 1. Many people know
that the weight of an object varies on different planets, but did you know that t...
would like to see an example of what the plan should look like to get me started.
4th grade
how would i put these words in sentences:(PLEASE HELP ME) THANK YOU! l. general 2. inaccurate 3. interpret 4. evidence 5. demonstrate 6. relevant I start my tutoring classes in two weeks. I have been
reading much more which should help me.
I THINK a. 4,000 is the answer am I right
hello- The question is round to the digit underlined Add or subtract? 7 is the underlined digit. 7,526-3,861. a. 4,000 b. 5,000 c. 3,000 d. not given
Find the word form 0.007 a. seven hundredths b. seven tenths c. seven thousand d. not given I selected a. seven hundredths
math ? FIND THE VALUE OF 5 - 35,791 a. five hundred b. five thousand c. five thousandths d. not given I selected - b. five thousand correct? or not?
4th grade-MATH
Are there any websites that will help me with my division?
Math Quotations place value
Good evening, I am not sure if Ross wrote the questions for homework correctly however, I am a little confused as to the difference in hundreds place value vs. hundreths place value is there such a
place value as HUNDRETHS? Would HUNDRETHS APPLY TO DECIMAL PLACE VALUE? Thanks ...
thanks much!!!!
please, does anyone know of a site I can go to online to help my son with some simple practice division? THX!!!
how can I learn simple division
The way I solved the first part of the question was: Vo=? d=7.18 m Vf = 7.80m/s a = 9.81m/s^2 So find the initial Vo velocity using: Vf^2 - Vo^2 = 2ad Then once you get Vo, use the same equation to
find the d, when Vf = 0, that's when the height is maximal. Hope that helps...
How do Christians help others?
3rd grade
Good evening ~ I have 2 questions. Kelsey was assigned math homework. He was given 8 word problems to solve. He estimated that it would take him 5 mins. to solve each problem (THAT WOULD TAKE 40
MINS. FOR KELSEY TO COMPLETE HIS MATH HOMEWORK) ? HOW TO WRITE A MULTIPLICATION SE...
3rd grade
Just would like to know if you have any suggestions on how I would help my son tell time better. He really struggles with this. Thankks!
3rd grade
thank you thank you thank you!!!
3rd grade
how do I write the time in three different ways for 3:50
What do we call the numbers that cannot be arranged into 2-row arrays
what is meant by an animal or plant being in the same species
Calculate the solubility of calcium sulfate 0.010 mol/L calcium nitrate at SATP
Calculate the pH of the following aqueous solutions: 1.00 mol/L sulfuric acid. H+ = 1.00 M for the first ionization. The second one is guided by K2: K2 = (H+)(SO4^2-)/(HSO4-). Plugging in K2 as follows: H+ = 1.00 + x, SO4^2- = x, HSO4- = 1.00 - x. Can someone please solve for x?
Strychnine, C21H22N2O2(aq) is a weak base but a powerful poison. Calculate teh pH of a 0.001 mol/L solution of strychnine. The Kb of strychnine is 1.0 x 10^-6 My work: I called Strychnine SN SN + HOH
--> SNH+ + OH- Kb = (SNH+)(OH-)/(SN) (SNH+) = x (OH-) =x (SN) = 0.001-x Pl...
Strychnine, C21H22N2O2(aq) is a weak base but a powerful poison. Calculate teh pH of a 0.001 mol/L solution of strychnine. The Kb of strychnine is 1.0 x 10^-6 My work: I called Strychnine SN SN + HOH
--> SNH+ + OH- Kb = (SNH+)(OH-)/(SN) (SNH+) = x (OH-) =x (SN) = 0.001-x Pl...
Please calculate the pH and [H+] of a 0.30 mol/L solution of butanoic acid,. The Ka of butanoic acid is 1.52 x 10^-5
Wow , thanks you are a really excellent expert! :)
I am still getting the wrong answer. YOu are positive this is the answer? Perhaps my computer system is wrong?
0.003898-- this is H+? Ka = ( )(0.00398)/( ) What would be the concentration of C6H5COO- ?
Thank you! I still have to read over it but for now I beleive I got the jist of it
For the titration of 20.0 mL of 0.1500 mol/L NH3(aq) with 0.1500 mol/L HI(aq)(the titrant), calculate a) the pH before any HI(aq) is added b) the pH at the equivalence point
In a titration, how many millilitres of 0.23 mol/L NaOH(aq) must be added to 11 mL of 0.18 mol/L HI(aq) to reach the equivalence point?
To clean a clogged drain, 26 g of sodium hydroxide is added to water to make 150 mL of solution. What are the pH and pOH values for the solution?
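A textbook (ideal-solution) check, taking the molar mass of NaOH as about 40 g/mol; note the computed pOH comes out negative because the solution is more than 1 M in OH-, so the nominal pH exceeds 14 (in reality, activity effects would matter at this concentration):

```python
import math

mol = 26 / 40.0            # moles of NaOH (molar mass ~ 40 g/mol)
conc = mol / 0.150         # mol/L of OH-
pOH = -math.log10(conc)
pH = 14 - pOH
print(round(pOH, 2), round(pH, 2))   # -> -0.64 14.64
```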
Year 12 can't do year 7 simple math-help
i cant tell the difference .... i cant even do it and im in year 12
Factor completely x to the third power - x, i.e. x^3 - x.
What is the theme of the Medusa myth?
I wanted to know what the molecular geometry of SeBr4 is, because I believe it is a trigonal bipyramid...but I am not sure...please help me.... I thought it was a see-saw shape. See http://www.stolaf.edu
/depts/chemistry/mo/struc/explore.htm You can see SeF4, the shape of SeBr4 is ...
I am researching about why water is blue. I am wondering why is the water in the Caribbean bluer that that of the North Atlantic? Thank you for using the Jiskha Homework Help Forum. This is an
interesting question and here are some sites to explain it: http://www.dartmouth.edu...
making dot arrays
odd numbers
Determinant and symmetric power
Let $V$ be a vector space over some field $k$ and $T \in \mathrm{GL}(V)$. Then we can view $T$ as an element of $\mathrm{GL}(\mathrm{Sym}^k(V))$, where $\mathrm{Sym}^k(V)$ denotes the $k^\mathrm{th}$ symmetric power of $V$, and we denote this induced map by $T_k$. Knowing $\det T$, is there a general formula for $\det T_k$?
rt.representation-theory determinants linear-algebra
1 Answer
We have that $\det T_k$ is a fixed (depending on $n=\dim V$ and $k$ only) power of $\det T$. To see this, as well as getting the power, one can for instance note that $\mathrm{SL}(V)$ is the commutator subgroup of $\mathrm{GL}(V)$ (except for extremely small finite fields, but we can always increase the size of the field) and hence if $\det T=1$ then $\det T_k=1$. We can then write any $T\in\mathrm{GL}(V)$ in the form $DS$, where $S\in\mathrm{SL}(V)$ and $D$ is a diagonal matrix with diagonal entries $(t,1,1,\ldots,1)$. Then $\det((DS)_k)=\det(D_k)\cdot1$, so it suffices to compute $\det(D_k)$. But in the standard basis of $\mathrm{Sym}^kV$, given a basis $e_1,\ldots,e_n$ of $V$, $D_k$ is diagonal and its determinant is $t^R$, where $R=\sum_{0\leq i\leq k}i\,s^{n-1}_{k-i}$. Here $s^{a}_{b}=\dim \mathrm{Sym}^bU$ where $\dim U=a$, which equals $\binom{a+b-1}{b}$.
Thanks a lot! Could you please tell me where I can read about these things? – Brian Sep 27 '10 at 4:38
@Brian, try a text on multilinear algebra, for example "finite dimensional multilinear algebra" by M. Marcus discusses such topics in chapter 2. – Gjergji Zaimi Sep 27 '10 at 5:11
@Gjergji: Thanks a lot! – Brian Sep 27 '10 at 5:34
More precisely, $$\det T_k=(\det T)^N,\qquad N=C_{n-1}^{k-1}.$$ $N$ is a binomial coefficient. – Denis Serre Sep 27 '10 at 5:46
Actually, a density argument would also work (assume that everything is diagonalizable). – Brian Dec 8 '10 at 1:22
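For readers who want to double-check the exponent numerically: in small cases the sum for $R$ above evaluates to $\binom{n+k-1}{k-1}$, and one can build the matrix of $T_k$ in the monomial basis and compare determinants directly. A rough sympy sketch (the helper name and the basis bookkeeping are my own, not from the thread):

```python
import itertools
import random
import sympy as sp

def sym_power_matrix(T, k):
    """Matrix of the induced map on Sym^k(V) in the monomial basis."""
    n = T.shape[0]
    xs = sp.symbols(f'x0:{n}')
    monos = [a for a in itertools.product(range(k + 1), repeat=n) if sum(a) == k]
    # image of the basis vector e_j is sum_i T[i, j] e_i
    img = [sum(T[i, j] * xs[i] for i in range(n)) for j in range(n)]
    M = sp.zeros(len(monos), len(monos))
    for c, alpha in enumerate(monos):
        p = sp.Poly(sp.expand(sp.prod(f**e for f, e in zip(img, alpha))), *xs)
        for r, beta in enumerate(monos):
            M[r, c] = p.coeff_monomial(sp.prod(x**e for x, e in zip(xs, beta)))
    return M

random.seed(1)
n, k = 3, 2
T = sp.Matrix(n, n, lambda i, j: random.randint(-3, 3))
N = sp.binomial(n + k - 1, k - 1)   # closed form the sum for R works out to here
assert sym_power_matrix(T, k).det() == T.det() ** N
```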
Crest Hill Algebra 2 Tutor
Find a Crest Hill Algebra 2 Tutor
...I studied Kempo from 05-09. I taught, from 07-09, an age group from 14-16 year olds. I had four years of experience with this program in High school.
26 Subjects: including algebra 2, reading, chemistry, algebra 1
...I have used statistical software for these courses, but I am most comfortable with use of either M.S. Excel or Minitab. I am not proficient at the use of SPSS, although I have used it somewhat.
13 Subjects: including algebra 2, statistics, algebra 1, geometry
...I look forward to working with your student to help them develop confidence in learning.I have previous experience as a mathematics instructor in a variety of settings including public high
schools, an alternative high school, and a juvenile detention center. I have served as an ACT Preparation ...
16 Subjects: including algebra 2, calculus, geometry, ASVAB
...I have been tutoring honors chemistry, AP chemistry, Intro. and regular chemistry and college level general chemistry 1 and 2 for last several years and have acquired the proficiency of making
difficult aspects easy to understand for the students.There are two main aspects in learning chemistry -...
23 Subjects: including algebra 2, chemistry, geometry, biology
I am a former family doctor, who on the side has always liked to teach. Teaching patients is a large part of primary care, as I sought to increase patients' understanding and motivation. It was
also important in truly informing them of their options so they could make an intelligent decision in treatment plan.
17 Subjects: including algebra 2, chemistry, statistics, reading | {"url":"http://www.purplemath.com/crest_hill_il_algebra_2_tutors.php","timestamp":"2014-04-17T13:19:21Z","content_type":null,"content_length":"23796","record_id":"<urn:uuid:e39b1edf-4a1f-45cc-9e01-2b2b7f6faefe>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prime factorization of n+1
If $n=\prod_{i=1}^{k} p_i^{e_i}$ is the prime factorization of an integer $n$.
Is there a quick way to find the prime factorization of $n+1$?
Or the only way to do it is recalculating the whole factorization?
Any references and/or articles on this problem?
nt.number-theory prime-numbers
A lower bound on the difficulty: if it were too easy to do this, we could do it twice and either prove or disprove the twin prime conjecture. – Qiaochu Yuan Feb 10 '11 at 12:40
One special case not mentioned so far: With the factorization of $n$ you can determine if $n + 1$ is prime or not, see en.wikipedia.org/wiki/Lucas_primality_test (This is significantly faster than finding the factorization if $n+1$ is not a prime.) – Tapio Rajala Feb 10 '11 at 14:23
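The Lucas test mentioned here certifies primality of $n+1$ from the prime factors of $n$: if some $a$ satisfies $a^{n}\equiv 1 \pmod{n+1}$ while $a^{n/q}\not\equiv 1$ for every prime $q\mid n$, then $n+1$ is prime. A minimal sketch (the function name and the fixed witness count are my own choices):

```python
import random

def lucas_prime(np1, prime_factors_of_n):
    """Lucas primality test for np1 = n + 1, given the distinct prime
    factors of n. Returns True (proven prime), False (proven composite),
    or None (no witness found)."""
    n = np1 - 1
    for _ in range(200):
        a = random.randrange(2, np1 - 1)
        if pow(a, n, np1) != 1:
            return False                      # Fermat condition fails
        if all(pow(a, n // q, np1) != 1 for q in prime_factors_of_n):
            return True                       # a has order n, so np1 is prime
    return None

print(lucas_prime(7, [2, 3]))     # -> True  (6 = 2*3)
print(lucas_prime(9, [2]))        # -> False (8 = 2^3)
```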
A trivial observation: $n+1$ and $n$ have no prime factors in common. – Michael Lugo Feb 10 '11 at 21:54
6 Answers
Check out the literature on Fermat numbers, $2^{2^n}+1$. If factoring $m$ helped you factor $m+1$, these numbers would be a cinch, but they're not.

And Fermat numbers are just one such example. Other examples with the factorization of $m$ being known and that of $m+1$ being hard include primorial $n\#+1$, factorial $n!+1$, Proth $k\cdot 2^n+1$, generalized Fermat $b^{2^n}+1$, ... Actually, I might start to cry if the factorization of these numbers turned out to be trivial. – Tapio Rajala Feb 10 '11 at
I don't think there is a way to do so, because then factoring large numbers would be trivial. Assuming 'quick' means polynomial time, we can build up a series of polynomial-time computations, starting from a given number whose factorization is known. Since each subroutine runs in polynomial time, and all the main program does is call subroutines to factor consecutive numbers, we'd end up with a polynomial-time algorithm to factor integers.

EDIT: the above logic isn't formal, but I'm sure somebody else here can do a better job than me at making it rigorous.
"Polynomial time" in this context means polynomial in the length of $n$, which is to say polynomial in $\log n$. So to carry out this idea, you'd have to find an easily factored number pretty close to $n$ first, and I'm not sure there's a guarantee such a thing exists. You're right that factoring $n$ doesn't much help you to factor $n+1$, but I don't think the supporting argument is right. – Gerry Myerson Feb 10 '11 at 11:24
Thanks for the upvotes people, I'm only 15. – user12877 Feb 10 '11 at 13:47
A remark regarding the supporting argument, merely rephrasing azome's answer a bit, which I believe at least under standard conjectures on gaps between primes makes it sound. There (conjecturally) always exists a prime below $n$ at distance polynomial in $\log n$. So, one could start testing for primes below $n$, using for each test something polynomial in $\log n$, polynomially in $\log n$ often, so still polynomial. Having found a prime, factorization of it is of course for free, and then go back up the polynomially in $\log n$ many steps to $n$. – quid Feb 10 '11 at 18:24
To elaborate on azorne's answer: we can do it in a way reminiscent of how we can take $n$'th powers modulo a number in about $\log n$ time.
Assume that there is a fast way to do what you want, and that we want to factor $n$. Then either $n$ or $n-1$ is divisible by 2. If $n-1$ is divisible by 2 then this reduces down to factor $
(n-1)/2$ + one operation of knowing the factorization of $n-1$ to obtain a factorization of $n$. If $n$ is divisible by two we can just divide by two to reduce the factorization to the
factorization of $n/2$.
Thus we see that factoring $n$ takes at most the time to factor $[n/2]$, plus one step of passing from the factorization of $n-1$ to a factorization of $n$.

If we do this in $\log_2 n$ steps we will come down to trivial numbers to factor, and thus we see that the time it takes to factor a number $n$ will be at most $\log_2 n \times$ "the
maximum time it takes to go from knowing the factorization of $m$ to factor $m+1$ for $m \leq n$". This can certainly not be fast (e.g. polynomial time in $\log n$) since it would give a
polynomial-time algorithm (in $\log n$) to factor an arbitrary number $n$. No such algorithm is known, of course. The number field sieve is expected to be the fastest known algorithm discovered yet, although its running time is only estimated heuristically and has not been proven rigorously.
The general philosophy is that multiplication and addition do not "see" each other. So the fact that one knows the multiplicative structure of n does not say anything about the
multiplicative structure of n+1. There are several demonstrations of this philosophy.
One of them concerns twin primes: a well-known heuristic, first exploited by Cramér, says that a random number $n$ is prime with probability $1/\log n$ (this is supported by the Prime
Number Theorem). However, if we assume that $n$ is a prime number, then it is believed that the probability of $n+2$ being a prime number is still $1/\log n$, provided that there no local
obstructions to this (for example, if $n\equiv1\pmod 3$, then this is trivially false). In other words, $n+2$ does not "know" whether $n$ is prime or not. Indeed, a quantitative form of the
twin prime conjecture states that
$$|\{n\le x:n~{\rm and}~n+2~{\rm are~prime~numbers}\}|\sim\frac{cx}{\log^2x},$$
where $c$ is some constant which arises due to the local obstructions mentioned above.
A second demonstration of this philosophy is the Erdős-Szemerédi conjecture which, in its simplest form, states that if $A$ is a set of integers and we set
$$A+A=\{a+b:a,b\in A\}\quad{\rm and}\quad A\cdot A=\{a\cdot b:a,b\in A\},$$
then
$$\max\{|A+A|,|A\cdot A|\}\ge c_\epsilon|A|^{2-\epsilon}$$
for every $\epsilon>0$, where $c_\epsilon$ is some constant that depends only on $\epsilon$. Roughly, this conjecture says that $A$ cannot have both additive and multiplicative structure,
which would reduce the cardinality of $A+A$ and $A\cdot A$.
Here is an elaboration on the idea. Suppose we knew the prime factorization of many numbers near n. Could we use that information in factoring n?
The one thing we can say: if k is relatively prime to (n+k), then none of the prime factors of (n+k) can be factors of n. Since there are (on average) roughly log(log n) distinct prime
factors for (n+k) for small k, one would not be able to elimnate many of the pi(sqrt(n)) candidates for the smallest prime factor of n, unless n is of a special form like a Mersenne or
Fermat number, which has its own theory for factorization.
up vote 2 So, apart from elimnating O(log(log(n))) prime factors from consideration (or providing a small factor which could be found quickly with trial factorization), knowing the prime
down vote factorization of many numbers near n itself is not likely to help. Even in the special case that n is one away from a power or a small multiple of a power still leaves a lot of work to be
Gerhard "Ask Me About System Design" Paseman, 2011.02.10
There are two questions asked:
Q1. Does the prime factorization of $n$ give a quick way to find the prime factorization of $n+1$?
A1.No it does not (as noted in other answers). Knowing $n$ is prime is of no use.
Q2. Is it ever of any use, or do you always just have to start from scratch? (as I will choose to rephrase the question.)

A2. It is sometimes of use, but not usually. When $n$ is a power of a smaller number there may be some help. Since the case of Fermat numbers was raised, I will comment that factoring numbers
of the form $2^e+1$ is somewhat easier (relative to the size) than numbers of the form $2^e+3$. Given $n=2^e$ we know that $2^{e/f}+1$ is a factor of $n+1$ where $f$ is any odd divisor of
$e$. So that is a start to factoring $n+1$. In the case $n=2^{2^e},$ $n+1$ might be prime. To test primality one has Pépin's test. The numbers involved are so huge that this is not practical
very far out. ALSO it is known that any candidate factor must be of the form $k2^e+1$ so in attacking $2^{2^{20}}+1$ (known composite, no factors known) we have already ruled out 99.9999% of
the possible factors.
I would say "not much use", as knowing n prime and sufficiently large gives that n+1 is even. Otherwise, I agree with your post. For more on the subject, I recommend Hans Riesel's book on computer methods for factorization and primality testing. Gerhard "Ask Me About System Design" Paseman, 2011.02.24 – Gerhard Paseman Feb 24 '11 at 18:25
Control points
One possible way of defining parametric curves:
Define points near which the curve should pass:
p1 = (x1,y1), p2 = (x2,y2)...
For each point pi choose a parameter value ti.
Simplest choice: 1,2,3,... (uniform parameterization).
For each point specify a blending function Bi(t),
which defines how much it contributes to the value | {"url":"http://www.mrl.nyu.edu/~dzorin/ug-graphics/lectures/lecture16/tsld010.htm","timestamp":"2014-04-20T10:46:19Z","content_type":null,"content_length":"1456","record_id":"<urn:uuid:136dac18-7d37-4786-a0c7-0b9d00470f5d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
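As a concrete (and deliberately simple) illustration of the blending-function idea, here is a sketch using triangular "hat" blending functions with the uniform parameterization t_i = 1, 2, 3, ... The choice of hat functions is mine, made so that the curve passes exactly through each point at its parameter value:

```python
def hat(t, ti):
    # triangular blending function B_i(t): 1 at t = ti, 0 beyond distance 1
    return max(0.0, 1.0 - abs(t - ti))

def curve_point(t, pts):
    # pts: control points p_i = (x_i, y_i) with uniform parameters t_i = 1, 2, 3, ...
    w = [hat(t, i + 1) for i in range(len(pts))]
    x = sum(wi * p[0] for wi, p in zip(w, pts))
    y = sum(wi * p[1] for wi, p in zip(w, pts))
    return (x, y)

pts = [(0, 0), (1, 2), (3, 1)]
print(curve_point(2.0, pts))   # -> (1.0, 2.0): the curve passes through p2 at t = 2
print(curve_point(1.5, pts))   # -> (0.5, 1.0): midway, a blend of p1 and p2
```

Smoother blending functions (e.g. B-spline basis functions) trade exact interpolation for smoothness, which is the usual design choice in practice.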
Geometry problems
1. November 23rd 2012, 07:48 PM #1
Geometry problems
Last edited by chubakueno; November 23rd 2012 at 07:52 PM.
2. November 24th 2012, 03:59 AM #2
Re: Geometry problems
Could you resubmit, including a diagram, with labels for the various points so that we can refer to lines and angles ?
3. November 24th 2012, 05:31 AM #3
4. November 24th 2012, 11:08 AM #4
Re: Geometry problems
1. You are dealing with 2 right triangles: $\Delta(ABD)$ and $\Delta(ACE)$.
So the point A is located on a half circle (circle of Thales) over BD and point A is located on a half circle (circle of Thales) over CE.
2. You certainly have made an exact drawing(?). See attachment.
a) The point A is the intersection of the green circle and the circle of Thales over BD. The center of the green circle has the coordinates $\left(\frac32 , \frac32 \right)$ with the radius $\frac32 \sqrt{2}$ (why?)
b) Draw the line AC. Construct a right angle in A on AC. The leg of this right angle intersects the prolonged line BD in E.
3. I've done the construction in a coordinate system so you can read the length of x.
4. The line segment BD is divided harmonically by C (inner point) and E (outer point). According to the harmonic partition you'll get the proportion:
With your values:
Solve for x.
5. November 24th 2012, 04:00 PM #5
Re: Geometry problems
No, it was just illustrative, but that's the way the problem was given to me. Thank you!
6. November 25th 2012, 01:41 AM #6
Re: Geometry problems
My solution is rather more workmanlike !
Start with the right angled triangle ABD and let the side AB = a.
$a^{2}+AD^{2}=5^{2},$ so $AD = \sqrt{25-a^{2}}.$
Now let the angle ACD = $\theta,$ and use the sine rule in each of the triangles ABC and ACD.
From ACD,
$\frac{2}{\sin 45}=\frac{\sqrt{25-a^{2}}}{\sin \theta},$
$\sin \theta = \frac{\sin 45\sqrt{25-a^{2}}}{2}.$
From ABC,
$\frac{a}{\sin(180-\theta)}=\frac{3}{\sin 45},$
$\sin \theta=\frac{a\sin 45}{3}.$
Equate the two expressions for $\sin \theta,$ simplify, and you find that $a=\frac{15}{\sqrt{13}},$
and from which it follows that AD = $10/\sqrt{13}.$
Call the angle ABD $\phi,$ then from the triangle ABD, $\tan \phi=\frac{10/\sqrt{13}}{15/\sqrt{13}}=\frac{2}{3},$ and from that you can deduce that $\sin \phi=2/\sqrt{13}$ and $\cos \phi=3/\sqrt{13}.$
Now, finally, turn your attention to the triangle at the other end, ADE.
The angle AED will equal $45 - \phi,$ so making use of the sine rule again,
$\frac{AD}{\sin (45-\phi)}=\frac{x}{\sin 45},$
$x = \frac{AD\sin 45}{\sin (45 - \phi)}=\frac{10\sin 45}{\sqrt{13}(\sin 45 \cos \phi-\cos 45 \sin \phi)}.$
$\sin 45 = \cos 45,$ so after cancelling and then substituting for $\cos \phi$ and $\sin \phi,$
we finish up with
$x = \frac{10}{\sqrt{13}(3/\sqrt{13}-2/\sqrt{13})}=10.$
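A quick numerical sanity check of the chain above (just replaying the final steps in code; the variable names are mine):

```python
from math import sqrt, sin, atan, radians

a = 15 / sqrt(13)      # AB, from equating the two sine-rule expressions
AD = 10 / sqrt(13)
phi = atan(AD / a)     # angle ABD, since tan(phi) = AD / AB = 2/3
x = AD * sin(radians(45)) / sin(radians(45) - phi)
print(x)               # 10.0 (up to floating point), matching the answer
```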
| {"url":"http://mathhelpforum.com/geometry/208280-geometry-problems.html","timestamp":"2014-04-19T11:13:00Z","content_type":null,"content_length":"53666","record_id":"<urn:uuid:92ac687b-04b5-41fe-afab-c014f1b62b41>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
A-level Mathematics/OCR/C2/Logarithms and Exponentials
Operations With Exponential Function
An exponential function is a function where a constant base (b) is raised to a variable.
Firstly, $b^x \times b^{2x}$ is $b^{\left(x + 2x\right)}\,$ which is $b^{3x}\,$. So when you multiply a base by the same base you add the variables. To clarify, here is an example with numbers:
| $x$ | $2^x$ | $2^{2x}$ | $2^x \times 2^{2x}$ | $2^{3x}$ |
| 1 | 2 | 4 | 8 | 8 |
| 2 | 4 | 16 | 64 | 64 |
Secondly $\frac{b^{2x}}{b^y}$ is $b^{\left( 2x - y \right)}$ (also $b \neq 0$). So when a base is divided by the same base you subtract the variables.
Here is an example with numbers: $\frac{2^4}{2^2}=\frac{16}{4}=4=2^2$.
Base raised to two powers
Thirdly $\left(b^{2x}\right)^{3x}$ is $b^{\left(2x\right) \times \left(3x\right)}$ which is $b^{6x^2}$. So when a base with a variable is raised to a variable you multiply the variables. Here is
another example with numbers: (when x = 1) $\left(2^2\right)^3=4^3=64=2^6$.
Multiple bases
Fourthly, when $a^2 \times b^2 = ab \times ab$ it is the same as $\left(ab\right)^2$. Here is an example with numbers: $2^2 \times 3^2 = 36 = 6^2$. There is a similar situation with division: $\left(\frac{a}{b}\right)^2 = \frac{a}{b} \times \frac{a}{b} = \frac{a^2}{b^2}$. So when you multiply or divide two different bases raised to the same variable you can multiply or divide them first and then raise the result to the variable.
Fractional exponents
The last case is when x is presented as a fraction; you can make a root function, for example $b^\frac{1}{x}$ becomes $\pm \sqrt[x] b$. However it is customary to only use the positive root, and so $b^\frac{1}{x}$ is defined as $\sqrt[x] b$. Another similar case is when the fraction has a constant (designated as c) other than 1 in the numerator, for example $b^{\frac{3}{x}} = \left( \sqrt[x] b \right)^3$, so $b^{\frac{c}{x}} = \left( \sqrt[x] b \right)^c$.
The Laws of Exponents
The rules that have been suggested above are known as the laws of exponents and can be written as:
1. $b^xb^y = b^{x+y}\,$
2. $\frac{b^x}{b^y} = b^{x-y}$
3. $\left(b^x\right)^y = b^{xy}$
4. $a^n b^n = \left(ab\right)^n\,$
5. $\left(\frac{a}{b}\right)^n = \frac{a^n}{b^n}$
6. $b^{-n}=\frac{1}{b^n}$
7. $b^ \frac {c}{x} = \left( \sqrt[x] b \right)^c$ where c is a constant
8. $b^1=b\,$
9. $b^0=1\,$
Solving exponential equations
In order to solve an exponential equation you need to make sure that all the bases are the same. Then you can remove the base and solve for the variable. Here is an example:
Solve for x. $2^{\left(x-1\right)} = 16\,$
Now we convert 16 to a base 2 raised to a number.
$2^{\left(x-1\right)} = 2^4\,$
Now we can remove the base. So we have:
$x-1 = 4\,$
Finally solve for x.
$x = 5\,$
Graphing an Exponential Function
When you graph an exponential function you use the same methods as with a regular function. There is a graph below that you can look at.
Logarithmic Functions
In mathematics you can find the inverse of an exponential function by switching x and y around: $y = b^x\,$ becomes $x = b^y\,$. The problem arises on how to find the value of y. The logarithmic
function solved this problem. All conversions of logarithmic function into an exponential function follow the same pattern: $x = b^y\,$ becomes $y = \log_b x\,$. If a log is given without a written b
then b=10. Also with logarithmic functions, b > 0 and $b \neq 1$. There are 2 cases where the log is equal to X: $\log_b b^X = X$ and $b^{\log_b X} = X$.
Laws of Logarithmic Functions
When X and Y are positive.
β’ $\log_bXY = \log_bX + \log_bY\,$
β’ $\log_b \frac{X}{Y} = \log_bX - \log_bY\,$
β’ $\log_b X^k = k \log_bX\,$
Change of Base
When a and b are positive real numbers not equal to 1 (and x is positive), you can write $\log_a x$ as $\frac{ \log_b x}{ \log_b a}$. This works for the natural log as well. Here is an example:
$\log_2 8 = \frac { \log 8}{ \log 2} = \frac {.9}{.3} = 3\,$ now check $2^3 = 8\,$
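The change-of-base formula (and a few of the exponent laws from earlier) can also be spot-checked numerically — a throwaway sketch, not part of the original article:

```python
from math import log

# Change of base: log_2(8) from base-10 logs, as in the example above.
print(log(8, 10) / log(2, 10))      # 3.0 (up to floating point)

# Spot-check some laws of exponents with b = 2, x = 3, y = 5:
b, x, y = 2, 3, 5
assert b**x * b**y == b**(x + y)    # law 1
assert b**x / b**y == b**(x - y)    # law 2
assert (b**x)**y == b**(x * y)      # law 3
assert b**-y == 1 / b**y            # law 6
```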
Last modified on 13 November 2010, at 14:42 | {"url":"http://en.m.wikibooks.org/wiki/A-level_Mathematics/OCR/C2/Logarithms_and_Exponentials","timestamp":"2014-04-20T01:33:52Z","content_type":null,"content_length":"26663","record_id":"<urn:uuid:edb52099-48d8-42f3-a513-98c83612fef3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equation for an oblique cone?
For my own purposes I have been trying to write a program that draws a radial gradient. In computer graphics, a radial gradient blends a colour from the edge of a circle to a focus point within.
When the focus is at the centre of the circle, it is simple enough to treat the gradient as a right circular cone with a height of 1.0. Given a pixel at position x and y, z will represent the
ratio with which to interpolate between colours. The equation I used is:
${x^2 \over a^2} + {y^2 \over b^2} = {z^2 \over c^2}$
A gradient with a non-centred focus is an oblique cone, but I am having a difficult time finding the equation used to represent an oblique cone. The extent of my math education is single variable
calculus about a dozen years ago, so I am not up to the task of trying to derive the equation myself. Does such an equation exist? Or is an oblique cone simply a right cone whose base is an
inclined section? Any help would be appreciated.
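One practical workaround I have seen sketched (hypothetical code below, not from any particular graphics library): cast a ray from the focus through the pixel, intersect it with the circle, and take the ratio of distances as the interpolation value. That avoids needing the oblique cone equation at all:

```python
from math import sqrt, hypot

def radial_t(px, py, cx, cy, r, fx, fy):
    # Gradient parameter for pixel (px, py): 0 at the focus (fx, fy),
    # 1 on the circle of radius r centered at (cx, cy). Assumes the
    # focus lies strictly inside the circle.
    dx, dy = px - fx, py - fy
    dist = hypot(dx, dy)
    if dist == 0.0:
        return 0.0
    ux, uy = dx / dist, dy / dist          # unit ray from focus toward pixel
    ex, ey = fx - cx, fy - cy              # focus relative to the center
    b = ux * ex + uy * ey
    c = ex * ex + ey * ey - r * r          # < 0 because the focus is inside
    tau = -b + sqrt(b * b - c)             # positive root: distance focus->rim
    return min(dist / tau, 1.0)            # clamp pixels outside the circle

# Focus offset to the left of a radius-10 circle centered at the origin:
print(radial_t(-10, 0, 0, 0, 10, -5, 0))   # 1.0 (on the circle)
print(radial_t(-5, 0, 0, 0, 10, -5, 0))    # 0.0 (at the focus)
```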
Thank you. | {"url":"http://mathhelpforum.com/geometry/202428-equation-oblique-cone.html","timestamp":"2014-04-20T23:52:52Z","content_type":null,"content_length":"38798","record_id":"<urn:uuid:0aff71cc-f137-43de-9e71-cbf712147fcf>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] nonlinear optimisation with constraints
Sebastian Walter sebastian.walter@gmail....
Mon Jun 22 06:54:31 CDT 2009
2009/6/22 Ernest Adrogué <eadrogue@gmx.net>:
> Hi Sebastian,
> 22/06/09 @ 09:57 (+0200), thus spake Sebastian Walter:
>> are you sure you can't reformulate the problem?
> Another approach would be to try to solve the system of
> equations resulting from equating the gradient to zero.
> Such equations are defined for all x. I have already tried
> that with fsolve(), but it only seems to find the obvious,
> useless solution of x=0. I was going to try with a
> Newton-Raphson alorithm, but since this would require the
> hessian matrix to be calculated, I'm leaving this option
> as a last resort :)
Ermmm, I don't quite get it. You have an NLP with linear equality
constraints and box constraints.
Of course you could write down the Lagrangian for that and define an
algorithm that satisfies the first and second order optimality
conditions. But that is not going to be easy, even if you have the exact hessian:
you'll need some globalization strategy (linesearch, trust-region,...)
to guarantee global convergence
and implement something like projected gradients so you stay within
the box-constraints.
I guess it will be easier to use an existing algorithm...
And I just had a look at fmin_l_bfgs_b: how did you set the equality
constraints for this algorithm. It seems to me that this is an
unconstrained optimization algorithm which is worthless if you have a
constrained NLP.
To compute the Hessian you can always use an AD tool. There are
several available in Python.
My biased favourite one being pyadolc (
http://github.com/b45ch1/pyadolc ) which is slowly approaching version
>> maybe you should try an interior point method. By definition, all
>> iterates will be feasible.
>> There is a python wrapper for IPOPT out there. It's called pyipopt. It
>> worked reasonably well when I tried it.
>> OPENOPT also interfaces to IPOPT as far as I know, but I have never
>> used that interface.
> Thanks, this looks interesting. I'm going to check out
> this pyipopt.
> --
> Ernest
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2009-June/021566.html","timestamp":"2014-04-19T09:45:29Z","content_type":null,"content_length":"5347","record_id":"<urn:uuid:5016fcf6-a025-406f-8113-f578715b610d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fermats criteria - Correct?
September 15th 2008, 06:26 AM #1
Fermats criteria - Correct?
Fermat's criterion:
If $f$ attains a (local) extreme value at $x_0$ and is differentiable at $x_0$, then $f'(x_0) = 0$
Fermat showed this in the 17th century.
THEOREM: $f: R \longrightarrow R$, $x_0 \in D_f$
If $f$ attains an extreme value at $x_0$ and $f$ is differentiable at $x_0$, then $f'(x_0) = 0$
PROOF: Look at $\frac{\triangle f}{\triangle x} \ = \ \frac{f(x) - f(x_0)}{x - x_0}$
If $x_0$ is a local maximum: $f(x) \leq f(x_0)$ for all $x$ in a neighbourhood of $x_0$.
For $x > x_0$ :
$x - x_0 > 0$
and $\frac{\triangle f}{\triangle x} = \frac{f(x) - f(x_0)}{x - x_0} \leq 0 \ \Rightarrow \ \lim_{\triangle x \rightarrow 0+} \frac{\triangle f}{\triangle x} = f'(x_0) \leq 0$
For $x < x_0$:
$x - x_0 < 0$ and $\frac{\triangle f}{\triangle x} = \frac{f(x) - f(x_0)}{x - x_0} \geq 0 \Rightarrow \lim_{\triangle x \rightarrow 0-} \frac{\triangle f}{\triangle x} = f'(x_0) \geq 0$
Together, $f'(x_0) \leq 0$ and $f'(x_0) \geq 0$ force $f'(x_0) = 0$. (A local minimum is handled analogously.)
This about $f'(x_0) = 0$ is such an important quality that it has its own name:
DEFINITION: $f: R \longrightarrow \ R , x_0 \in \ R$
$x_0$ is called a STATIONARY POINT TO $f$, if $f$ can be differentiated in $x_0$ and $f'(x_0) = 0$
A STATIONARY POINT $x_0$ that is NOT an extreme point is called a SADELPUNKT (Swedish for "saddle point").
Example) $f(x) = x^3$ has a SADELPUNKT (terasspunkt, "terrace point") at $x=0$
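A quick numerical illustration of that example (throwaway code, central differences):

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**3
print(deriv(f, 0.0))     # ~0: x = 0 is a stationary point of x^3 ...
print(f(-0.1), f(0.1))   # ... but f(-0.1) < 0 < f(0.1), so it is no extremum
```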
In English, such a point is usually called a point of inflection (or you can spell it inflexion if you prefer).
The term saddle point is used for a point where a function of more than one variable has a stationary point that is not a local extremum. For a function of two variables, this usually means that
some of the cross-sections through that point will have a local minimum there, and others will have a local maximum.
September 15th 2008, 08:06 AM #2 | {"url":"http://mathhelpforum.com/calculus/49154-fermats-criteria-correct.html","timestamp":"2014-04-18T01:22:50Z","content_type":null,"content_length":"40475","record_id":"<urn:uuid:ee72603c-14c3-47c4-8b37-b15d8041ecfe>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
Look at the graph. What type of population growth does this graph represent? A. Limited growth B. Logistic growth C. Exponential growth D. J-shaped growth
• one year ago
| {"url":"http://openstudy.com/updates/50a52587e4b0329300a8d04c","timestamp":"2014-04-18T13:47:52Z","content_type":null,"content_length":"35872","record_id":"<urn:uuid:3c37d425-6f9d-4101-9254-df35519cc874>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Essington Geometry Tutor
Find an Essington Geometry Tutor
...I would be happy to share that love with any student if they wished. While tutoring French, I focus on drawing parallels between French and other languages, particularly English, to enhance the
retention of meaning. I primarily focus on increasing the ability to communicate and confidence in one's abilities to do so.
33 Subjects: including geometry, English, French, physics
...I specialize in math and science classes from algebra up to calculus and physics. My personal challenge for each lesson is making sure to close the students' "concept gap": often, students do
not have trouble with the process of problem solving, but understanding what the problem is. Every sess...
9 Subjects: including geometry, calculus, physics, algebra 1
...From my therapist background, I work from a person-centered approach, incorporating knowledge from previous courses taken. I am available most weeknights and weekends, sometimes as early as 5.
All lessons are in person, and won't be conducted via internet/skype, etc.
11 Subjects: including geometry, reading, algebra 1, grammar
...I look forward to meeting you and will be happy to answer any questions you may have!I have played volleyball for the last ten years, starting my freshman year of high school. For those four
years of high school I played club ball as well. I love it and play whenever possible.
10 Subjects: including geometry, algebra 1, ASVAB, logic
...I focus on identifying common errors and mistakes made by the student, pointing them out, and thoroughly explaining the right concept and how it comes about. I have a great passion for the combination
of numbers and letters, and I was a certified math tutor who focused on algebra, prealgebra, precalculu...
28 Subjects: including geometry, chemistry, biology, algebra 1 | {"url":"http://www.purplemath.com/Essington_geometry_tutors.php","timestamp":"2014-04-20T21:09:57Z","content_type":null,"content_length":"24014","record_id":"<urn:uuid:59286ed4-1977-4832-be0f-6bbaf35e6945>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Matheology Β§ 214
Date: Feb 11, 2013 3:39 AM
Author: mueckenh@rz.fh-augsburg.de
Subject: Re: Matheology Β§ 214
On 10 Feb., 23:59, fom <fomJ...@nyms.net> wrote:
> On 2/10/2013 3:55 PM, Virgil wrote:
> >>> Please explain "existing set".
> >> An existing set is a set that is finite or potentially infinite.
> > That would require all of them to already exist, implying that no new
> > ones could ever be created, or invented, or discovered.
> > Thus in WMYTHEOLOGY there can never be anything new.
> What would be the consequence of that invariance?
> Every potentially infinite set already exists.
Who said so?
I said if existing, then finite or pot infinite.
Now you return if pot infinite then existing.
Try to understand: A ==> B does not imply B ==> A.
Then you may go on to learn logic step by step, but not before
understanding this (small step for mankind, but obviously big step for
> Thus, potential infinity is immanent infinity.
> This is Cantor's argument.
Yes he made the same step. And his followers gladly accepted it. He
exchanged quantifiers on his "extended integers":
"For every integer n, there exists integer m: m >= n"
"There exists integer m, such that for every integer n: m >= n."
No reason to be proud about "understanding" that.
Regards, WM | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8300359","timestamp":"2014-04-21T08:00:42Z","content_type":null,"content_length":"2705","record_id":"<urn:uuid:9a90e913-8a59-4d3b-b47d-6a98147ca58e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the difference between numeric integration and Riemann sums?
January 18th 2009, 11:23 AM
What is the difference between numeric integration and Riemann sums?
I am a student in AP Calculus BC and I have been wondering this question for a while. I am doing the course online, and one lesson was called "Riemann Sums". The next lesson was "Numeric
Integration".... and they were very similar. Why are they separated like this? Why wouldn't riemann sums just be included in the numeric lesson?
Are riemann sums a type of numeric integration? If not, what is the difference between riemann sums and numeric integration?
January 18th 2009, 11:30 AM
I am a student in AP Calculus BC and I have been wondering this question for a while. I am doing the course online, and one lesson was called "Riemann Sums". The next lesson was "Numeric
Integration".... and they were very similar. Why are they separated like this? Why wouldn't riemann sums just be included in the numeric lesson?
Are riemann sums a type of numeric integration? If not, what is the difference between riemann sums and numeric integration?
Riemann sums are a type of numerical integration, but you'd need a lot of function evaluations (i.e. rectangles) to get a fairly accurate answer. However, if you approximate the area under a curve,
say with trapezoids (the trapezoid rule) or pieces of parabolas (Simpson's rule), then with the same number of function evaluations you can get a far more accurate answer, or use fewer function
evaluations to get an answer accurate to a certain tolerance.
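To make the accuracy gap concrete, here is a throwaway comparison on $\int_0^1 x^2\,dx = 1/3$ with roughly the same number of function evaluations (my own code, not from the course):

```python
def riemann(f, a, b, n):
    # Left-endpoint Riemann sum with n rectangles.
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

def trapezoid(f, a, b, n):
    # Trapezoid rule on the same n subintervals (n + 1 evaluations).
    h = (b - a) / n
    return (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2) * h

f = lambda x: x * x                           # exact integral on [0, 1] is 1/3
print(abs(riemann(f, 0, 1, 100) - 1 / 3))     # ~5e-3
print(abs(trapezoid(f, 0, 1, 100) - 1 / 3))   # ~1.7e-5, far more accurate
```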
January 18th 2009, 12:18 PM
OK, thats what I was hoping the answer was! Thanks | {"url":"http://mathhelpforum.com/calculus/68726-what-difference-benween-numeric-integration-riemann-sums-print.html","timestamp":"2014-04-18T12:04:13Z","content_type":null,"content_length":"5563","record_id":"<urn:uuid:b966cc12-db63-4999-b607-ecc00d1b5f5b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
FIGURE 1
Figure 1
Kummer Landfill and the proposed location of
the highschool
Estimating VOC Emissions
University of Minnesota (U of MN) staff estimated emissions rates from the passive vents at the Kummer landfill for 18 VOCs (Maier and Tam 1994). In order to estimate VOC emissions from the passive
vents it is necessary to estimate flow rates through the vents as well as the concentrations of VOCs in the landfill gas itself. U of MN staff estimated the flow rates based on conservative and
non-site specific assumptions. VOC concentrations used to estimate emissions were based on a one time sampling event of Kummer landfill gas conducted in August 1992.
In order to provide a better estimate of emissions from the landfill, MDH applied flow rates measured from the passive vents on the Kummer landfill. MDH assumed a flow rate of 791 liter per minute (L
/min) which was the average flow rate taken from the 23 passive vents on May 26, 1993. These flow rates were then combined with NMOC landfill gas estimates used by the U of MN for estimating total
VOC emissions. The emission rates for the four VOCs of highest concern are found on Table 1.
Table 1.
Estimated Emissions for Four VOCs Using the U of MN Method with Site Specific Flow Rates from
the Passive Vents (Maier and Tam 1994 and MPCA 1993b)
Contaminant Average Emission Rate (megagrams/yr) Average Emission Rate (grams/second)
Benzene 2.0 E-2 6.3 E-4
Vinyl Chloride 1.9 E-3 6.0 E-5
Ethylene dibromide 1.2 E-2 3.8 E-4
Chloroform 1.0 E-2 3.3 E-4
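The grams/second column of Table 1 is just a unit conversion of the megagrams/year column (1 megagram = 10^6 g; one year is about 3.16 × 10^7 s). A quick check for the benzene row (my own throwaway code; small differences reflect rounding in the reported figures):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.16e7 seconds

def megagrams_per_yr_to_g_per_s(rate):
    # Convert megagrams/year to grams/second (1 megagram = 1e6 grams).
    return rate * 1e6 / SECONDS_PER_YEAR

# Benzene: 2.0 E-2 megagrams/yr should land near the reported 6.3 E-4 g/s.
print(megagrams_per_yr_to_g_per_s(2.0e-2))
```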
USEPA Screening Model
The emission estimates in Table 1 were then applied to a USEPA air screening model to estimate air concentrations of these VOCs in the area where the school is proposed (USEPA 1995a). The following
inputs were used in the volume source EPA screening model:
• The landfill area is assumed to be 125 X 105 meters. (Actually the landfill area is larger than this; however, MPCA staff believe most of the landfill gas is being emitted by a few vents that make up a 125 X 105 meter area. Therefore all the landfill gas vents are assumed to be in this smaller area. This assumption results in a more conservative estimate.)
• The source release height of the pollutants is assumed to be 8 meters. This is the mid-point elevation for all the vents above the base elevation for the surrounding area.
• The area is assumed to be rural.
• The proposed school is assumed to be between 200 and 300 meters from the passive venting system.
• The initial lateral dimension is assumed to be 29 meters (125 meters divided by 4.3).
• The initial vertical dimension is assumed to be 1.6 meters (7 meters divided by 4.3).
The model results are reported below
Pollutant Average Annual Air Pollutant Concentrations estimated by EPA Screening Model between 200 and 300 meters (µg/m^3)
Vinyl Chloride 0.03
Ethylene Dibromide 0.02
Chloroform 0.01
Benzene 0.03 | {"url":"http://www.atsdr.cdc.gov/HAC/pha/pha.asp?docid=691&pg=4","timestamp":"2014-04-18T10:37:20Z","content_type":null,"content_length":"24383","record_id":"<urn:uuid:1422451a-d008-47b3-aec2-ed7d9df71905>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
Noninvasive Biomechanical Assessment of the Rupture Potential of Abdominal Aortic Aneurysm
Wang, Hong Jun (2002) Noninvasive Biomechanical Assessment of the Rupture Potential of Abdominal Aortic Aneurysm. Doctoral Dissertation, University of Pittsburgh.
PDF (Noninvasive Biomechanical Assessment of the Rupture Potential of Abdominal Aortic Aneurysms) - Primary Text
Download (2938Kb) | Preview
Image (GIF) (Figure 1 - An artist's rendering of a AAA in its in situ position) - Supplemental Material
Download (51Kb) | Preview
PDF (Figure 17 - An excised AAA from an autopsy sample (http://www.vascularsurgery.com)) - Supplemental Material
Download (26Kb) | Preview
PDF (Figure 2 - Illustration of the traditional open AAA surgical procedure) - Supplemental Material
Download (129Kb) | Preview
PDF (Figure 23 - The "Virtual AAA" with a constant, patient specific wall thickness and included ILT) - Supplemental Material
Download (83Kb) | Preview
PDF (Figure 25 - Comparison of 3-D wall stress distribution between AAA models with and without ILT. The individual color scales to the right indicate von Mises stress. Both the posterior and
anterior views are shown for each case) - Supplemental Material
Download (671Kb) | Preview
PDF (Figure 27 - Von Mises stress for the case for which Ξ©0 was loaded (i.e., step A-C in Figure 26, or the true stress distribution) and for the case which Ξ©CT was assumed stress free and loaded
(i.e., step B-D in Figure 26, as was done in this work)) - Supplemental Material
Download (571Kb) | Preview
PDF (Figure 28 - Structure of the normal artery (http://www.heartcenteronline.com)) - Supplemental Material
Download (562Kb) | Preview
PDF (Figure 3 - Illustration of minimally invasive endovascular repair of AAA) - Supplemental Material
Download (29Kb) | Preview
PDF (Figure 31 - Immunohistochemistry staining on wall specimen section from thick ILT group (A and D), thin ILT group (B and E), and primary-deleted negative control (C and F)) - Supplemental
Download (641Kb) | Preview
PDF (Figure 32 - Neovascularization in wall with adjacent thick ILT, thin ILT, and nonaneurysmal control. New vessels were identified via staining for von Willebrand factor, which is a protein
marker for endothelial cells. Figure from Vorp et al. [183]) - Supplemental Material
Download (768Kb) | Preview
PDF (Figure 35 - ILT Thickness/Local Diameters) - Supplemental Material
Download (531Kb) | Preview
Image (GIF) (Figure 4 - A cross-sectional view of a typical intraluminal thrombus specimen) - Supplemental Material
Download (141Kb) | Preview
PDF (Figure 58 - Local wall strength distribution estimated by using the developed statistical model (equation 8.16) for all four AAA studied) - Supplemental Material
Download (635Kb) | Preview
PDF (Figure 59 - Local ILT thickness distribution for the four AAA studied) - Supplemental Material
Download (688Kb) | Preview
PDF (Figure 60 - Local diameter distribution for the four AAA studied) - Supplemental Material
Download (661Kb) | Preview
PDF (Figure 62 - RPI distribution of the four AAA evaluated in this study) - Supplemental Material
Download (655Kb) | Preview
PDF (Figure 63 - von Mises stress distribution (top) and maximum principal stress distribution (bottom) on AAA #4. Note the similarity between the two stress distribution patterns) - Supplemental
Download (520Kb) | Preview
Abdominal aortic aneurysm (AAA) is a localized dilation of the infrarenal aorta. Ruptured AAA has a mortality rate of 95% and is ranked as the 13th leading cause of death in the US. The ability to
reliably evaluate the susceptibility of a particular AAA to rupture could vastly improve the clinical management of AAA patients. Currently, no such reliable evaluation technique exists. The purpose
of this work was to develop a noninvasive technique to evaluate the rupture potential of individual AAA.To predict the wall strength distribution, experimentally determined wall strength data were
used for construction of a mathematical model using multiple linear regression techniques. The developed model was then validated using data from a different group of specimens. The strength
distributions for four different AAA were then generated using the validated model. The finite element method was used to estimate the wall stress distribution for all four AAA based on their
realistic geometries (reconstructed from CT images) which included intraluminal thrombus (ILT). The measured systolic blood pressure was applied as the loading condition. Nonlinear hyperelastic
constitutive models for AAA and ILT tissue were used, the latter being developed here based on uniaxial tensile testing data. For each patient, a local Rupture Potential Index (RPI) distribution was
calculated as local (nodal) wall stress divided by local wall strength. The developed model contains four independent variable parameters: AAA size, patient's age, family history, local ILT
thickness, and normalized local AAA diameter (R Squared = 0.86, p = 0.001). The model predicted the actual (measured) strength very accurately (R Squared = 0.81 for model validation). The wall
strength values predicted for the four AAA studied ranged from 130 to 306 N/(cm squared), whereas the measured wall strength values ranged from 39 to 324 N/(cm squared). The peak wall stress for the
four AAA studied ranged from 19 to 37 N/(cm squared). The peak RPI values ranged from 0.15 to 0.55. This patient-specific, computer-based, noninvasive RPI estimation technique could become an import
and reliable diagnostic tool for AAA patient management. However, further clinical studies are needed to validate this technique.
Item Type: University of Pittsburgh ETD
ETD Committee:
| Committee Type | Committee Member | Email |
| Committee Chair | Vorp, David A. | vorpda@msx.upmc.edu |
| Committee Member | Robertson, Anne M. | annerob@engrng.pitt.edu |
| Committee Member | Borovetz, Harvey S. | borovetzhs@msx.upmc.edu |
| Committee Member | Webster, Marshall W. | webstermw@msx.upmc.edu |
| Committee Member | Sacks, Michael S. | msacks@pitt.edu |
Title: Noninvasive Biomechanical Assessment of the Rupture Potential of Abdominal Aortic Aneurysm
Status: Unpublished
Date: 30 August 2002
Date Type: Completion
Defense 17 July 2002
Approval 30 August 2002
Submission 24 June 2002
Access No restriction; Release the ETD for access worldwide immediately.
Patent No
Institution: University of Pittsburgh
Thesis Type: Doctoral Dissertation
Refereed: Yes
Degree: PhD - Doctor of Philosophy
URN: etd-06242002-114810
Uncontrolled Keywords: 3D Reconstruction; Abdominal Aortic Aneurysm; Biomechanics; Finite Element Method; Hyperelastic; Intraluminal Thrombus; Microstructure; Cardiovascular Disease; Multiple Linear Regression
Schools and Swanson School of Engineering > Bioengineering
Date 10 Nov 2011 14:48
Last 19 Jun 2012 11:58
Other ID: http://etd.library.pitt.edu:80/ETD/available/etd-06242002-114810/, etd-06242002-114810
complex numbers
Re: complex numbers
Some good methods have been suggested in previous posts. You can do this in yet one more way, by direct computation,
from Pascal's triangle:
put a = 1
separate the odd and even powers and plug in (√3)i for b. Even ones first,
then odd.
Add 64 + 0, answer is 64.
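The binomial-expansion answer above can be cross-checked numerically. A small sketch in Python, assuming (as the a = 1 and b = (√3)i substitutions suggest) that the expression being expanded is (1 + √3 i)^6 — the original question isn't quoted in this thread, so that is an inference:

```python
import cmath
import math

# Assumed problem: compute (1 + sqrt(3) i)^6.
z = complex(1, math.sqrt(3))

# Direct computation, equivalent to summing the binomial expansion.
w = z ** 6

# Polar check: |z| = 2 and arg z = 60 degrees.
r, phi = cmath.polar(z)

print(round(w.real), round(w.imag))               # -> 64 0
print(round(r, 9), round(math.degrees(phi), 9))   # -> 2.0 60.0
```

The polar form makes the result obvious: |1 + √3 i| = 2 and the argument is 60°, so the sixth power is 2^6 e^(i·360°) = 64, matching the "64 + 0" above.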
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=18942","timestamp":"2014-04-16T16:25:45Z","content_type":null,"content_length":"22370","record_id":"<urn:uuid:a1d025c2-320c-4625-89db-ebd459da6d60>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: A dual view of foundations
Todd Wilson twilson at csufresno.edu
Wed Mar 15 20:37:12 EST 2000
I would like to thank the FOM readers who responded to my FOM posting
of Sun, 27 Feb 2000 16:19:09 PST, "A dual view of foundations",
including Joe Shipman, Stephen Simpson, and A.R.D.Mathias, and the FOM
moderator for allowing such a discussion. First a reply to Simpson.
In a later posting, I hope to respond to Mathias.
Stephen G Simpson:
> Wilson's Aspect 1 [what I called "ontological" --TW] seems to
> correspond closely to what I would call *interpretational richness*
> of the given foundational scheme. For instance, ZFC is well known
> to be interpretationally rich, in the sense that a great many
> (actually, almost all) mathematical theories can be *interpreted* (a
> la Tarski/Mostowski/Robinson) in ZFC. (Wilson speaks of
> ``mappings'' rather than interpretations, but his intention seems
> clear enough. Perhaps Wilson could comment on whether I am reading
> him correctly.)
Yes, this is what I had in mind. Thank you for the clarification.
> The naive idea of ``set'' is easy to think about and work with, and
> this makes the interpretation of many mathematical concepts and
> theories in ZFC almost routine. For instance, the interpretation of
> group theory into ZFC presents no difficulty, because a group
> consists of an underlying *set* together with operations on it, etc
> etc. The set-theoretical interpretation of certain concepts of
> analysis and geometry (real numbers, continuity, probability, etc
> etc) is more difficult, but the foundational work of certain 19th
> and early 20th century mathematicians serves as our guide, and this
> is another success story.
Despite the obvious success and utility of these "reductionist"
treatments of real numbers, continuity, probability, etc., in ZFC, I
wonder whether they, like reductionist treatments of, say, biology in
physics (via chemistry), have missed out on any important (should we
call them "emergent"?) features of the original phenomena. Do we, for
example, declare our intuitions of "nonpunctiform infinitesimals"
(Bell, "A primer of Infinitesimal Analysis", Introduction) vague
musings that were finally and definitively clarified by the arithmetic
continuum, or is there the possibility that the arithmetization
elucidated but one aspect of the continuum, there being still others
to capture. In particular, we know that nilpotent infinitesimals are
incompatible with the usual picture of the reals as a field, and that
invertible infinitesimals are incompatible with the usual picture of
the reals as Archimedean, so these notions become, under the usual
treatment, fictions or façons de parler rather than primal or
foundational aspects of our view of the continuum. Is this the way it
should be, or are we perhaps putting the cart before the horse?
The recent work in category theory reported in Bell's book shows that
"worlds" are possible in which the reals contain nilpotent and
invertible infinitesimals -- simultaneously, if desired -- and that
all functions definable on the reals are continuous. These worlds are
very nice for the development of "smooth" analysis; in fact, arguably,
they are aesthetically the proper place to do it. Other worlds might
be the proper places to develop probability theory (for example, see
Nelson's book "Radically Elementary Probability Theory", Princeton
Univ Press, 1987, for a treatment using non-standard analysis) or the
semantics of programming languages (as with the work on synthetic
domain theory). If we grant that, in each of these areas, our
concepts may be most pleasingly developed in worlds individually
tailored for this development, we are then left with a situation in
which we have many different foundational pictures, each addressing a
limited set of phenomena, and we are in need of some understanding of
the connections between them.
It turns out that all of the worlds described above are toposes
(including the worlds, such as where all functions on the reals are
continuous, that are incompatible with the law of excluded middle),
and the connections and mappings between toposes called for above have
been studied in great detail over the last several decades, both from
an "external", set-theoretic vantage point, and from within the system
of toposes itself. So, if such a multi-foundational approach is worth
considering at all, then topos theory is the natural place in which to
formulate it. Thus, perhaps it's best to say that the value of topos
theory to f.o.m. is not as a single foundational scheme rivaling ZFC,
but rather as a framework for an interconnected system of foundational
views, each of which is quite extensive (though not necessarily as
extensive as ZFC, as Simpson correctly points out), but none of which
is given a universal role.
So what are the weak points of this argument? How about these:
1. We want a *single* foundational system. The idea of a connected
system of partial foundational worlds -- even if we understand
each world separately and understand the connections between them
-- is too complicated.
2. (Simpson) Where is the compelling underlying "pre-mathematical
picture" that this multi-foundational situation is giving us?
Topos theory seems to be a "largely unmotivated generalization of
set theory". How is the move from (a single) set theory to (a
multiple) topos theory an improvement?
3. (Feferman, Simpson) We can't even adequately describe topos theory
without reference to sets and elements. This shows that set
theory is prior.
4. Topos theory doesn't seem to be able to address the higher reaches
of consistency (one hopes!) strength, for example the hierarchy of
large cardinals.
As for 1 and 2, first steps toward answering these objections, as I
have pointed out before, can be found in the Epilog of Bell's book,
"Toposes and Local Set Theories" (and the writings cited therein).
Bell makes an analogy with the emergence of relativity in physics.
Newtonian physics can be carried a long way, but when we get "near the
fringes", the notion of an absolute frame of reference breaks down,
and we are forced to accept that there is no privileged frame of
reference (or "world") and that all we can do is describe what is
common to all frames of reference and how to negotiate our way between
them. The flood of independence results in set theory starting in the
1960s has sometimes been taken to imply the same thing about set
theory. In short, Bell is proposing the analogy
Topos Theory : Set Theory :: Relativity : Newtonian physics
As for 3 and 4, topos theory is a first-order theory in the language
of categories (in fact, it is an "essentially algebraic theory" in the
sense of Freyd and is even purely equational in the language of graphs
-- prima facie much more simple logically than set theory), and as
such it doesn't rely on prior notions of sets and elements any more
than any first-order theory (including set theory) does. That aside,
it *does* appear that if topos theory were to be able to include
notions similar to large cardinals, it would need to have more in the
way of "reflective" capabilities than it currently possesses. It is
an interesting challenge to the category theory community to
investigate such possibilities. Perhaps the work of Benabou cited
earlier by McLarty is a good first step in this direction.
> My impression is that Harvey Friedman has some other ideas about what
> would need to be done to make NF and topos theory and other
> alternative foundational schemes viable. Perhaps he will explain if
> we ask him nicely.
Pretty please? :-)
Todd Wilson
Computer Science Department
California State University, Fresno
'Naturally occurring' $K(\pi, n)$ spaces, for $n \geq 2$.
[edited!] Given a group $\pi$ and an integer $n>1$, what are examples of Eilenberg-Maclane spaces $K(\pi, n)$ that can be constructed as "known" manifolds? (or if not a manifold, say some space
people had a pre-existing desire to study before $K(\pi,n)$ spaces were identified as being of interest)
Constructing $K({\bf Z}, 2)$ as ${\bf CP}^{\infty}$ is the only example I know - but there must be more out there.
I'm interested in concrete examples (like the one above) that could, e.g., be given in a Topics grad course for topology students. They seem to be scarce, so it would be nice to know what was known.
Note: I've excluded $n=1$ because most people know examples (or can figure them out) in this case.
at.algebraic-topology classifying-spaces homotopy-theory examples
11 In what sense is $\mathbb{C} \mathbb{P}^{\infty}$ a manifold? – Pete L. Clark Oct 29 '10 at 2:17
1 I believe there's a theorem to the effect that they will not be finite-dimensional manifolds, so one necessarily needs to consider Frechet, Banach etc. manifolds. – David Roberts Oct 29 '10 at
8 Given a nice inclusive definition of "manifold" that allows some examples, what would be an example of a weak homotopy type that is not represented by a manifold? – Tom Goodwillie Oct 29 '10 at
13 Any countable simplicial complex is homotopy-equivalent to a Hilbert manifold. The idea is to inductively embed the skeleton into the Hilbert cube in such a way that you have a regular
neighbourhood, making your simplicial complex homotopy-equivalent to the open regular neighbourhood -- and since it's open in Hilbert space it's a manifold. – Ryan Budney Oct 29 '10 at 2:49
8 There's a lovely paper of Kodama and Michor (2006) where they show that the component of $Imm(S^1,\mathbb R^2)/Diff^+(S^1)$ corresponding to the figure-8 immersion has the homotopy-type of a
$K(\mathbb Z,2)$. Here $Imm(S^1,\mathbb R^2)$ denotes immersions of $S^1$ in the plane, and we're modding out by orientation-preserving reparametrizations. – Ryan Budney Oct 29 '10 at 4:37
5 Answers
Let BTOP and BPL be the classifying spaces of topological/PL-sphere bundles and $TOP/PL$ the homotopy fiber of the map $BPL \to BTOP$. Then $TOP/PL$ is a model for a $K(\mathbb{Z}/2\mathbb{Z},3)$ by Kirby and Siebenmann. This identifies a third cohomology class as the obstruction to getting a PL-structure on a topological sphere bundle.
Ah, I had thought there was some result like this, but had only very vague recollections about it. Are there any proofs known other than Kirby and Siebenmann's? Are there any people working on this kind of thing in this decade? All the results I know in this direction are quite "old". – Romeo Oct 30 '10 at 15:26
1 I think, in the book by Madsen and Milgram are some results of this sort. And I don't really know, what you mean, but while there's certainly less activity in the PL-world today than a few decades ago, at least topological manifolds are a topic, quite a few people are still working on. And the quoted result is surely essential to compare the topological, PL and smooth world. – Lennart Meier Oct 31 '10 at 21:12
2 I was just wondering who was carrying on the Kirby-Siebenmann, Ranicki, et al. torch in the 21st century. A lot of topology grad students I know these days have never really heard the word "PL"... – Romeo Nov 4 '10 at 0:50
@Lennart: What is a reference for this neat fact? Thanks! – David Carchedi May 20 '13 at 13:06
@David: I think, it is in the Kirby-Siebenmann book "Foundational Essays on Topological Manifolds...". – Lennart Meier May 20 '13 at 17:36
Following up on Dai's answer, one can go a step further since $P U(H)$ is obviously a group. So if we can find a contractible space on which it acts freely, the quotient will be the next level up (namely, a $K(\mathbb{Z},3)$).
Such a space can be constructed as follows: take our favourite (separable, though that's not necessary) Hilbert space, $H$, and consider $HS(H)$, the space of Hilbert-Schmidt operators on $H$. This is isomorphic to the Hilbert tensor product $H^* \widehat{\otimes} H$ so is a Hilbert space. Its unitary group is thus contractible. The group $U(H)$ acts on $HS(H)$ by conjugation, and once we divide out by the centre this becomes free. Thus $P U(H)$ acts on $U(HS(H))$ freely and so the quotient is a $K(\mathbb{Z},3)$.
However, as $P U(H)$ does not act centrally on $U(HS(H))$, the iteration stops here.
That is very nice! – Andreas Thom Oct 29 '10 at 7:37
Very cool, thanks for the details. – Romeo Oct 30 '10 at 15:27
The following example appears in the definition of twisted $K$-theory.
Let $H$ be an infinite dimensional separable Hilbert space over $\mathbb{C}$. Since the unitary group $U(H)$ is contractible, the projective unitary group $PU(H)= U(H)/S^1$ has the homotopy type of $K(\mathbb{Z},2)$. The fact that $BPU(H)\simeq K(\mathbb{Z},3)$ and the fact that $PU(H)$ acts on the space of Fredholm operators $\mathrm{Fred}(H)$ are essential in the definition of twisted $K$-theory.
Argh, was too slow in my comment to the question. – David Roberts Oct 29 '10 at 5:38
Very nice. Is there a place you recommend for an exposition on this? – Romeo Oct 29 '10 at 6:28
1 Atiyah and Segal's paper on Twisted K-theory is where I would start reading about this. – Andrew Stacey Oct 29 '10 at 6:58
The first one... – David Roberts Oct 29 '10 at 10:11
If $M$ is a hyperfinite type $I\!I\!I_1$ factor, then (at least conjecturally), its group of outer automorphisms is a $K(\mathbb Z,3)$.
This is based on the following three properties of that von Neumann algebra:
β’ The group of unitary central elements of $M$ is a circle, and thus a $K(\mathbb Z,1)$.
β’ The group of unitaries in $M$ is contractible.
β’ The automorphism group of $M$ is contractible (conjectural).
To see that $Out(M)\cong K(\mathbb Z,3)$, apply the long exact sequence of homotopy groups to the following two fiber sequences: $$ U(Z(M)) \to U(M) \to Inn(M) $$ $$ Inn(M) \to Aut(M) \to Out(M) $$
As a consequence, we also get that $BOut(M)\cong K(\mathbb Z,4)$.
This sounds interesting, but is new to me - any beginning references to recommend? – Romeo Nov 19 '10 at 19:07
Popa, Sorin; Takesaki, Masamichi; The topological structure of the unitary and automorphism groups of a factor. ams.org/mathscinet/search/… – André Henriques Nov 19 '10 at 21:33
How far are we from knowing that $Aut(M)$ is contractible? Is it just tricky, or do people have no idea how to do it? – David Roberts Oct 2 '13 at 6:00
By now, I think that I know how to do it. It's not very different from the type $II_1$ case (done by Popa and Takesaki). – André Henriques Oct 2 '13 at 12:08
There is a very nice model of $K(\mathbb Z,n)$ which is given by the free abelian topological group on the pointed space $(S^n,\star)$, let us call that $F(S^n,\star)$. An element in $F(S^n,\star)$ is given by a finite set of points in $S^n \setminus \lbrace\star\rbrace$ such that each point in this finite set carries a non-zero integer as a label, with the obvious addition. The topology is more subtle to describe and made in such a way that $F(S^n,\star)$ is an abelian topological group, the inclusion $S^n \subset F(S^n,\star)$ is continuous and $\star=0$ in $F(S^n,\star)$.
Though I am not sure whether $F(S^n,\star)$ is an infinite-dimensional manifold (I think not), it is still pretty regular, being a topological group and a CW-complex at the same time.
This is all very classical and was studied in detail in
Dold, Albrecht; Thom, René, Quasifaserungen und unendliche symmetrische Produkte., Ann. of Math. (2) 67 1958 239–281.
2 More generally, if M is any Moore space (many of these are finite dimensional manifolds!) then taking the geometric realization of the free abelian group on the singular simplices of M will give you the corresponding Eilenberg-MacLane space. – Saul Glasman Oct 29 '10 at 13:13
Or you could take symmetric products of $S^n$ labelled by elements of any abelian group $A$: that produces $K(A,n)$. But is that "geometric"? – Johannes Ebert Oct 29 '10 at 14:00
@Saul: Wow, nice, I've never seen that construction before. Where does it come from? – Romeo Oct 30 '10 at 15:28
@Andreas, cool, thanks, this kind of thing is perfect (and definitely wouldn't have found that on my own). Never read a paper of Dold before... – Romeo Oct 30 '10 at 15:30
Cambridge Interview Advice (Maths)
I just finished my AS levels and I'm about to head into Year 13. My teachers have said that it's very likely that I would be having an interview for the university of Cambridge (mathematics G100) in
November, but I am very worried about it. Does anyone here know anything that might be useful for preparing for it? For example, one person I heard got a question asking him to derive an expression
for e^A where A is a matrix. The solution was to use Taylor series, but it never occurred to me to do that until after half an hour of thought (is there any other way of getting there, by the way?).
In an interview it's expected that you'd get that answer quite quickly. How can I prepare for this sort of thing? Any advice would be much appreciated, as I do not have a lot of time left.
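For what it's worth, the Taylor-series definition e^A = Σ_{k≥0} A^k / k! mentioned above is easy to experiment with numerically. A minimal Python sketch (not an interview-quality derivation, just a sanity check; it uses the standard fact that exponentiating the 2×2 skew-symmetric matrix [[0, −t], [t, 0]] gives a rotation by angle t):

```python
import math

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_exp(A, terms=30):
    """Truncated Taylor series: e^A ~= sum_{k=0}^{terms} A^k / k!."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity term, A^0 / 0!
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms + 1):
        power = mat_mul(power, A)       # A^k
        fact *= k                       # k!
        term = [[power[i][j] / fact for j in range(2)] for i in range(2)]
        result = mat_add(result, term)
    return result

# Sanity check: for A = [[0, -t], [t, 0]], e^A is rotation by angle t,
# so the (0,0) entry should equal cos(t).
t = 0.7
E = mat_exp([[0.0, -t], [t, 0.0]])
print(round(E[0][0], 6), round(math.cos(t), 6))  # -> 0.764842 0.764842
```

Thirty terms is far more than enough here since the series converges like t^k / k!; for matrices with large entries one would scale and square instead, but that goes beyond this sketch.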
Real Member
Re: Cambridge Interview Advice (Maths)
Hi zetafunc.
I know a guy who goes to Cambridge and had the interview, but I would have to see if he would like telling you about it.
On a different note, you could also derive an expression for e^A from the spectral decomposition of the matrix A.
The limit operator is just an excuse for doing something you know you can't.
βIt's the subject that nobody knows anything about that we can all talk about!β β Richard Feynman
βTaking a new step, uttering a new word, is what people fear most.β β Fyodor Dostoyevsky, Crime and Punishment
Re: Cambridge Interview Advice (Maths)
but it never occurred to me to do that until after half an hour of thought
The time taken is related to your experience and what fields of math you prefer and major in. It only takes someone in computational mathematics 5 seconds to think of that since they use the Taylor's
series every day and for darn near every problem.
Re: Cambridge Interview Advice (Maths)
hi zetafunc,
They will already have your school reference so they'll know you are a good mathematician. But they want to find out how you respond to a new question. So trying to 'second guess' what they may ask
defeats their objective.
And they won't necessarily expect a full, 'robust', and complete answer in such a short time. After all, Andrew Wiles took 7 years to prove Fermat's LT (at Cambridge of course!)
Just give them your ideas, showing you can think about a problem and apply maths to it. That should do it.
Good luck!
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Cambridge Interview Advice (Maths)
anonimnystefy wrote:
Hi zetafunc.
I know a guy who goes to Cambridge and had the interview, but I would have to see if he would like telling you about it.
On a different note, you could also derive an expression for e^A from the spectral decomposition of the matrix A.
Can you show me how to do this? Is this the same as matrix decomposition?
Re: Cambridge Interview Advice (Maths)
bobbym wrote:
The time taken is related to your experience and what fields of math you prefer and major in. It only takes someone in computational mathematics 5 seconds to think of that since they use the
Taylor's series every day and for darn near every problem.
But at the moment I'm not supposed to have developed a preference yet -- I'm still in the last year where there is pretty much a set syllabus everyone should be able to use. I am just worried because
I was not able to get that solution...
Re: Cambridge Interview Advice (Maths)
bob bundy wrote:
hi zetafunc,
They will already have your school reference so they'll know you are a good mathematician. But they want to find out how you respond to a new question. So trying to 'second guess' what they may
ask defeats their objective.
And they won't necessarily expect a full, 'robust', and complete answer in such a short time. After all, Andrew Wiles took 7 years to prove Fermat's LT (at Cambridge of course!)
Just give them your ideas, showing you can think about a problem and apply maths to it. That should do it.
Good luck!
I forgot about the reference... although regarding second-guessing, I have noticed that they repeat some questions from time to time, and I just found the e^A where A is a matrix question in an old
STEP paper. So, it might be useful... although I'm just worried about being thrown into a new environment! What if I get completely stuck and get nowhere at the interview?
Thanks everyone for your replies.
Re: Cambridge Interview Advice (Maths)
Thinking on your feet is an attribute like having a high IQ. It pretty much comes built in, you do not train to get it. Some people are faster than others, some cleverer, some more dogged.
I am sure you will get in. Anyways, do not worry about it.
Re: Cambridge Interview Advice (Maths)
Hi zetafunc;
It seems that style of interview originated at M__icro$oft. Someone once said that If the zombies of Redmond developed it, it is sure to be of no value. You might like to read this fellow's comments:
Probability question
I would appreciate help with this question: What is the probability that two nine digit numbers, each using all of the digits 1 through 9, will have exactly two of the digits in the same
place in the number?
Thank you.
Bliss -Ron
Is there a Mathematica angle?
This sounds like homework.
Bruce Miller
Not homework, I am having a debate with my adult son and neither of us knows how to answer the question.
Thank you.
Ronald Bliss -Ron
We could simulate an experiment with 10^5 random pairs:
In[2]:= N[Sum[a = RandomSample[Range[9]]; b = RandomSample[Range[9]];
Boole[Count[a - b, 0] == 2], {10^5}]/10^5]
Out[2]= 0.18364
That's not too far from the exact probability
In[3]:= (1/2) Sum[(-1)^j/j!, {j, 0, 7}]
Out[3]= 103/560
Ilian Gachevski
In[4]:= N[%]
Out[4]= 0.183929
See the following link for a proof:
The Matching Problem
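For anyone who wants to double-check the 103/560 figure without Mathematica, here is a small Python sketch. It relies on the standard symmetry observation that comparing two independent random orderings of 1–9 is equivalent to comparing a single random permutation against the identity:

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

n, k = 9, 2

# Count permutations of 9 elements with exactly two fixed points.
hits = sum(1 for p in permutations(range(n))
           if sum(p[i] == i for i in range(n)) == k)
exact = Fraction(hits, factorial(n))
print(exact)  # -> 103/560

# Same value from the matching-problem formula:
# P(exactly k matches) = (1/k!) * sum_{j=0}^{n-k} (-1)^j / j!
formula = Fraction(1, factorial(k)) * sum(Fraction((-1) ** j, factorial(j))
                                          for j in range(n - k + 1))
print(formula)  # -> 103/560
```

The brute-force count is 36 · 1854 / 9!: choose which 2 of the 9 positions match, then derange the remaining 7 digits.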
Oh my. More complex than I had thought.
Thank you very much.
Ronald Bliss -Ron | {"url":"http://community.wolfram.com/groups/-/m/t/141136?p_p_auth=4ngQkFqQ","timestamp":"2014-04-21T04:31:20Z","content_type":null,"content_length":"76393","record_id":"<urn:uuid:b1c164e6-9b0d-4d8d-ac39-3afeb6f9e16f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sunnyvale, TX Algebra 1 Tutor
Find a Sunnyvale, TX Algebra 1 Tutor
...These will be "graded" and returned back to allow the student maximum potential in the topic. Since I hold myself and my students to high standards I will NOT charge any lesson that the student
is not satisfied in. Under NO circumstance should anyone pay for the service they are not receiving correctly.
16 Subjects: including algebra 1, chemistry, reading, grammar
...I was a tutor in college for students that needed help in math. I have a Master's degree in civil engineering and have practiced engineering for almost 40 years, where math is important to
performing my job. I hold a Master's Degree in Education with emphasis on instruction in math and science for grades 4th through 8th.
11 Subjects: including algebra 1, geometry, algebra 2, precalculus
...This involves everything from how to use an agenda, time management, organizing their study area, effective communication techniques to understand what the teacher really wants, how to approach
the subjects necessary to study that evening. I also, teach my students to recognize their learning st...
21 Subjects: including algebra 1, English, reading, writing
...I tutor students regular,Pre-Ap and Ap Physics B,C. The courses include topics of Kinematic motions, Forces and newton's laws, Circular Motion, Impulse and Momentum, Work and Energy, Rotational
Dynamics, Simple harmonic motion and Elasticity, Fluids, Thermodynamics, Waves and Sound, Electromagne...
20 Subjects: including algebra 1, calculus, physics, geometry
I am certified to teach middle school mathematics (grades 4-8) in the state of Texas. I spent two years tutoring in a charter school that served low income communities in South Dallas. My approach
to tutoring reflects my philosophy that learning should be tailored to the student.
4 Subjects: including algebra 1, elementary math, prealgebra, government & politics | {"url":"http://www.purplemath.com/Sunnyvale_TX_algebra_1_tutors.php","timestamp":"2014-04-18T21:46:02Z","content_type":null,"content_length":"24141","record_id":"<urn:uuid:8812a567-0eeb-43f9-a27b-34f289be4cde>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prime Proofs
1) if n is not a prime number, at least n can be written as n = ab where a and b ∈ ℕ; if a = b, a = √n, n divisible by √n; if a ≠ b,
a and b are not both > √n; find the smaller one, n is divisible by the smaller one, which is smaller than √n.
2) if n= a b, where a is a prime number, and b isn't divisible by a. we can get p= a c, q= b d, where c and d are prime numbers and isn't divisible by a or d.
hence n is divisible by pq, but not by p or q
3) n³+1 = (n+1)(n²-n+1); 2 is a special case, when n²-n+1=1
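Proof 1 is exactly the justification for the usual trial-division primality test: a composite n always has a divisor between 2 and √n, so only that range needs checking. A small illustrative Python sketch:

```python
import math

def is_prime(n):
    """Trial division: by the argument above, a composite n must have a
    divisor d with 2 <= d <= sqrt(n), so testing that range suffices."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print([p for p in range(2, 30) if is_prime(p)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```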
Notes on Delta-generated spaces
These are some very informal notes on the category of Delta-generated spaces, advocated by Jeff Smith. I don't give Jeff's elementary proof that they form a locally presentable category, but I do
explain why this fact is a consequence of Vopenka's principle on the existence of inaccessible cardinals.
QDoubleValidator Class Reference
The QDoubleValidator class provides range checking of floating-point numbers. More...
#include <QDoubleValidator>
Inherits QValidator.
Public Types
enum Notation { StandardNotation, ScientificNotation }
Public Functions
QDoubleValidator ( QObject * parent = 0 )
QDoubleValidator ( double bottom, double top, int decimals, QObject * parent )
~QDoubleValidator ()
double bottom () const
int decimals () const
Notation notation () const
void setBottom ( double )
void setDecimals ( int )
void setNotation ( Notation )
virtual void setRange ( double minimum, double maximum, int decimals = 0 )
void setTop ( double )
double top () const
Reimplemented Public Functions
virtual QValidator::State validate ( QString & input, int & pos ) const
Additional Inherited Members
Detailed Description
The QDoubleValidator class provides range checking of floating-point numbers.
QDoubleValidator provides an upper bound, a lower bound, and a limit on the number of digits after the decimal point. It does not provide a fixup() function.
You can set the acceptable range in one call with setRange(), or with setBottom() and setTop(). Set the number of decimal places with setDecimals(). The validate() function returns the validation state.
QDoubleValidator uses its locale() to interpret the number. For example, in the German locale, "1,234" will be accepted as the fractional number 1.234. In Arabic locales, QDoubleValidator will accept
Arabic digits.
In addition, QDoubleValidator is always guaranteed to accept a number formatted according to the "C" locale. QDoubleValidator will not accept numbers with thousand-separators.
See also QIntValidator, QRegExpValidator, and Line Edits Example.
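As an illustration of the documented behaviour only — this is a plain-Python sketch, not Qt's implementation, and every name below is invented for the example — the three-state result of validate() for a StandardNotation validator can be approximated like this:

```python
# Plain-Python sketch only -- NOT Qt's implementation. It approximates the
# three-state result (Acceptable / Intermediate / Invalid) that
# QDoubleValidator::validate() is documented to return for a
# StandardNotation validator. All names here are invented.
ACCEPTABLE, INTERMEDIATE, INVALID = "Acceptable", "Intermediate", "Invalid"

def validate_double(text, bottom, top, decimals):
    if text in ("", "-", "+"):
        return INTERMEDIATE              # plausibly completable input
    try:
        value = float(text)
    except ValueError:
        return INVALID                   # not a double at all
    if "." in text and len(text.split(".", 1)[1]) > decimals:
        return INTERMEDIATE              # too many digits after the point
    if value < 0 and bottom >= 0:
        return INVALID                   # negative input, positive-only range
    if bottom <= value <= top:
        return ACCEPTABLE
    return INTERMEDIATE                  # well-formed but outside the range
```

In real code you would construct a QDoubleValidator and install it on a widget (e.g. via QLineEdit::setValidator()); the sketch above only mirrors the return states described under validate() below.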
Member Type Documentation
enum QDoubleValidator::Notation
This enum defines the allowed notations for entering a double.
Constant Value Description
QDoubleValidator::StandardNotation 0 The string is written as a standard number (i.e. 0.015).
QDoubleValidator::ScientificNotation 1 The string is written in scientific form. It may have an exponent part (i.e. 1.5E-2).
This enum was introduced in Qt 4.3.
Property Documentation
bottom : double
This property holds the validator's minimum acceptable value.
By default, this property contains a value of -infinity.
Access functions:
double bottom () const
void setBottom ( double )
See also setRange().
decimals : int
This property holds the validator's maximum number of digits after the decimal point.
By default, this property contains a value of 1000.
Access functions:
int decimals () const
void setDecimals ( int )
See also setRange().
notation : Notation
This property holds the notation of how a string can describe a number.
By default, this property is set to ScientificNotation.
This property was introduced in Qt 4.3.
Access functions:
Notation notation () const
void setNotation ( Notation )
See also Notation.
top : double
This property holds the validator's maximum acceptable value.
By default, this property contains a value of infinity.
Access functions:
double top () const
void setTop ( double )
See also setRange().
Member Function Documentation
QDoubleValidator::QDoubleValidator ( QObject * parent = 0 )
Constructs a validator object with a parent object that accepts any double.
QDoubleValidator::QDoubleValidator ( double bottom, double top, int decimals, QObject * parent )
Constructs a validator object with a parent object. This validator will accept doubles from bottom to top inclusive, with up to decimals digits after the decimal point.
QDoubleValidator::~QDoubleValidator ()
Destroys the validator.
void QDoubleValidator::setRange ( double minimum, double maximum, int decimals = 0 ) [virtual]
Sets the validator to accept doubles from minimum to maximum inclusive, with at most decimals digits after the decimal point.
QValidator::State QDoubleValidator::validate ( QString & input, int & pos ) const [virtual]
Reimplemented from QValidator::validate().
Returns Acceptable if the string input contains a double that is within the valid range and is in the correct format.
Returns Intermediate if input contains a double that is outside the range or is in the wrong format; e.g. with too many digits after the decimal point or is empty.
Returns Invalid if the input is not a double.
Note: If the valid range consists of just positive doubles (e.g. 0.0 to 100.0) and input is a negative double then Invalid is returned. If notation() is set to StandardNotation, and the input
contains more digits before the decimal point than a double in the valid range may have, Invalid is returned. If notation() is ScientificNotation, and the input is not in the valid range,
Intermediate is returned. The value may yet become valid by changing the exponent.
By default, the pos parameter is not used by this validator. | {"url":"http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qdoublevalidator.html","timestamp":"2014-04-19T02:12:10Z","content_type":null,"content_length":"22461","record_id":"<urn:uuid:2ab033c7-fa71-4556-82bc-a152034d6814>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
Practical arithmetic
Practical arithmetic: in four books ... Extracted from the large and entire treatise, and adapted to the commerce of Ireland as well as that of Great Britain ... (Google eBook)
Numeration 16
Multiplication 26
Division 32
Problems 38
BOOK II 106
Subtraction 121
Division 127
Decimal Fractions 137
Mercantile Arithmetic 164
Practice casting 190
Tare and Tret 196
Interest 212
Annuities and Pensions 226
BOOK IV 284
The Cube Root 291
Popular passages
Upon this, they called a truce, and agreed that the £ of the whole, left by A at first, should be equally divided among them. How much of the prize, after this distribution, remained with each of the competitors?
Then multiply the second and third terms together, and divide the product by the first term: the quotient will be the fourth term, or answer.
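The passage above is the classic Rule of Three: given a proportion first : second = third : fourth, the unknown fourth term is (second × third) / first. A small sketch in exact rational arithmetic (the function name and the cloth example are invented for illustration):

```python
from fractions import Fraction

def rule_of_three(first, second, third):
    """Multiply the second and third terms, divide by the first."""
    return Fraction(second) * Fraction(third) / Fraction(first)

# If 3 yards of cloth cost 12 shillings, 5 yards cost (12 * 5) / 3 = 20.
assert rule_of_three(3, 12, 5) == 20
```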
In any continued geometrical progression, the product of the two extremes is equal to the product of any two means...
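The stated property of a geometric progression — the product of the two extremes equals the product of any two means equidistant from the ends — is easy to check numerically (names invented for illustration):

```python
def geometric_progression(a, r, n):
    """First n terms of the geometric progression a, a*r, a*r**2, ..."""
    return [a * r**k for k in range(n)]

seq = geometric_progression(2, 3, 6)     # [2, 6, 18, 54, 162, 486]
# extremes: 2 * 486; means equidistant from the ends: 6 * 162 and 18 * 54
assert seq[0] * seq[-1] == seq[1] * seq[-2] == seq[2] * seq[-3] == 972
```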
Suppose a ladder 40 feet long be so planted as to reach a window 33 feet from the ground, on one side of the street, and without moving it at the foot, will...
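The ladder passage is a right-triangle computation: a 40-foot ladder reaching a window 33 feet up stands √(40² − 33²) = √511 ≈ 22.6 feet from the wall. A sketch (the function name is invented):

```python
import math

def foot_distance(ladder, height):
    """Horizontal distance from the wall to the foot of the ladder (Pythagoras)."""
    return math.sqrt(ladder**2 - height**2)

d = foot_distance(40, 33)                # sqrt(1600 - 1089) = sqrt(511)
assert abs(d - math.sqrt(511)) < 1e-12   # about 22.6 feet
```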
Multiply the whole number by the denominator of the fraction, and divide the product by the numerator.
When the principal, amount, and time are given to find the rate per cent.
To Divide One Number by Another, Subtract the logarithm of the divisor from the logarithm of the dividend, and obtain the antilogarithm of the difference.
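The quoted rule for dividing with logarithms — subtract the divisor's logarithm from the dividend's and take the antilogarithm of the difference — in code (the function name is invented):

```python
import math

def divide_by_logs(dividend, divisor):
    """Divide via logarithms: antilog of (log dividend - log divisor)."""
    return 10 ** (math.log10(dividend) - math.log10(divisor))

# 1000 / 8 = 125, up to floating-point rounding.
assert abs(divide_by_logs(1000, 8) - 125) < 1e-9
```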
Suppose there is a mast erected, so that i of its length stands in the ground, 12 feet of it in the water, and £ of its length in the air, or above water; I demand the whole length? Ans.
If a man performs a journey in 5 days, when the day is 12 hours long, in how many days will he perform it when the day is but 10 hours long? Ans.
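The journey question is an inverse proportion: the total travelling time is fixed, so days × hours-per-day stays constant, giving 5 × 12 / 10 = 6 days. A sketch (the function name is invented):

```python
from fractions import Fraction

def days_needed(days_before, hours_before, hours_now):
    """Inverse proportion: days * hours-per-day stays constant."""
    return Fraction(days_before * hours_before, hours_now)

assert days_needed(5, 12, 10) == 6
```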
... performed by removing the separatrix in the dividend, so many places towards the left hand as there are cyphers in the divisor.
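The separatrix rule describes division by a power of ten: move the decimal point left by as many places as the divisor has cyphers (zeros). A sketch in exact arithmetic (the function name is invented):

```python
from fractions import Fraction

def shift_separatrix(value, cyphers):
    """Divide by 10**cyphers, i.e. move the decimal point left that many places."""
    return Fraction(value) / 10**cyphers

# 1234.5 divided by 100 (two cyphers) is 12.345.
assert shift_separatrix("1234.5", 2) == Fraction("12.345")
```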