- Statics problem solving strategies, hints and tricks. Contents: 1 Solving a problem in 7 steps; 1.1 To read
- CE 211 – Statics. Required. 2007 Catalog Description: CE 211 – Statics. Three credits. Engineering mechanics concepts; force systems; static equilibrium;
- Engineering Mechanics I - Statics. Eastern Arizona College. Equal Opportunity Employer and Educator. Course Information, Division: Mathematics
- Statics of Bending: Shear and Bending Moment Diagrams. David Roylance, Department of Materials Science and Engineering, Massachusetts Institute of Technology
- Statics - Truss Problem V2. Chapter 2 - Static Truss Problem. 2.1 Statics: We are going to start our discussion of Finite Element Analysis (FEA) with
- Two areas of study to investigate forces. Johns Hopkins University, What is Engineering?, M. Karweit. STATICS - AN INVESTIGATION OF FORCES
- CEE 101: Statics and Dynamics, Fall 2007. Department of Civil and Environmental Engineering, University of California, Los Angeles. Course Description: Newtonian
- Vector Mechanics for Engineers: Statics, Eighth Edition. © 2007 The McGraw-Hill Companies, Inc. All rights reserved. How to prepare for the final
- Bedford, Fowler: Statics. Chapter 6: Structures in Equilibrium. Examples via TK Solver. Copyright J.E. Akin. All rights reserved.
- Economics 202N: Comparative statics. Luke Stein, Stanford University, December 5, 2008
- Text: Engineering Mechanics: Statics & Dynamics, by R. C. Hibbeler, 11th Edition, 2007; ISBN: 0-13-221509-8. Student Audience: Students who take this course are
- QUIZ / Mechanics - Statics. Lessons Mechanics / Statics, Student Quiz. Vex 2.0 © Robotics Academy Inc. Name, Date, Class Period
- Practice Problems on Fluid Statics - manometry 01. C. Wassgren, Purdue University. Last Updated: 2010 Aug 30. Compartments A and B of the tank shown in the
- Vector Mechanics for Engineers: Statics, Eighth Edition. Ferdinand P. Beer, E. Russell Johnston, Jr. Lecture Notes: J. Walt Oler, Texas Tech University
- Sample Problems from Solving Statics Problems in MATLAB. By Brian D. Harper, Ohio State University. Solving Statics Problems in MATLAB is a supplement to the textbook
- MAE 130A / SE 101A Mechanics I: Statics. Designation: Required course for ME, AE, and SE. Catalog Description: MAE 130A/SE 101A: Mechanics I: Statics (4)
- EN3: Introduction to Engineering and Statics. Introduction to Statics: Moments. http://www.engin.brown.edu/courses/en3/Notes/Statics/moments/momen
- Statics and Strength of Materials Formula Sheet. 12/12/94, A. Ruina. Not given here are the conditions under which the formulae are accurate or useful.
- Bedford, Fowler: Statics. Chapter 9: Friction, Examples via TK Solver. Copyright J.E. Akin. All rights reserved.
- An experiment in hands-on learning in engineering mechanics: statics. B.D. Coller, Department of Mechanical Engineering, Northern Illinois University
- Web-based Course Materials for Engineering Statics. Paul S. Steif (Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA), Anna Dollár
- Civil Engineering 201: Statics. Course Description: 3 cr. U. Principles of mechanics: force systems, equilibrium structures, distributed forces, centroids and friction.
- Statics & Strength of Material [UW-Stout - Physics 372-321]. Welcome, Statics Web-Syllabus, Email Instructor, Student Grades
- Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition. Copyright © 2004, American Society for Engineering Education
- Meriam Engineering Mechanics: Statics, SI 6th Edition Brochure. More information from http://www.researchandmarkets.com/reports/587006/. Description: Meriam and Kraige
- Engineering Mechanics: Statics. 1. Course Title: Statics 20-011 (86-87 2nd Semester); 2. Instructors: Lecturer: M. Ghaemian, Room 417, Ext.
- EM 306 Course Syllabus, Fall 2008. The University of Texas at Austin, Department of Aerospace Engineering and Engineering Mechanics. EM 306 Statics, Spring 2009 Syllabus, Unique Number: 13535, 13540, 13545
- Statics - Quiz 4. Name, Course No., Section No., Date. Given the vectors A = -7i + 6j + 3k and B = 3i + 2j
- Statics: the abusive power of trimming. CREWES Research Report, Volume 12 (2000). John C. Bancroft, Alan Richards, and Charles
- Statics - Grading. In The Name of God. Yazd University, School of Engineering, Department of Mechanical Engineering. Instructor: Dr. Abbas Mazidi [amazidi@yazduni.ac.ir]
- Welcome to Statics Online. Dear Student, welcome to distance education at Cuesta College! My name is Jeff Jones, and I will be your instructor in the upcoming online
- Unit Guide: Statics. BCE-1-118, Faculty of Engineering, Science and the Built Environment, 2007-08
- LECTURE 3: Fluid Statics. We begin by considering static fluid configurations, for which the stress tensor reduces to the form T = −pI, so that n·T·n = −p, and
- Kinematics, Statics, and Dynamics of Two-Dimensional Manipulators. Berthold K. P. Horn. In order to get some feeling for the kinematics, statics, and dynamics of
- Robust Comparative Statics. Susan Athey, Paul Milgrom, and John Roberts. Draft only, do not cite or circulate. Comments welcome. This draft: October 1998
{"url":"http://pdf.analysis3.com/Statics-pdf.html","timestamp":"2014-04-18T15:39:22Z","content_type":null,"content_length":"49247","record_id":"<urn:uuid:0e615aa3-0375-4c77-a6e7-f51f226315e4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimation of exposure to toxic releases using spatial interaction modeling The United States Environmental Protection Agency's Toxic Release Inventory (TRI) data are frequently used to estimate a community's exposure to pollution. However, this estimation process often uses underdeveloped geographic theory. Spatial interaction modeling provides a more realistic approach to this estimation process. This paper uses four sets of data: lung cancer age-adjusted mortality rates from the years 1990 through 2006 inclusive from the National Cancer Institute's Surveillance Epidemiology and End Results (SEER) database, TRI releases of carcinogens from 1987 to 1996, covariates associated with lung cancer, and the EPA's Risk-Screening Environmental Indicators (RSEI) model. The impact of the volume of carcinogenic TRI releases on each county's lung cancer mortality rates was calculated using six spatial interaction functions (containment, buffer, power decay, exponential decay, quadratic decay, and RSEI estimates) and evaluated with four multivariate regression methods (linear, generalized linear, spatial lag, and spatial error). Akaike Information Criterion values and P values of spatial interaction terms were computed. The impacts calculated from the interaction models were also mapped. Buffer and quadratic interaction functions had the lowest AIC values (22298 and 22525 respectively), although the gains from including the spatial interaction terms were diminished with spatial error and spatial lag regression. The use of different methods for estimating the spatial risk posed by pollution from TRI sites can give different results about the impact of those sites on health outcomes. The most reliable estimates did not always come from the most complex methods. Environmental pollution data such as that collected by the United States Environmental Protection Agency's Toxic Release Inventory (TRI) have been used extensively for studies in environmental justice and medical geography [1]. These studies involved estimating an individual's or a community's exposure to pollution using the spatial information contained in the TRI database. Despite the use of this spatial information, the geographical theory used to guide the estimation of location-based exposure to pollution has frequently been limited to basic containment and buffer analysis, especially at the national scale. The aim of this research is to improve the spatial analysis of TRI data by incorporating distance decay effects derived from spatial interaction modeling in order to provide a more realistic approach to the estimation of location-based exposure to pollution, particularly airborne pollution. This is achieved by using several different functions for calculating this exposure and comparing the results when they are used in multivariate regression analyses with lung cancer mortality rates. The different methods for estimating the risk at a location are evaluated because, while many studies have explored and demonstrated a link between environmental pollution and a variety of adverse societal and medical effects [1], understanding the nature of this relationship is equally important. As the variety of methods used to estimate these impacts attests, the nature of this relationship is not as well understood as the existence of the relationship. 
The form of this relationship greatly impacts the answers to questions that may arise from the discovery of a relationship, such as the extent to which rural counties experience adverse impacts from urban polluters. A visual cartographic comparison of some approaches has been explored by McMaster et al [2], although they do not make the statistical comparison carried out here.

Prior work

Spatial analyses of toxic pollution data, whether for environmental justice or for medical geography, have typically used a simple spatial estimate of exposure. The exposure has been recorded as a binary variable (exposed or not exposed) either through spatial containment, such that a person is exposed if they live in the same census tract or county as an industrial site [1,3-8], or a spatial buffer, such that a person is exposed if they live within a threshold distance (e.g., 1 mile) of an industrial site [5,9-12]. Variations on the latter use multiple buffers to approximate decreasing risk with increased distance, or select a small number of neighborhoods at increasing distances which can be treated as samples from multiple buffers. This enables the study to reflect decreased exposure as the distance increases [1,9,13-15]. To provide a better measure of the impact of sites on a census tract, four studies [16-19] use a raster grid that can account for whether a site is in the center of the tract or near an edge, and whether any sites are just over the border in neighboring tracts. These raster grids reflect the density of TRI sites around each raster cell, although the density is calculated using a small buffer, such as the density of sites within a one-mile radius of the cell. A gradual decay of impact as distance increases is still lacking. Accounting for the volume of the release is another important factor missed by some TRI studies [6,10,16,18]. A binary approach that considers all TRI sites equally does not allow for gradients of risk, treats exposure to one site as equivalent to exposure to many sites, and does not account for the volume and toxicity of releases at each site. The release volumes vary by orders of magnitude (Figure 1). Recognizing this, many researchers do account for varying release volumes from each site [7,9,12,20-27]. They often use variations of the spatial containment and buffer models described above which can incorporate the release volume (equations 1 and 2 respectively): in the containment model, k_ij = t_i if site i lies within county j and 0 otherwise; in the buffer model, k_ij = t_i if d_ij <= T and 0 otherwise. Here, k_ij is the impact of site i on county j, t_i is the volume of releases at site i, d_ij is the distance between site i and county j, and T is the threshold distance. As a result, most studies that use these techniques to account for the release volume still reflect a simple treatment of geography by not including distance decay effects. The toxic impacts of the different chemicals on human health vary as well [3,9,12,22-25,27], although this variation is not addressed in the current study.

Figure 1. Histogram of the volume of TRI releases for 1987. Histogram of the TRI release volumes measured in pounds for 1987. Note the log scale on the horizontal axis.

To address these simplifying assumptions, Dent et al [26] have proposed using a GIS to combine atmospheric modeling with the release data and health outcomes and provide a detailed analysis of the potential effects and risks associated with TRI releases. Morello-Frosch et al [23,24] and Fisher et al [21] similarly incorporate atmospheric modeling in their analysis. These models are typically used for local, rather than national-scale, analysis.
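As a concrete reference point for the simpler approaches just described, the following is a minimal sketch of the containment and buffer impact functions (equations 1 and 2). The function and variable names are illustrative only and do not come from the paper.

# Illustrative sketch of equations 1 and 2 (names are hypothetical, not from the paper).
def containment_impact(release_volume, site_county, target_county):
    # Equation 1: a site affects a county only if it is located in that county.
    return release_volume if site_county == target_county else 0.0

def buffer_impact(release_volume, distance_miles, threshold_miles):
    # Equation 2: a site affects a county only if it lies within the threshold distance T.
    return release_volume if distance_miles <= threshold_miles else 0.0

# Example: a 10,000 lb release 40 miles from a county centroid, with a 100-mile buffer.
print(buffer_impact(10_000, 40, 100))   # 10000.0
print(buffer_impact(10_000, 140, 100))  # 0.0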
The United States Environmental Protection Agency's Risk-Screening Environmental Indicators (RSEI) Model uses principles of atmospheric modeling to derive a level of risk across the entire United States [28]. It has been used by Abel [9] and Downey and Hawkins [27], and is used in this research. This background discussion is summarized by Table 1, which shows that while distance decay approaches have been used, e.g. [20], containment and buffers are the most common, with atmospheric modeling becoming more prevalent in local studies. Exponential and power-based distance decay approaches as found in spatial interaction modeling have, to the author's knowledge, not been used at all.

Spatial interaction modeling

In this research, I use a spatial interaction modeling approach that is more flexible than the binary approaches commonly used in spatial analysis of TRI data, yet is fast enough and generic enough to apply to the thousands or millions of release sites involved in a national-scale study. Spatial interaction modeling was developed in economic geography to estimate the level of economic interaction between two towns [29-32]. The underlying assumptions are analogous to the physics theory of gravity. Just as two objects in space exert a stronger gravitational pull on each other as they increase in size and move closer to each other, two towns are expected to have a stronger level of economic interaction as the towns increase in size and as the distance between them decreases. These broad trends are applicable in many fields within geography, even though the specific functional form from physics (equation 3) may not be as useful as other functional forms. These models are used to estimate the effect of each TRI site on each county. As the toxic release volume increases, the impact of that site on the county increases. Likewise, the impact of nearby sites is assumed to be greater than the impact of more distant sites, following Tobler's First Law of Geography [33]. There are two common distance decay functions used to model spatial interaction, which control the rate at which the impact of a site decreases with distance. The first, taken from the physics model of gravity, is the power equation, in which the impact of a site is proportional to the size of the release and inversely proportional to the distance raised to a parameterized exponent (equation 3): k_ij = t_i^α d_ij^(-θ). Here, α and θ are positive constant parameters. The location of a county is given by its centroid. Because other functional forms may be more applicable than the gravitational form, exponential decay functions (equation 4) have also been developed and used: k_ij = t_i^α exp(-θ d_ij). The models in economic geography, such as those in Sen and Smith [31], give equations with a third term, which in this work corresponds to the population of the county and a related positive constant β, such that the full model becomes, for the exponential form, k_ij = t_i^α p_i^β exp(-θ d_ij). Because I use age-adjusted rates of lung cancer rather than unadjusted counts for the dependent variable, these population terms are set to 1 and effectively removed from the equations. The only application of a distance decay function to TRI data is a comparison of toxic releases and federally assisted housing which uses a quadratic distance decay function [20] (equation 5). This is referred to here as the Cutter function after the lead author of the publication in which it was first proposed. It uses a constant parameter, θ, controlling the rate of decrease, and a threshold distance beyond which the impact is zero.
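Before returning to the Cutter function, here is a small sketch of the two spatial-interaction decay forms just described (equations 3 and 4), together with the per-county summation that appears later as equation 6. The functional forms are my reading of the prose above, the default parameter values are the illustrative ones from Figure 2, and all names are hypothetical rather than taken from the paper.

# Hypothetical sketch of the power (eq. 3) and exponential (eq. 4) impact functions.
import math

def power_impact(t_i, d_ij, alpha=1.0, theta=2.0):
    # Equation 3 as described: impact grows with release volume, falls off as d^(-theta).
    return (t_i ** alpha) * (d_ij ** -theta)

def exponential_impact(t_i, d_ij, alpha=1.0, theta=2.0):
    # Equation 4 as described: impact decays exponentially with distance.
    return (t_i ** alpha) * math.exp(-theta * d_ij)

def county_cumulative_impact(sites, county_xy, impact_fn):
    # Equation 6: K_j is the sum of the site impacts k_ij over all release sites i.
    total = 0.0
    for site_xy, volume in sites:
        d = math.dist(site_xy, county_xy)   # distance in the same (projected) units as the data
        total += impact_fn(volume, d)
    return total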
Cutter's equation, as used here, modifies equation 1 from Cutter et al [18] to incorporate the volume of the release. As in the other equations, k_ij is the impact of site i on county j, t_i is the volume of releases at site i, d_ij is the distance between site i and county j, and T is the threshold distance. Figure 2 shows the effect of increasing distance on all the models except the containment model. The parameters of the models shown are 1.0 for α, 2.0 for θ, and 100 for T, with a release volume of 10,000. More complex atmospheric models, which can incorporate distance decay concepts, have been used predominantly in studies at a local scale [21,23,24,26], with only the RSEI dataset used at the national scale [27].

Figure 2. Example graph of the distance decay functions. Example graph of the four distance decay functions examined in this study: a buffer, Cutter's quadratic decay function, a power-based decay function, and an exponential decay function. All functions use 1.0 for α, 2.0 for θ, and 100 for T, with a release volume of 10,000.

Data used

Four sets of data are used in this paper. The first is lung cancer age-adjusted mortality rates from the National Cancer Institute's Surveillance Epidemiology and End Results (SEER) database [34]. These rates are from the years 1990 through 2006 inclusive. The second is TRI releases from 1987 to 1996. The years chosen for the TRI databases reflect a lag time between chronic exposure to toxic chemicals and the development of lung cancer. All data are temporally aggregated to the entire time series, rather than evaluating year-by-year temporal lags. The third is risk estimates computed by the EPA's RSEI program, to be used as a basis for comparison against the spatial interaction estimates. The RSEI data are the risk-related results calculated from airborne releases of chemicals that are flagged as carcinogenic and have a non-zero Inhalation Unit Risk. The final dataset, the covariates, comes from multiple sources. One source is the United States Census Area Resource File [35]. Thun et al [36] show variable risks for age, sex, and racial categories, so census data for the proportion of the population which is male and the proportion of the population which is non-white are included. Hendryx et al [37] note that lung cancer mortality is impacted by socioeconomic factors and access to health care, so additional covariates include the percent of the population with less than a high school education, the percent of the population with a college education, the percent of families below the poverty level, the unemployment rate, and the number of physicians per 1,000 residents. Because smoking is the most significant risk factor for lung cancer [36], I also include the smoking rate of the county based on the BRFSS survey data from 2003 to 2006. Different regions of the United States have different rates of lung cancer [38], so the covariate data also include spatial indicator variables recording whether a county is in the American South, Northeast, Midwest, or West, and whether a county is part of Appalachia, a regional designation from the Appalachian Regional Commission which overlaps parts of the Northeast, Midwest, and South. Due to a lack of data, information regarding personal movements is not included, although analysis comparing place of birth with place of death may partially account for this [39].
The Modifiable Areal Unit Problem [40] introduces difficulties into the interpretation at the county scale, especially in the larger counties in the western United States where the county centroid may be tens of miles away from the county's population center. Additionally, in these larger counties, the risk may vary within the county, and this variance is masked by calculating the risk at the county scale. However, some of the covariate data (e.g., the BRFSS-derived smoking rate) are not available at a finer scale, necessitating a county-level analysis. In the research presented here, the impacts of the Modifiable Areal Unit Problem and large county sizes are expected to be similar across all models because all tests use the same spatial scale. An examination using a synthetic dataset was considered, but the results of such a test would minimize the AIC in the situation reflecting the way the dataset is constructed (e.g., the impact falls off according to an exponential distance decay function), which may or may not correspond to a real-world situation. Therefore, actual, rather than synthetic, data are used in this research.

Methods Applied

Three sets of releases from 1987 to 1996 in the TRI database are used. The first is all releases flagged as carcinogenic. The second is all releases of chemicals identified as inducing lung cancer. These chemicals are those from a parallel study [41] plus beryllium and lead, which were identified as related to lung cancer by the lead author of [41] in a private communication. The total list of chemicals is arsenic, beryllium, 1,3-butadiene, cadmium, chromium, formaldehyde, lead, and nickel. The third set of releases adds to the second set those releases identified as generic compound categories of elements in the first set. An example of this is a release of "arsenic compounds" in addition to releases of plain arsenic. The impacts of these three sets of releases on all counties in the contiguous United States were calculated using the containment, buffer, power, exponential, and Cutter models given above. These release impacts are summed to create the cumulative impact on a county (equation 6), K_j = Σ_i k_ij, where k_ij is the impact of site i on county j and K_j is the cumulative impact on county j. Because the release amounts vary by several orders of magnitude and have an approximately lognormal distribution, as shown in Figure 1, both the log10-transformed and the untransformed release volumes were tested. The calculated impacts and covariates are then used in multivariate regression models calculated with the R software package [42]: ordinary least squares (OLS) linear regression, general linear model regression (GLM), spatial lag regression, and spatial error regression. The latter two incorporate spatial dependence in the regression model and are detailed below. There is spatial autocorrelation in the response variable (Moran's I = 0.69, p < 0.01), demonstrating the existence of spatial dependence and suggesting the applicability of the spatial regression techniques. Geographically Weighted Regression [40], which can vary the regression coefficients across the study area, was not applied because it is unlikely that the nature of the relationship between toxic releases and mortality changes across the country. This situation is not strictly one of evaluating a single function against a null hypothesis of zero impact from toxic emissions, but is rather one of evaluating many different functions against both each other and the null hypothesis.
This makes the task more akin to model parameter optimization than traditional statistical hypothesis testing. It is considered here that, of the different functions and their parameterizations, the most appropriate representation is the one that minimizes the Akaike Information Criterion (AIC) of a regression test in which the modeled risk is one of the independent variables. In all the regression tests used in this comparison, the remainder of the independent variables are the demographic, behavioral, and regional covariates, and the dependent variable is lung cancer mortality. Experiments testing many parameterizations of the buffer, Cutter, power, and exponential functions were used to guide the results given here. These parameterizations are for the contiguous United States, and were evaluated on which gave the lowest AIC values when combined with the covariate data in an OLS regression using the lung cancer mortality rate as the dependent variable. The same tests were conducted for the containment and RSEI approaches to risk estimation for comparison. AIC values were also computed for generalized linear model regressions, although none of the generalized linear model regressions produced lower AIC values than linear regression. As a result, the generalized linear model regressions are not discussed further. Preliminary experiments (not presented) demonstrated that the parameterizations that perform well for OLS regression also typically perform well for the spatial lag and spatial error regressions. The tests presented here evaluate α and θ values of 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, and 5.0, and distance thresholds of 5, 10, 15, ..., 500 miles (8, 16, 24, ..., 805 km) for the buffer and Cutter functions. After the best parameterizations were found, spatial lag regression and spatial error regression models were computed. Linear regression models were also computed for each of the rural-urban continuum codes from the United States Department of Agriculture. Lastly, maps of the proportion of the TRI impact for each county that originated in release sites located in urban areas were produced. This allows an examination of whether the impact of sites in urban areas is limited to those cities, or whether it extends far into the surrounding rural areas, and demonstrates that some environmental justice questions are not robust to the choice of risk model.

Spatial Regression Methods

The two spatial regression methods, spatial lag and spatial error regression, both account for the spatial autocorrelation that is almost always present in geographic data by adding a term to the regression equation. This spatial autocorrelation can be the result of diffusion effects of the dependent variable, which is unlikely in this situation, or the result of risk factors which have not been accounted for elsewhere in the regression model inducing spatial autocorrelation of the dependent variable [43]. The standard OLS regression (equation 7) estimates the dependent variable, which is the lung cancer mortality rate, as a linear combination of the independent variables, here the TRI interaction term and the demographic covariates: Y_j = β_0 + Σ_a β_a X_j,a + ε_j. The county is j, the dependent variable at county j is Y_j, the independent variables at county j are X_j,a, ε_j is the error term, and β_0 ... β_n are the regression coefficients.
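As a minimal illustration of how the modeled TRI risk enters the OLS model of equation 7, the sketch below fits a linear regression with the interaction term alongside a few covariates. The paper's regressions were run in R; this sketch uses Python, and the file and column names (county_data.csv, tri_impact, smoking_rate, and so on) are hypothetical, chosen only for illustration.

# Hypothetical OLS fit of equation 7; data file and column names are illustrative only.
import pandas as pd
import statsmodels.api as sm

counties = pd.read_csv("county_data.csv")           # one row per county (hypothetical file)
X = counties[["tri_impact", "smoking_rate", "pct_poverty", "pct_nonwhite"]]
X = sm.add_constant(X)                               # adds the beta_0 intercept column
y = counties["mortality_rate"]                       # age-adjusted lung cancer mortality

model = sm.OLS(y, X).fit()
print(model.aic)                                     # AIC used to compare decay functions
print(model.pvalues["tri_impact"])                   # p value of the spatial interaction term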
Spatial lag regression incorporates the autocorrelation directly into the model by including a term where the dependent variable at county j depends not only on the independent variables at county j, but also on the dependent variable values of county j's neighbors [44]. The neighbors are defined by a weights matrix, typically using one of the following three options: (1) all counties which share a border with county j are its neighbors, (2) all counties within a threshold distance of county j are its neighbors, or (3) the nearest neighbors of county j. Here, option (2) is used with a distance threshold of 92 miles (148 km), which is the minimum distance that ensures all counties have at least one neighbor. Option (1) is also used, counting shared corners (e.g., the Four Corners meeting point of Arizona, New Mexico, Colorado, and Utah) as neighbors, which is called "queen contiguity," to assess the sensitivity of the results to the choice of weights matrix. Thus spatial lag regression uses the dependent variable values of the neighbor counties to calculate a new independent variable for county j, giving equation 8: Y_j = ρ Σ_{k in N_j} w_j,k Y_k + β_0 + Σ_a β_a X_j,a + ε_j, where ρ is a coefficient describing the strength of the spatial autocorrelation, w_j,k is the spatial weight between counties j and k (typically 1 for neighbors and 0 for non-neighbors, though a distance decay form for the weights is possible), and N_j is the neighborhood of counties around county j. The ρ coefficient can be estimated in the same way that the β coefficients are estimated. A computationally efficient approach is given in Smirnov and Anselin [45]. Spatial error regression (equation 9) works similarly to spatial lag regression, except the autocorrelation term applies to the error terms of the neighboring counties rather than their dependent variable values [46]. Because of the circular dependence of the error terms (i.e., if county j is in county k's neighborhood and vice versa, the value of ε_j is affected by the value of ε_k while the value of ε_k is affected by the value of ε_j), standard estimation techniques will not work. An estimation procedure for this is also given by Smirnov and Anselin [45]. Table 2 presents the parameterizations that gave the lowest AIC values and are therefore used for further analysis. The results shown use the lung carcinogens with compounds dataset. All three TRI release sets described above (all carcinogens, lung carcinogens, lung carcinogens with compounds) were evaluated, as were the log10-transformed release values, and the lung carcinogens and related compounds gave the lowest overall AIC values. While the containment approach was best fit with the log-transformed releases of lung carcinogens and related compounds, and the exponential and power functions had the best fits with releases of all carcinogens, the improvements were minimal; the differences in AIC are less than 2.0 for containment and the exponential function and approximately 10.0 for the power function. Therefore, to ensure consistency in the later tables, the untransformed releases of lung carcinogens and related compounds are used. Table 3 shows the R-squared values, the Akaike Information Criterion, and the probabilities that the spatial interaction terms are non-zero. For each regression model (OLS, spatial lag, or spatial error), the best-performing distance decay function is highlighted in bold. In all cases, this was the buffer model. Table S1 in Additional file 1 shows the equivalent table for the queen contiguity weights matrix.
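For reference, before the results are compared across the two weights matrices, here is a minimal sketch of how the neighbor definitions described above (a 92-mile distance threshold and queen contiguity) could be turned into a binary weights matrix. It assumes projected county centroid coordinates in miles and a simple planar distance; the paper itself does not specify its implementation, so everything here is illustrative.

# Hypothetical construction of a distance-band binary weights matrix from county centroids.
import numpy as np

def distance_band_weights(centroids_miles, threshold=92.0):
    # centroids_miles: (n, 2) array of county centroid coordinates in a projected system.
    diffs = centroids_miles[:, None, :] - centroids_miles[None, :, :]
    d = np.linalg.norm(diffs, axis=2)
    W = ((d <= threshold) & (d > 0)).astype(float)   # 1 for neighbors, 0 otherwise
    return W

# Queen contiguity would instead set W[j, k] = 1 whenever counties j and k share any
# boundary point (edge or corner), which requires the county polygons rather than centroids.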
The choice of weights matrix did not alter the results for most decay functions, only substantially increasing the AIC of the buffer model, but not by enough to make another decay function better. As such, the change in weights matrix did not alter conclusions about which decay function performed best. Table 4 gives the full regression results for the overall best parameterization: the buffer model at 500 miles (804 km). This table also gives values of each independent variable's variance inflation factor (VIF). Since all the VIFs are less than 10, collinearity is not problematic in this model. As some of the covariates did not have significant coefficients in the best-performing model, the least significant covariate was iteratively removed from the model until all independent variables were significant, producing the model in the right side of Table 4. Similarly, non-linear functions of each of the independent variables were also applied to each of the six parameterizations in Table 3, following [47]. While the fits are improved (minimized AIC = 22126.54 with the buffer function), the more complex regression models do not alter the conclusions about which spatial interaction models perform well and which perform poorly. Table S2 in Additional file 2 gives the best-performing model results. Table 5 gives the R-squared values of the OLS regressions by rural-urban code. The values of these codes are in Table 6.

Additional file 1. Table S1 - Results for the multivariate ordinary least squares, spatial lag, and spatial error regressions of age-adjusted lung cancer mortality versus covariates and risk estimates from releases of lung carcinogens and related compounds calculated with each spatial interaction model. Spatial regressions here use the queen contiguity matrix to determine whether two counties are neighbors. Bold entries indicate which spatial interaction model performed best. Note that lower values of the Akaike Information Criterion are preferred.

Additional file 2. Table S2 - Results for the multivariate ordinary least squares regression of age-adjusted lung cancer mortality versus nonlinear functions of both covariates and risk estimates from lung carcinogens and related compounds calculated with the buffer model. This is shown as it minimizes the AIC across all decay functions (Table 3). This is equivalent to Table 4 in the main document, but includes natural logarithms (e.g., log(pov)), squared values (e.g., pov2), and cubed values (e.g., pov3).

Table 5. OLS regression R-squared values by rural-urban code.

Table 6. Definition of each rural-urban code.

Maps of the percent of TRI impact in each county that is due to source locations in urban counties according to each of the distance decay functions are given in Figure 3. The buffer, Cutter, exponential, and power functions using the parameterizations in Table 2 are shown. The darker counties have a greater percent of their impact from releases in urban counties, whether the total impact is high or low.

Figure 3. Percent of TRI impact from urban releases, using the different decay functions. Darker counties have a higher percentage of their impact coming from release sites in urban counties, while lighter counties have a higher percentage of their impact coming from release sites in suburban and rural counties.
The buffer and Cutter functions outperform the containment, power and exponential functions (Table 3). This improvement is notable both for the R-squared values of the OLS regressions and the Akaike Information Criterion (AIC) for all regressions. Because the AIC penalizes regression models with more parameters, lower values are preferred. These functions also outperformed the RSEI risk-related results values, which was not the expected outcome. Other RSEI products, the hazard, modeled hazard, and modeled hazard*population product were also tested, but all performed worse than the buffer and Cutter functions. However, the power and exponential functions from the spatial interaction literature did not perform much better than leaving out the toxicity term, and occasionally even increased the AIC value, which may result from the coarse resolution of the county-level dataset. These two functions may yet be useful at a finer scale. The improvement of the buffer and Cutter functions over the RSEI data demonstrates that despite the difficulties posed by the Modifiable Areal Unit Problem and the size of large western counties obscuring variation of risk within the county, these spatial interaction approaches may still be an accurate reflection of the risks posed by TRI facilities. It should be cautioned that while this work demonstrates a relationship between TRI facilities and lung cancer, it does not yet indicate a causal link, nor does it indicate that the best-fitting risk estimation method, a large buffer around the TRI site, has the strongest causal relationship with lung cancer mortality. Additionally, AIC values are better for the spatial regression techniques compared to the OLS regression values. However, including the TRI term in the spatial regression techniques does not lead to as much improvement over the base case of no interaction term. Even so, the improvement in the buffer model gives the spatial error regression of the buffer model the lowest AIC value. Both the Moran's I spatial autocorrelation statistic given above and the lower AIC values for the spatial regression techniques indicate that there is spatial dependence in lung cancer mortality. Moreover, this spatial dependence is not accounted for by the independent variables. This dependence is most likely the result of one or more additional spatial processes affecting lung cancer that are not accounted for in these data, rather than a simple diffusion or contagion process of lung cancer itself. The limited improvement from adding the TRI impacts strengthens the suggestion that there remain geographic processes affecting lung cancer that are not accounted for in these datasets. While it is not executed in this study, GWR may also reveal further evidence of confounding processes by revealing interactions with modeled covariates via non-stationary regression coefficients. As with the different regression methods, the buffer and Cutter functions have the best R-squared values across the entire range of rural-urban continuum codes (Table 5). Also, category 5, defined as counties containing a larger town (more than 20,000 residents) but which are not adjacent to a metropolitan area, has much higher R-squared values than the other rural-urban codes across all models. It is not yet clear why this would be the case. These results suggest that changing the method used to estimate risk will change the representation of the spatial impacts of the TRI sites on public health. 
As others have noted, the scope and scale of analysis can substantially impact the results [48], so researchers should be cautious when generalizing these findings at a county scale and national scope to more local scales and scopes. Nonetheless, researchers using the TRI dataset to estimate the health risks from pollution should carefully consider the method used to estimate the risk, as the most sophisticated model used here, the RSEI data, did not provide the lowest AIC values. The maps in Figure 3 display the percent of the TRI impact on each county from sources in urban areas calculated using the functions that performed best in the earlier results. As estimated by these models, the potential effects of pollution from urban TRI releases extend far beyond the limits of the urban areas. However, the extent varies depending on which function is used and how it is parameterized, highlighting the importance of using an appropriate function. In the power and exponential maps (Figure 3a and 3b), the impacts from urban release sites are more limited to urban areas and the nearby rural communities. In both the buffer and Cutter maps (Figure 3c and 3d), rural areas in the northeastern and southwestern United States have between 75 and 100% of their estimated TRI impact from release sites in urban areas. These extended effects of urban areas are related to the large radii used in the distance decay functions. Additional work is needed to examine the environmental toxicology to determine whether the chemicals being released could travel such large distances or whether these models are simply capturing spatial dependence of the outcome that is induced by a confounding spatial process. Future work will investigate the parameterization choices of the functions. This ad hoc approach to parameterization (examining different possibilities of the α, θ, and threshold parameters) is not ideal. Statistical approaches to finding the optimal α and θ parameters can be incorporated to improve the spatial interaction models that are generated [29,30,32]. A geostatistical approach can be applied to determine the decay function form and parameters. A correlogram plot comparing the distance between two counties and the difference between their mortality rates, or their residuals from a regression function, could be used to parameterize the function. Additionally, subsets of the correlogram could be examined separately to investigate anisotropy and non-stationarity. However, with both the ad hoc approach in this paper and a statistical model-fitting approach, using the data to optimize the parameters and then using those parameters to analyze the same data introduces circularity into the model-fitting process that would best be avoided. A more theoretically sound approach would be to vary the α and θ parameters based on the properties of the toxic chemicals that are released. Varying α is similar to methods used somewhat frequently to account for the different toxicity of the chemicals released [2,8,11,20-22], although the studies cited here use multiplicative rather than exponential modifiers (α·t_i instead of t_i^α). In each case, higher values of α correspond to more toxic chemicals. Different studies have made this adjustment using different references, including American Conference of Governmental Industrial Hygienists Threshold Limit Values [3,22], a chronic toxicity index [12], an inhalation unit risk [23], a lifetime cancer risk [24], and the RSEI model [9,27].
Similarly, θ and T can be varied to reflect differences in airborne transport of the chemicals. If a chemical travels more easily and farther, lower values of θ and higher values of T can be used. These parameters can also be varied based on the direction from the release site to the affected community, thus incorporating anisotropy. Ongoing work includes the refinement of at-risk population estimates using the LandScan USA population dataset [49] which can explore variation missed by county-level populations unable to capture fine-scale risks. For example, if a chemical is only present in the atmosphere within a mile of the release site, any county-by-county analysis will be problematic because the spatial resolution of county-level data are coarser than a square mile. The LandScan dataset provides population estimates at a 3 arc-second resolution (roughly 90 meters). This can then provide improved estimates of the number of people within one mile of the release site instead of assigning the impact of a release site on the county as if everybody lived at the centroid of the county. This approach will have stronger effects on the power and exponential models because they have more rapid decreases in the impact as one travels farther from the release site (figure 2). This ongoing work also incorporates the adjustments given above varying the parameters to account for properties of the chemicals released and local climatic conditions to account for prevailing wind directions. The research in this paper demonstrates that the use of simple containment techniques for estimating the spatial risk posed by pollution from TRI sites as well as the RSEI risk-related results can give misleading results about the impact of those sites on health outcomes. This is done through a comparison of multivariate regression results using inputs of six different functions for estimating the impact of a release site on a county: containment, buffering, the quadratic distance decay function proposed by Cutter et al [20], an inverse power distance decay function, an exponential distance decay function, and the RSEI risk-related results. The buffer and Cutter approaches consistently performed the best among these methods. The effects of this function choice are also demonstrated through mapping the percent of the overall impact that comes from urban TRI sites for all models except containment. As refinements to the parameterization process are made, the utility of more theoretically sound spatial interaction models will improve further. Support for this report was provided by the Office of Rural Health Policy, Health Resources and Services Administration, PHS Grant No. 1 U1CRH10664-01-00. The author also acknowledges Dr. Michael Hendryx for support and comments on an earlier draft of this manuscript. Lastly, the author thanks two anonymous referees for their valuable comments and suggestions. 1. Pastor M Jr, Sadd JL, Morello-Frosch R: Waiting to Inhale: The Demographics of Toxic Air Release Facilities in 21^st-Century California. Soc Sci Quart 2004, 85:420-440. Publisher Full Text 2. McMaster RB, Leitner H, Sheppard E: GIS-based Environmental Equity and Risk Assessment: Methodological Problems and Prospects. Cart and Geog Info Sys 1997, 24:172-189. Publisher Full Text 3. Bowen WM, Salling MJ, Haynes KE, Cyran EJ: Toward Environmental Justice: Spatial Equity in Ohio and Cleveland. Annals of the Assn of Am Geog 1995, 85:641-663. Publisher Full Text 4. 
Cohen MJ: The spatial distribution of toxic chemical emissions: Implications for nonmetropolitan areas. 5. Croen LA, Shaw GM, Sanbonmatsu L, Selvin S, Buffler PA: Maternal Residential Proximity to Hazardous Waste Sites and Risk for Selected Congenital Malformations. Epidemiology 1997, 8:347-354. PubMed Abstract | Publisher Full Text 6. Cutter SL, Solecki WD: Setting environmental justice in space and place: Acute and chronic airborne toxic releases in the southeastern United States. Urb Geog 1996, 17:380-399. Publisher Full Text 7. Daniels G, Friedman S: Spatial Inequality and the Distribution of Industrial Toxic Releases: Evidence from the 1990 TRI. 8. Shaw GM, Schulman J, Frisch JD, Cummins SK, Harris JA: Congenital Malformations and Birthweight in Areas with Potential Environmental Contamination. Arch of Env Health 1992, 47:147-154. Publisher Full Text 9. Abel TD: Skewed Riskscapes and Environmental Injustice: A Case Study of Metropolitan St. Louis. Env Mgmt 2008, 42:232-248. Publisher Full Text 10. Gragg RD III, Christaldi RA, Leong S, Cooper M: The location and community demographics of targeted environmental hazardous sites in Florida. 11. Kearney G, Kiros GE: A spatial evaluation of socio demographics surrounding National Priorities List sites in Florida using a distance-based approach. Int J Health Geog 2009, 8:33. BioMed Central Full Text 12. Neumann CM, Forman DL, Rothlein JE: Hazard Screening of Chemical Releases and Environmental Equity Analysis of Populations Proximate to Toxic Release Inventory Facilities in Oregon. Env Health Persp 1998, 106:217-226. Publisher Full Text 13. Berry M, Bove F: Birth Weight Reduction Associated with Residence near a Hazardous Waste Landfill. Env Health Persp 1997, 105:856-861. Publisher Full Text 14. Knox EG, Gilman EA: Hazard proximities of childhood cancers in Great Britain from 1953-80. J Epid Comm Health 1997, 51:151-159. Publisher Full Text 15. Nordström S, Beckman L, Nordenson I: Occupational and environmental risks in and around a smelter in northern Sweden: I. Variations in birth weight. Hereditas 1978, 88:43-46. PubMed Abstract 16. Downey L: Spatial Measurement, Geography, and Urban Racial Inequality. Social Forces 2003, 81:937-952. Publisher Full Text 17. Mennis J: Using Geographic Information Systems to Create and Analyze Statistical Surfaces of Population and Risk for Environmental Justice Analysis. Soc Sci Quart 2002, 83:281-297. Publisher Full Text 18. Mohai P, Saha R: Reassessing racial and socioeconomic disparities in environmental justice research. Demography 2006, 43:383-399. PubMed Abstract | Publisher Full Text 19. Mennis JL, Jordan L: The Distribution of Environmental Equity: Exploring Spatial Nonstationarity in Multivariate Models of Air Toxic Releases. Annals of the Assn of Am Geog 2005, 95:249-268. Publisher Full Text 20. Cutter SL, Hodgson ME, Dow K: Subsidized Inequities: The Spatial Patterning of Environmental Risks and Federally Assisted Housing. Urb Geog 2001, 22:29-53. Publisher Full Text 21. Fisher JB, Kelly M, Romm J: Scales of environmental justice: Combining GIS and spatial analysis for air toxics in West Oakland, California. 22. Horvath A, Hendrickson CT, Lave LB, McMichael FC, Wu TS: Toxic Emissions Indices for Green Design and Inventory. 23. Morello-Frosch R, Pastor M Jr, Sadd J: Environmental justice and Southern California's 'riskscape'. Urb Affairs Rev 2001, 36:551-578. Publisher Full Text 24. 
Morello-Frosch R, Pastor M Jr, Porras C, Sadd J: Environmental Justice and Regional Inequality in Southern California: Implications for Future Research. 25. Pastor M Jr, Morello-Frosch R, Sadd JL: Breathless: Schools, Air Toxics, and Environmental Justice in California. The Policy Studies J 2006, 34:337-362. Publisher Full Text 26. Dent AL, Fowler DA, Kaplan BM, Zarus GM, Henriques WD: Using GIS to Study the Health Impact of Air Emissions. Drug and Chem Toxic 2000, 23:161-178. Publisher Full Text 27. Downey L, Hawkins B: Single-Mother Families and Air Pollution: A National Study. Soc Sci Quart 2008, 89:523-536. Publisher Full Text 28. United States Environmental Protection Agency: Risk-Screening Environmental Indicators (RSEI) Model. [http://www.epa.gov/opptintr/rsei/] webcite 29. Batty M, Mackie S: The calibration of gravity, entropy, and related models of spatial interaction. Env and Planning 1972, 4:205-233. Publisher Full Text 30. Sheppard ES: The distance-decay gravity model debate. In Spatial Statistics and Models. Edited by Gaile GL, Willmott CJ. Boston, MA: D. Reidel Publishing Company; 1984:367-388. 31. Tobler WR: A computer movie simulating urban growth in the Detroit region. Econ Geog 1970, 46:234-240. Publisher Full Text 32. SEER. Surveillance, Epidemiology, and End Results (SEER) Program: SEER*Stat Database: Incidence - SEER 13 Regs Limited-Use. Nov 2009 Sub (1992-2007) <Katrina/Rita Population Adjustment>. [http:// www.seer.cancer.gov] webcite National Cancer Institute, Cancer Statistics Branch, released April 2010, based on the November 2009 submission; 33. ARF: Area Resource File. Rockville, MD: 2006 U.S. Department of Health and Human Services, Health Resources and Services Administration, Bureau of Health Professions; 2005. 34. Thun MJ, Henley SJ, Burns D, Jemal A, Shanks TG, Calle EE: Lung Cancer Death Rates in Lifelong Nonsmokers. J of Natl Cancer Inst 2006, 98:691-699. Publisher Full Text 35. Blot WJ, Fraumeni JF Jr: Geographic Patterns of Lung Cancer: Industrial Correlations. 36. Hendryx M, O'Donnell K, Horn K: Lung Cancer Mortality Is Elevated in Coal-Mining Areas of Appalachia. Lung Cancer 2008, 62:1-7. PubMed Abstract | Publisher Full Text 37. Sabel CE, Boyle PJ, Löytönen M, Gatrell AC, Jokelainen M, Flowerdew R, Maasilta P: Spatial Clustering of Amyotrophic Lateral Sclerosis in Finland at Place of Birth and Place of Death. Am J Epidem 2003, 157:898-905. Publisher Full Text 38. O'Sullivan D, Unwin DJ: Geographic Information Analysis. 2nd edition. Hoboken, NJ, USA: John Wiley & Sons, Inc; 2010. 39. Luo J, Hendryx M, Ducataman A: Association between Six Environmental Chemicals and Lung Cancer Incidence in the United States. J Rural Health in review 40. The R Project: The R Project for Statistical Computing. [http://www.r-project.org/] webcite 41. Robert Haining: Spatial Data Analysis: Theory and Practice. Cambridge, UK: Cambridge University Press; 2003. 42. Smirnov O, Anselin L: Fast maximum likelihood estimation of very large spatial autoregressive models: a characteristic polynomial approach. 43. Rogerson PA: Statistical Methods for Geography: A Student's Guide. 2nd edition. Los Angeles, CA, USA: Sage Publications; 2006. 44. Austin MP, Belbin L, Meyers JA, Doherty MD, Luoto M: Evaluation of statistical models used for predicting plant species distributions: Role of artificial data and theory. Ecol Model 2006, 199:197-216. Publisher Full Text 45. Baden BM, Noonan DS, Turaga RMR: Scales of Justice: Is there a Geographic Bias in Environmental Equity Analysis? 
J Env Plan Mgmt 2007, 50:163-185. Publisher Full Text 46. Bhaduri B, Bright E, Coleman P, Urban M: LandScan USA: A High Resolution Geospatial and Temporal Modeling Approach for Population Distribution and Dynamics. GeoJournal 2007, 69:103-117. Publisher Full Text
{"url":"http://www.ij-healthgeographics.com/content/10/1/20","timestamp":"2014-04-17T15:26:40Z","content_type":null,"content_length":"147018","record_id":"<urn:uuid:79d325aa-622c-4ba0-b528-44a4dcd491ee>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
The Coefficient of Friction

Friction is "the resistance an object encounters in moving over another" (OED). It is easier to drag an object over glass than sandpaper. The reason for this is that the sandpaper exerts more frictional resistance. In many problems, it is assumed that a surface is "smooth", which means that it does not exert any frictional force. In real life, however, this wouldn't be the case. A "rough" surface is one which will offer some frictional resistance.

Limiting Equilibrium

Imagine that you are trying to push a book along a table with your finger. If you apply a very small force, the book will not move. This must mean that the frictional force is equal to the force with which you are pushing the book. If the frictional force were less than the force produced by your finger, the book would slide forward. If it were greater, the book would slide backwards. If you push the book a bit harder, it would still remain stationary. The frictional force must therefore have increased, or the book would have moved. If you continue to push harder, eventually a point is reached when the frictional force increases no more. When the frictional force is at its maximum possible value, friction is said to be limiting. If friction is limiting, yet the book is still stationary, it is said to be in limiting equilibrium. If you push ever so slightly harder, the book will start to move. If a body is moving, friction will be taking its limiting value.

In summary: the frictional force between two objects is not constant, but increases until it reaches a maximum value. When the frictional force is at its maximum, the body in question will either be moving or will be on the verge of moving.

The Coefficient of Friction

The coefficient of friction is a number which represents the friction between two surfaces. Between two equal surfaces, the coefficient of friction will be the same. The symbol usually used for the coefficient of friction is μ.

The maximum frictional force (when a body is sliding or is in limiting equilibrium) is equal to the coefficient of friction multiplied by the normal reaction force: F = μR, where μ is the coefficient of friction and R is the normal reaction force. This frictional force, F, will act parallel to the surfaces in contact and in a direction to oppose the motion that is taking/trying to take place.

A particle of mass 5 kg is at limiting equilibrium on a rough plane which is inclined at an angle of 30 degrees to the horizontal. Find the coefficient of friction between the particle and the plane.

Resolving up the plane: F - 5g sin30° = 0
Resolving perpendicular to the plane: R = 5g cos30°
In limiting equilibrium, so F = μR
5g sin30° = μ · 5g cos30°
μ = sin30°/cos30° = 0.577 (3 s.f.)
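To check the arithmetic in the example above, here is a tiny sketch (in Python, purely illustrative; note that the weight term mg cancels, so the result is simply tan 30°):

import math

# Limiting equilibrium on a rough incline: resolving along and perpendicular to the plane
# gives F = m*g*sin(angle) and R = m*g*cos(angle), so mu = F/R = tan(angle).
m, g, angle_deg = 5.0, 9.8, 30.0
angle = math.radians(angle_deg)
F = m * g * math.sin(angle)   # friction needed to hold the particle (N)
R = m * g * math.cos(angle)   # normal reaction (N)
mu = F / R                    # equals tan(30 degrees)
print(round(mu, 3))           # 0.577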
{"url":"http://www.mathsrevision.net/node/157","timestamp":"2014-04-20T10:46:40Z","content_type":null,"content_length":"48331","record_id":"<urn:uuid:b4a4e97f-090d-4b60-b01f-275c79307812>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
You've Got Parallel Code in My Chocolate
September 14, 2012

So, here's the problem in a nutshell: my take on a parallel Depth-First Search (DFS) algorithm visits all nodes in a graph, and the order of visitation is likely to change from one run to the next. I've recently been told that there are algorithms that require the visitation order of nodes to be the same as it is for a serial execution. This is known as a "topological order" of graph nodes, which is defined by serial DFS on a directed acyclic graph (DAG). The context for this order requirement is in solving sparse linear equations, where the matrix of a sparse system can be modeled as a DAG. The topological order on the DAG determines the order of equations to be solved. Thus, is there some way to process (visit) nodes of a DAG that touches those nodes in the same order each time the code is run and does not depend on the number of threads used? I am assuming that the computation at each node in the graph is independent of all other nodes. However, knowing a little about linear algebra and solving systems of equations, I suspect that the computations are going to be "mostly independent." I'll address this potential dependence at the end of this post, after I expound upon my initial thoughts around how to access nodes in a specific order.

While the original parallel solution I gave in The Art of Concurrency uses a single queue, the extra condition on visitation order has me thinking about using two containers: a stack to traverse/visit nodes and a queue to hold the tasks of processing the nodes. One thread handles the traversal while the other N-1 threads do the node processing. The "boss" thread works alone to visit each node in the DAG in topological order. When an unvisited node is popped off the stack, the boss thread encapsulates that node into a task and puts that task into the queue. The "worker" threads wait on the queue. When something is available, a worker dequeues a task and performs the required computation. The single thread processing all the unvisited nodes from the stack will preserve the precedence of visitation to be topological as required. All other threads pulling tasks out of a queue will at least dole out node processing in the proper order, but will also allow concurrent execution of that processing.

Below is code to implement this strategy using Windows threads. First, the shared declarations and the boss thread's function.

long visited[NUM_NODES];
long order[NUM_NODES];
int adj[NUM_NODES][NUM_NODES];
long gCount = 0;
stack<int> S;               // STL stack class
concurrent_queue<int> Q;    // TBB concurrent_queue container

unsigned __stdcall bossThread(void *pArg)
{
   int i, j, k;
   int nodeCount = 0;
   while (S.size() > 0) {
      k = S.top();
      S.pop();   // remove the node from the stack (the "pop" described in the text below)
      if (InterlockedCompareExchange(&visited[k], 1L, 0L) == 0) {
         Q.push(k);                // enqueue on workers' queue
         order[k] = nodeCount++;   // record the topological (serial DFS) order
         for (i = NUM_NODES-1; i >= 0; i--)
            if (adj[k][i] && !visited[i]) S.push(i);
      }
   } // end while
   return 0;
}

The visited array marks whether or not a node has been processed (all entries initialized to zero), the order array will hold the serial order of node processing by the worker threads, and the adj matrix is the adjacency matrix representation of the graph. The gCount counter is going to be used to keep track of the number of nodes that have been processed by the worker threads.
When this counter value reaches NUM_NODES, the computation has completed and the worker threads will terminate. Since only the boss thread will be using it, I have decided to use the STL stack object (S) as the stack for assuring nodes are queued in topological order. One instance of the bossThread() function is launched, and it assumes that one reference to each node in the graph has been pushed onto S. (This ensures that all connected components of the graph will be processed.) As long as there is something still in the stack, the boss thread pops off the value and, if the boss thread has not visited this node, the node is placed at the tail end of the queue, Q. At this point, just for the purposes of my example code, I note the order in which the node was placed into the worker queue, which will yield the topological order of the graph nodes. Finally, any adjacent node that has not been visited by the boss thread is pushed onto the stack before the boss pops the next node off the stack. This really is just the serial DFS algorithm with the "processing" of a node simply being to put it into the queue for the worker threads to actually do the computation for each node. Simple enough, right?

The worker thread function, pDFSearch(), is also pretty simple.

    unsigned __stdcall pDFSearch(void *pArg)
    {
      int k;
      while (1) {
        if (gCount == NUM_NODES) break;      // all nodes processed; terminate
        while (!Q.try_pop(k)) {              // spin-wait for a node to process
          if (gCount == NUM_NODES) break;    // computation finished while waiting
        }
        if (gCount == NUM_NODES) break;      // nothing was popped; don't use a stale k
        if (InterlockedCompareExchange(&visited[k], 2L, 1L) == 1) {
          InterlockedIncrement(&gCount);
          // Do something to VISIT node k
        }
      }
      return 0;
    }

I set each worker thread into an infinite loop that will exit when the gCount reaches NUM_NODES. If the counter hasn't reached the final value, the thread looks into the queue, Q, for a new node to process. If no node is ready, the exit condition is tested to exit this spin-wait loop if the computation is done. If there was a node on the queue, it is processed after the gCount counter is incremented in an atomic way with InterlockedIncrement(), and then the thread goes back to the queue for the next node (or to find that the termination condition has been reached). You may have noticed that I've glossed over the use of the InterlockedCompareExchange() function. Recall that the three parameters are the destination, the new value, and the old value. In an atomic way, the function will compare the value stored in the destination with the old value and, if they are equal, the new value will be stored in the destination. The function returns the original value of the destination variable.
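The post does not show how the threads are launched or how the stack is seeded. A minimal driver sketch is given below; it assumes the declarations above plus a hypothetical NUM_WORKERS constant, and uses the Windows _beginthreadex() and WaitForMultipleObjects() calls, so treat it as an illustration rather than part of the original article.

    #include <windows.h>
    #include <process.h>

    int main()
    {
      // Seed the stack with one reference to every node so that all
      // connected components of the graph get visited.
      for (int n = NUM_NODES - 1; n >= 0; n--) S.push(n);

      HANDLE th[NUM_WORKERS + 1];
      unsigned tid;

      // One boss thread performs the topological traversal...
      th[0] = (HANDLE)_beginthreadex(NULL, 0, bossThread, NULL, 0, &tid);
      // ...and NUM_WORKERS worker threads drain the task queue.
      for (int w = 1; w <= NUM_WORKERS; w++)
        th[w] = (HANDLE)_beginthreadex(NULL, 0, pDFSearch, NULL, 0, &tid);

      WaitForMultipleObjects(NUM_WORKERS + 1, th, TRUE, INFINITE);
      for (int w = 0; w <= NUM_WORKERS; w++) CloseHandle(th[w]);
      return 0;
    }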
{"url":"http://www.drdobbs.com/parallel/youve-got-parallel-code-in-my-chocolate/240007344","timestamp":"2014-04-16T21:56:01Z","content_type":null,"content_length":"97568","record_id":"<urn:uuid:b32fecf0-470b-4817-924a-9bbf1c68dd1e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring SOA Complexity
by Michael Havey | April 2010 | SOA

This article by Michael Havey, author of SOA Cookbook, presents a formula for scoring SOA processes on complexity. We position complexity analysis as an important step in design oversight and governance. The approach we consider allows the governance team to rate each process as red, yellow, or green and to flag reds for rework. Intuitively, the 'complexity' of a process is the amount of branching or nesting in its graph. Flat form scores well on complexity because it avoids excessive branching. Naïve processes score poorly. Our scoring method is a variant of McCabe cyclomatic complexity.

In this article, we play the number game once more. Let's build a formula to calculate the complexity of processes. The purpose is not to teach how to design good processes (the earlier chapters tackled that), but to 'score' them on their control-flow complexity and flag those that exceed a particular threshold. Processes with lower scores are more readable, maintainable, and testable than those with higher scores. Processes with high scores need rework!

Most SOA developers today struggle with process complexity. They write impeccable, highly-structured Java or C# code, but their processes tend to branch out and meander in every possible direction. They would never dream of embedding an if-then inside a for loop inside a while loop inside of a catch inside a do-while in Java, but they are quick to bury, deep in an SOA process, a pick inside a sequence inside a flow inside of a switch inside a scope. Their processes are often far too complex. On the other hand, when they review each other's processes, they know intuitively what 'too complex' means. Process (b) in the following figure, for example, appears much more complex than Process (c), because it has far too many arrows and is difficult to navigate. Process (c) looks tidier and more structured by comparison. Process (a) is harder to judge. Although it is well-structured and easy to navigate, it is also absurdly long; having so many steps in a sequence is poor design.

In this article, we quantify 'complex'. We build a formula that rules in favor of Process (c), and penalizes (a) for its excessive sequence and (b) for its nesting and excessive branching. In addition, we demonstrate that processes designed in 'flat form' (introduced in Chapter 6) score lower than 'naïve' processes.

Applying McCabe's Formula for BPEL and TIBCO BusinessWorks
In this section, we study McCabe's formula for complexity, and describe how it can be used to measure the complexity of processes in BPEL and BusinessWorks.

Calculating McCabe Complexity
The best-known measure of programmatic complexity is Thomas McCabe's cyclomatic complexity. In his landmark paper, published in 1976 (A Complexity Measure, IEEE Transactions on Software Engineering, v. SE-2, no. 4, pp. 308-320, http://www.literateprogramming.com/mccabe.pdf), McCabe shows how to score the complexity of computer programs (FORTRAN is McCabe's preferred language) using concepts from graph theory. McCabe's approach is twofold:
• He shows how to represent a computer program as a directed graph, with nodes representing steps and arrows representing the flow of control between these steps.
The program must have exactly one entry point, from which all nodes are reachable, and exactly one exit point, which is reachable from all nodes. If the program has several modules or procedures, each is represented as a separate directed graph. • The complexity of this graph, and thus the complexity of the computer program, is E – N + 2P, where E is the number of edges, N the number of nodes, and P the number of modules. Assuming the program is a single unit with no separate modules, its complexity is E ‑ N + 2. Alternatively, the complexity of the program is A + 1, where A is the number of decisions or alternate paths in the graph; if a node branches in D directions, it has 1 normal path and D-1 alternate paths. This second method is easier to calculate: start with 1, then for each decision node add the number of outgoing branches less one. For example, add 1 for each binary decision, 2 for each ternary decision, and so on. Process (a) in the figure above has 16 edges and 17 nodes, so its complexity is 16 – 17 + 2, or 1. Alternatively, it has no decisions, so its complexity is 0 + 1, or 1. Process (b) has 23 edges and 10 nodes, and thus, has a complexity of 23 – 10 + 2, or 15. Alternatively, in process (b), node D3 has one alternate path (that is, it is a binary decision), D1, D4, and D5 each have two alternative paths, D6 has three alternate paths, and D2 has four alternate paths; the total number of alternate paths is thus 1 + 2 + 2 + 2 + 3 + 4, or 14. So the complexity of Process (b) is 14 + 1, or 15. Process (c) has 19 edges and 14 nodes, and thus a complexity of 19 – 14 + 2, or 7; alternatively, it has six alternate paths—five for D1, one for D2—so its complexity is 6 + 1, or 7. McCabe advocates the use of his cyclomatic measure on actual software projects. The team should agree on an acceptable upper bound for complexity (say 20), score each program, and decide how to rework those that score too high. McCabe's measure is not perfect. For one thing, it does not penalize processes for having too many nodes. The rather preposterous sequence of 17 activities in Process (a) in the preceding figure has a perfect McCabe score of 1. A process consisting of a million consecutive activities, or even one with as many activities as there are particles in the universe, would also score 1, and would pass Secondly, McCabe does not penalize for nested branching. The two processes shown in the following figure have the same complexity, although the process on the bottom is intuitively more complex than that on the top. Each process scores 7, because each contains three ternary (or 3-way) decisions: D1, D2, and D3. In the top process, those decisions come consecutively, whereas in the bottom, they are nested three levels deep. McCabe Complexity for BPEL Despite its flaws, McCabe's formula forms a part of our scoring mechanism for SOA processes; it is a component of a larger measure we develop in the last section of this article. As a pre-requisite for that discussion, we now consider how to apply the McCabe score to processes developed in BPEL and in TIBCO's BusinessWorks. BPEL processes are mapped to McCabe's directed graph form as follows: • A BPEL sequence maps easily to a line of nodes, such as fragment shown in the next figure, where the BPEL sequence of activities A, B, and C is drawn as three nodes—A, B, and C—with arrows from A to B and from B to C. A sequence does not add to the complexity of the process. 
• A BPEL pick, flow, or switch is represented as an N-ary decision in the following figure. The beginning and end points of the structure are designated by the nodes labeled Split and Join. For a pick, the branches are onMessage or onAlarm handlers. For a switch, the branches are case or otherwise structures. For a flow, the branches are the activities to be run in parallel (A, B, and C in the figure). (If the flow has inter-activity links, the links are represented as edges.) A pick, switch, or flow adds N-1 to the overall complexity. • A BPEL while activity is represented as a GOTO-style loop. The loop begins with a conditional check called Test (as shown in the following figure) and then branches either to A (to perform the while activity if the condition is true), or out of the loop to End. When A completes, it loops back to Test for another iteration. As it contains one binary decision, the while structure adds 1 to the complexity. • Error handling is dicier. A single unnested scope with a single error handler and N basic activities (at any level within the scope) adds as much as N to the complexity, assuming each of those basic activities might encounter the error that the handler is meant to catch. The reason for this is that each basic activity must have a binary choice whether to continue down the happy path or route directly to the handler. The scope shown in process (a) in the following figure, contains activities A, B, and C in a sequence (Sc denotes the start of the scope, End its end), but each of these activities has an error path to the handler Error, thus adding 3 to the complexity. In cases with nested scopes and multiple handlers, things get more complicated. An example of this is shown in Process (b). The outer scope, bounded by Sc1 and End1, has two handlers: E11, which is accessible from activity A; and E12, accessible from E. The inner scope, bounded by Sc2 and End2, has its own two handlers, E21 and E22, both of which are accessible from the activities B and C. The E21 handler, in turn, can throw an exception to the outer handler E11. The complexity introduced by all of this is 7: one for the decision at A, one for the decision at E, one for the decision at E21, and two each for the decisions at B and C. Master SOA process architecture, modeling, and simulation in BPEL, TIBCO's BusinessWorks, and BEA's Weblogic Integration using this SOA book and eBook Published: September 2008 eBook Price: $23.99 Book Price: $39.99 See more With these fundamentals out of the way, we now calculate the McCabe complexity of more substantial BPEL processes. We start with the event-based flat-form representation, shown in the following In the previous figure (and in those that follow), a small circle with a label of the form '+N' indicates the number of alternate paths for a BPEL activity. For example, the '+14' next to the pick indicates that there are 15 handlers in the pick, or 14 alternate paths introduced by the pick. The event-based process has a score of 22, which is broken down as follows: • We add 1 for the while loop. • We add 14 for the main switch (which has 15 handlers). • Three of the switch cases have inner switches of 2, 3, and 4 cases, so we add 1 + 2 + 3 for these. • There are 21 alternate paths, the sum of those counted in the previous bullets. Using the formula A + 1, we add 1 to 21 to get a score of 22. 
The state-based disputes process is shown in the following figure: The McCabe complexity of the state-based process is 37, which implies that it has 36 alternate paths. These paths are the following: • The while loop has one. • The main switch has five cases and therefore four alternate paths. • Three cases in the main switch have inner switches of 2, 5, and 5 cases. These inner switches therefore have 1, 4, and 4 alternate paths. • There are nested picks in these inner switches with 5, 3, 2, 2, 2, 3, 4, 3, 3, 3, 2, and 2 handlers. The number of alternate paths in the nested picks is therefore the sum of 4, 2, 1, 1, 1, 2, 3, 2, 2, 2, 1, and 2. The flow-based process is shown in the following figure: The flow-based process scores 35. Its 34 alternate paths break down as follows: • The while loop adds 1. • The outer switch, which has 12 cases, adds 11. • Several cases contain inner picks. Altogether the inner cases add 22 to the score. The reader can quickly verify this from the figure. The naïve representation, shown in the following figure, scores 23: The process is divided into three parts: 1. The Capturing part has a score of 5: 1 for the while loop, 3 for the outer pick (which has 4 handlers), and 1 for the inner pick (with 2 handlers). 2. The Investigating part scores 9. The outer pick, which has 4 handlers, scores 3. The inner picks have 2, 2, 2, and 4 handlers, and thus have 1, 1, 1, and 3 alternative paths. The total is thus 3 + 1 + 1 + 1 + 3, or 9. 3. The Charging Back stage has picks at various levels of nesting with 3, 3, 3, 2, and 2 handlers, so its score is 2 + 2 + 2 + 1 + 1, or 8. The three parts have 5 + 9 + 8 alternate paths, or 22 in total. The score for the process is therefore 1 + 22, or 23. (Note that the scoped fault handlers don't add to the complexity as they are linked directly to throw activities in the main flow.) The following table ┃ Form │ Score │ Calculation ┃ ┃ Naïve │ 23 │ The process is divided into three parts. For the Capturing part, add 1 for the while loop, 3 for the outer pick (which has 4 handlers), and 1 for the inner pick (with 2 ┃ ┃ │ │ handlers), for a total of 5. For the Investigating part, add 3 for the outer pick (with 4 handlers); the inner picks have 2, 2, 2, and 4 handlers, so add 1, 1, 1, and 3 for ┃ ┃ │ │ these. The total for the Investigating part is thus 3 + 1 + 1 + 1 + 3, or 9. The Charging Back stage has picks at various levels of nesting with 3, 3, 3, 2, and 2 handlers, ┃ ┃ │ │ so its score is 2 + 2 + 2 + 1 + 1, or 8. The scoped fault handlers don't add to the complexity since they are linked directly to throw activities in the main flow. ┃ ┃ State-Based │ 37 │ Add 1 for the while loop. Add 4 for the main switch (which has 5 cases). Three cases in the main switch have inner switches of 2, 5, and 5 cases, so add 1 + 4 + 4 for this. ┃ ┃ │ │ There are nested picks in these inner switches with 5, 3, 2, 2, 2, 3, 4, 3, 3, 3, 2, and 2 handlers, so add 4, 2, 1, 1, 1, 2, 3, 2, 2, 2, 1, and 1 for this. ┃ ┃ Event-Based │ 22 │ Add 1 for the while loop. Add 14 for the main switch (which has 15 handlers). Three of the switch cases have inner switches of 2, 3, and 4 cases, so add 1 + 2 + 3 for this. ┃ ┃ Flow-Based │ 35 │ Add 1 for the while loop. Add 11 for the outer switch, which has 12 cases. Several cases contain inner picks. Altogether the inner cases add 22 to the score. ┃ The results are astonishing and reinforce why McCabe scoring by itself is insufficient. 
The naïve process, with a complexity of 23, is, according to the McCabe number, much simpler than the state-based and flow-based representations, which score 37 and 35 respectively. The event-based representation has the lowest score, 22, but beats the naïve form by a margin of only one. What's happening? McCabe scoring favours the naïve approach. Part of the answer, as we discuss further in the last section of this article, is overhead: most of the decisioning in flat form is machinery, and if we deduct its cost from the score, we achieve the results we were looking for. Specifically: • In the event form, the outer while and outer pick, which together form the 'event loop', account for 15 of the 18 complexity points, and hence are overhead. • In the flow form, the outer while and outer switch, which together form the 'route loop', account for 12 of the 35 complexity points, are thus overhead. • The complexity of the state form is entirely overhead! The while drives the machine, the switch selects states, and the pick drives transitions. McCabe Complexity for TIBCO's BusinessWorks As a contrast to BPEL—a block-structured language that also allows, through its flow activity, a graph-structured modelling style—consider TIBCO's process integration centrepiece BusinessWorks, a graph-structured SOA process language with support for block-structured conditionals, loops, picks, and exception handlers. The top-half of the following figure shows a BusinessWorks process that mixes these two styles: In the happy path, the process logs its starting point (TraceStart) and formats its input request (Format Request) before entering into a for-each-style loop, called CreateCases. (The loop is enclosed in a box known as an iterate group; the symbol at the top left corner of the box is an 'i' in a counter-clockwise arrow, which conveys iteration.) The group moves through a list of items, for each creating a case (CreateCase) and populating a database (CreateReqOnDB). When the loop is finished, the process performs either Assign Email IDs or UseDefault, following a conditional path to one or of the other, before finally sending an email (Send Email Notification), logging its status (TraceEnd), and completing. If an error occurs along the way, the process catches it (Catch) and cleans itself up (Cleanup). The bottom half of the diagram shows the process as a directed graph that can be scored for McCabe complexity. We represent the loop and the exception handler the same way we did in BPEL: the while starts with a node (Start Iter) that branches either to its first activity (Create Case) or to the end (End Iter), and the error handler (Catch) is linked from every node in the happy path that can encounter the error. The complexity, which is 13, is calculated as follows: • Eight activities (Trace Start, Format Request, Create Case, Create Req on DB, Assign Email IDs, Use Default, Send Email Notification, and Trace End) have a binary decision either to continue on the happy path or route to the error handler, and thus, each adds one to the complexity. • Start Iter and End Iter are ternary decisions, and thus, each adds two to the complexity. • In total, there are 12 alternate paths, so the complexity is 13. 
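The counting rule is easy to mechanize. The sketch below is not from the book; it simply applies the A + 1 rule described above (start at 1, then add branches-minus-one for every decision-like construct) and, fed with the branch counts quoted for the event-based flat form, reproduces its score of 22.

    #include <iostream>
    #include <vector>

    // McCabe score by the A + 1 rule: 1 plus, for each decision-like
    // construct (switch, pick, flow, while), its branch count less one.
    int mccabeScore(const std::vector<int> &branchCounts)
    {
      int score = 1;
      for (int branches : branchCounts)
        score += branches - 1;
      return score;
    }

    int main()
    {
      // Event-based flat form: the while loop (2-way), a 15-handler pick,
      // and inner switches with 2, 3, and 4 cases.
      std::vector<int> eventForm = {2, 15, 2, 3, 4};
      std::cout << mccabeScore(eventForm) << std::endl;  // prints 22
      return 0;
    }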
In this article, we discussed applying McCabe's formula for BPEL and TIBCO BusinessWorks, covering:
• Calculating McCabe Complexity
• McCabe Complexity for BPEL
• McCabe Complexity for TIBCO's BusinessWorks
About the Author: Michael Havey is an architect with thirteen years' experience in integration, SOA, and BPM. A consultant in TIBCO's financial services practice, Michael previously worked as a consultant for IBM, BEA, Chordiant, and eLoyalty. Michael is the author of two books and several articles. Michael lives near Ottawa, Canada.
{"url":"https://www.packtpub.com/article/measuring-soa-complexity","timestamp":"2014-04-20T18:00:12Z","content_type":null,"content_length":"91136","record_id":"<urn:uuid:42f028d2-07b5-4da6-8cdf-5de0aff13c58>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry and Experience Einstein: Geometry and Experience Albert Einstein gave an address on 27 January 1921 at the Prussian Academy of Sciences in Berlin. He chose as his topic Geometry and Experience. He lectured in German but we present an English translation below. The lecture was published by Methuen & Co. Ltd, London, in 1922. Geometry and Experience Albert Einstein One reason why mathematics enjoys special esteem, above all other sciences, is that its laws are absolutely certain and indisputable, while those of all other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts. In spite of this, the investigator in another department of science would not need to envy the mathematician if the laws of mathematics referred to objects of our mere imagination, and not to objects of reality. For it cannot occasion surprise that different persons should arrive at the same logical conclusions when they have already agreed upon the fundamental laws (axioms), as well as the methods by which other laws are to be deduced therefrom. But there is another reason for the high repute of mathematics, in that it is mathematics which affords the exact natural sciences a certain measure of security, to which without mathematics they could not attain. At this point an enigma presents itself which in all ages has agitated inquiring minds. How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? Is human reason, then, without experience, merely by taking thought, able to fathom the properties of real things. In my opinion the answer to this question is, briefly, this:- As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. It seems to me that complete clearness as to this state of things first became common property through that new departure in mathematics which is known by the name of mathematical logic or "Axiomatics." The progress achieved by axiomatics consists in its having neatly separated the logical-formal from its objective or intuitive content; according to axiomatics the logical-formal alone forms the subject-matter of mathematics, which is not concerned with the intuitive or other content associated with the logical-formal. Let us for a moment consider from this point of view any axiom of geometry, for instance, the following:- Through two points in space there always passes one and only one straight line. How is this axiom to be interpreted in the older sense and in the more modern sense? The older interpretation :- Every one knows what a straight line is, and what a point is. Whether this knowledge springs from an ability of the human mind or from experience, from some collaboration of the two or from some other source, is not for the mathematician to decide. He leaves the question to the philosopher. Being based upon this knowledge, which precedes all mathematics, the axiom stated above is, like all other axioms, self-evident, that is, it is the expression of a part of this a priori knowledge. The more modern interpretation:- Geometry treats of entities which are denoted by the words straight line, point, etc. These entities do not take for granted any knowledge or intuition whatever, but they presuppose only the validity of the axioms, such as the one stated above, which are to be taken in a purely formal sense., i.e. 
as void of all content of intuition or experience. These axioms are free creations of the human mind. All other propositions of geometry are logical inferences from the axioms (which are to be taken in the nominalistic sense only). The matter of which geometry treats is first defined by the axioms. Schlick in his book on epistemology has therefore characterised axioms very aptly as "implicit definitions." This view of axioms, advocated by modern axiomatics, purges mathematics of all extraneous elements, and thus dispels the mystic obscurity which formerly surrounded the principles of mathematics. But a presentation of its principles thus clarified makes it also evident that mathematics as such cannot predicate anything about perceptual objects or real objects. In axiomatic geometry the words "point," "straight line," etc., stand only for empty conceptual schemata. That which gives them substance is not relevant to mathematics. Yet on the other hand it is certain that mathematics generally, and particularly geometry, owes its existence to the need which was felt of learning something about the relations of real things to one another. The very word geometry, which, of course, means earth-measuring, proves this. For earth-measuring has to do with the possibilities of the disposition of certain natural objects with respect to one another, namely, with parts of the earth, measuring-lines, measuring-wands, etc. It is clear that the system of concepts of axiomatic geometry alone cannot make any assertions as to the relations of real objects of this kind, which we will call practically-rigid bodies. To be able to make such assertions, geometry must be stripped of its merely logical-formal character by the geometry. To accomplish this, we need only add the proposition:- Solid bodies are related, with respect to their possible dispositions, as are bodies in Euclidean geometry of three dimensions. Then the propositions of Euclid contain affirmations as to the relations of practically-rigid bodies. Geometry thus completed is evidently a natural science; we may in fact regard it as the most ancient branch of physics. Its affirmations rest essentially on induction from experience, but not on logical inferences only. We will call this completed geometry "practical geometry," and shall distinguish it in what follows from "purely axiomatic geometry." The question whether the practical geometry of the universe is Euclidean or not has a clear meaning, and its answer can only be furnished by experience. All linear measurement in physics is practical geometry in this sense, so too is geodetic and astronomical linear measurement, if we call to our help the law of experience that light is propagated in a straight line, and indeed in a straight line in the sense of practical I attach special importance to the view of geometry which I have just set forth, because without it I should have been unable to formulate the theory of relativity. Without it the following reflection would have been impossible:- In a system of reference rotating relatively to an inert system, the laws of disposition of rigid bodies do not correspond to the rules of Euclidean geometry on account of the Lorentz contraction; thus if we admit non-inert systems we must abandon Euclidean geometry. The decisive step in the transition to general co-variant equations would certainly not have been taken if the above interpretation had not served as a stepping-stone. 
If we deny the relation between the body of axiomatic Euclidean geometry and the practically-rigid body of reality, we readily arrive at the following view, which was entertained by that acute and profound thinker, H Poincaré:- Euclidean geometry is distinguished above all other imaginable axiomatic geometries by its simplicity. Now since axiomatic geometry by itself contains no assertions as to the reality which can be experienced, but can do so only in combination with physical laws, it should be possible and reasonable - whatever may be the nature of reality - to retain Euclidean geometry. For if contradictions between theory and experience manifest themselves, we should rather decide to change physical laws than to change axiomatic Euclidean geometry. If we deny the relation between the practically-rigid body and geometry, we shall indeed not easily free ourselves from the convention that Euclidean geometry is to be retained as the simplest. Why is the equivalence of the practically-rigid body and the body of geometry - which suggests itself so readily - denied by Poincaré and other investigators? Simply because under closer inspection the real solid bodies in nature are not rigid, because their geometrical behaviour, that is, their possibilities of relative disposition, depend upon temperature, external forces, etc. Thus the original, immediate relation between geometry and physical reality appears destroyed, and we feel impelled toward the following more general view, which characterizes Poincaré's standpoint. Geometry (G) predicates nothing about the relations of real things, but only geometry together with the purport (P) of physical laws can do so. Using symbols, we may say that only the sum of (G) + (P) is subject to the control of experience. Thus (G) may be chosen arbitrarily, and also parts of (P); all these laws are conventions. All that is necessary to avoid contradictions is to choose the remainder of (P) so that (G) and the whole of (P) are together in accord with experience. Envisaged in this way, axiomatic geometry and the part of natural law which has been given a conventional status appear as epistemologically equivalent. Sub specie aeterni Poincaré, in my opinion, is right. The idea of the measuring-rod and the idea of the clock co-ordinated with it in the theory of relativity do not find their exact correspondence in the real world. It is also clear that the solid body and the clock do not in the conceptual edifice of physics play the part of irreducible elements, but that of composite structures, which may not play any independent part in theoretical physics. But it is my conviction that in the present stage of development of theoretical physics these ideas must still be employed as independent ideas; for we are still far from possessing such certain knowledge of theoretical principles as to be able to give exact theoretical constructions of solid bodies and clocks. Further, as to the objection that there are no really rigid bodies in nature, and that therefore the properties predicated of rigid bodies do not apply to physical reality, - this objection is by no means so radical as might appear from a hasty examination. For it is not a difficult task to determine the physical state of a measuring-rod so accurately that its behaviour relatively to other measuring-bodies shall be sufficiently free from ambiguity to allow it to be substituted for the "rigid" body. 
It is to measuring-bodies of this kind that statements as to rigid bodies must be All practical geometry is based upon a principle which is accessible to experience, and which we will now try to realise. We will call that which is enclosed between two boundaries, marked upon a practically-rigid body, a tract. We imagine two practically-rigid bodies, each with a tract marked out on it. These two tracts are said to be "equal to one another" if the boundaries of the one tract can be brought to coincide permanently with the boundaries of the other. We now assume that: If two tracts are found to be equal once and anywhere, they are equal always and everywhere. Not only the practical geometry of Euclid, but also its nearest generalisation, the practical geometry of Riemann, and therewith the general theory of relativity, rest upon this assumption. Of the experimental reasons which warrant this assumption I will mention only one. The phenomenon of the propagation of light in empty space assigns a tract, namely, the appropriate path of light, to each interval of local time, and conversely. Thence it follows that the above assumption for tracts must also hold good for intervals of clock-time in the theory of relativity. Consequently it may be formulated as follows:- If two ideal clocks are going at the same rate at any time and at any place (being then in immediate proximity to each other), they will always go at the same rate, no matter where and when they are again compared with each other at one place. - If this law were not valid for real clocks, the proper frequencies for the separate atoms of the same chemical element would not be in such exact agreement as experience demonstrates. The existence of sharp spectral lines is a convincing experimental proof of the above-mentioned principle of practical geometry. This is the ultimate foundation in fact which enables us to speak with meaning of the mensuration, in Riemann's sense of the word, of the four-dimensional continuum of space-time. The question whether the structure of this continuum is Euclidean, or in accordance with Riemann's general scheme, or otherwise, is, according to the view which is here being advocated, properly speaking a physical question which must be answered by experience, and not a question of a mere convention to be selected on practical grounds. Riemann's geometry will be the right thing if the laws of disposition of practically-rigid bodies are transformable into those of the bodies of Eudid's geometry with an exactitude which increases in proportion as the dimensions of the part of space-time under consideration are diminished. It is true that this proposed physical interpretation of geometry breaks down when applied immediately to spaces of sub-molecular order of magnitude. But nevertheless, even in questions as to the constitution of elementary particles, it retains part of its importance. For even when it is a question of describing the electrical elementary particles constituting matter, the attempt may still be made to ascribe physical importance to those ideas of fields which have been physically defined for the purpose of describing the geometrical behaviour of bodies which are large as compared with the molecule. Success alone can decide as to the justification of such an attempt, which postulates physical reality for the fundamental principles of Riemann's geometry outside of the domain of their physical definitions. 
It might possibly turn out that this extrapolation has no better warrant than the extrapolation of the idea of temperature to parts of a body of molecular order of magnitude. It appears less problematical to extend the ideas of practical geometry to spaces of cosmic order of magnitude. It might, of course, be objected that a construction composed of solid rods departs more and more from ideal rigidity in proportion as its spatial extent becomes greater. But it will hardly be possible, I think, to assign fundamental significance to this objection. Therefore the question whether the universe is spatially finite or not seems to me decidedly a pregnant question in the sense of practical geometry. I do not even consider it impossible that this question will be answered before long by astronomy. Let us call to mind what the general theory of relativity teaches in this respect. It offers two possibilities:- 1. The universe is spatially infinite. This can be so only if the average spatial density of the matter in universal space, concentrated in the stars, vanishes, i.e. if the ratio of the total mass of the stars to the magnitude of the space through which they are scattered approximates indefinitely to the value zero when the spaces taken into consideration are constantly greater and 2. The universe is spatially finite. This must be so, if there is a mean density of the ponderable matter in universal space differing from zero. The smaller that mean density, the greater is the volume of universal space. I must not fail to mention that a theoretical argument can be adduced in favour of the hypothesis of a finite universe. The general theory of relativity teaches that the inertia of a given body is greater as there are more ponderable masses in proximity to it; thus it seems very natural to reduce the total effect of inertia of a body to action and reaction between it and the other bodies in the universe, as indeed, ever since Newton's time, gravity has been completely reduced to action and reaction between bodies. From the equations of the general theory of relativity it can be deduced that this total reduction of inertia to reciprocal action between masses - as required by E Mach, for example - is possible only if the universe is spatially finite. On many physicists and astronomers this argument makes no impression. Experience alone can finally decide which of the two possibilities is realised in nature. How can experience furnish an answer? At first it might seem possible to determine the mean density of matter by observation of that part of the universe which is accessible to our perception. This hope is illusory. The distribution of the visible stars is extremely irregular, so that we on no account may venture to set down the mean density of star-matter in the universe as equal, let us say, to the mean density in the Milky Way. In any case, however great the space examined may be, we could not feel convinced that there were no more stars beyond that space. So it seems impossible to estimate the mean density. But there is another road, which seems to me more practicable, although it also presents great difficulties. For if we inquire into the deviations shown by the consequences of the general theory of relativity which are accessible to experience, when these are compared with the consequences of the Newtonian theory, we first of all find a deviation which shows itself in close proximity to gravitating mass, and has been confirmed in the case of the planet Mercury. 
But if the universe is spatially finite there is a second deviation from the Newtonian theory, which, in the language of the Newtonian theory, may be expressed thus:- The gravitational field is in its nature such as if it were produced, not only by the ponderable masses, but also by a mass-density of negative sign, distributed uniformly throughout space. Since this factitious mass-density would have to be enormously small, it could make its presence felt only in gravitating systems of very great extent. Assuming that we know, let us say, the statistical distribution of the stars in the Milky Way, as well as their masses, then by Newton's law we can calculate the gravitational field and the mean velocities which the stars must have, so that the Milky Way should not collapse under the mutual attraction of its stars, but should maintain its actual extent. Now if the actual velocities of the stars, which can, of course, be measured, were smaller than the calculated velocities, we should have a proof that the actual attractions at great distances are smaller than by Newton's law. From such a deviation it could be proved indirectly that the universe is finite. It would even be possible to estimate its spatial magnitude. Can we picture to ourselves a three-dimensional universe which is finite, yet unbounded? The usual answer to this question is "No," but that is not the right answer. The purpose of the following remarks is to show that the answer should be "Yes." I want to show that without any extraordinary difficulty we can illustrate the theory of a finite universe by means of a mental image to which, with some practice, we shall soon grow accustomed. First of all, an observation of epistemological nature. A geometrical-physical theory as such is incapable of being directly pictured, being merely a system of concepts. But these concepts serve the purpose of bringing a multiplicity of real or imaginary sensory experiences into connection in the mind. To "visualise" a theory, or bring it home to one's mind, therefore means to give a representation to that abundance of experiences for which the theory supplies the schematic arrangement. In the present case we have to ask ourselves how we can represent that relation of solid bodies with respect to their reciprocal disposition (contact) which corresponds to the theory of a finite universe. There is really nothing new in what I have to say about this; but innumerable questions addressed to me prove that the requirements of those who thirst for knowledge of these matters have not yet been completely satisfied. So, will the initiated please pardon me, if part of what I shall bring forward has long been known? What do we wish to express when we say that our space is infinite? Nothing more than that we might lay any number whatever of bodies of equal sizes side by side without ever filling space. Suppose that we are provided with a great many wooden cubes all of the same size. In accordance with Euclidean geometry we can place them above, beside, and behind one another so as to fill a part of space of any dimensions; but this construction would never be finished; we could go on adding more and more cubes without ever finding that there was no more room. That is what we wish to express when we say that space is infinite. It would be better to say that space is infinite in relation to practically-rigid bodies, assuming that the laws of disposition for these bodies are given by Euclidean Another example of an infinite continuum is the plane. 
On a plane surface we may lay squares of cardboard so that each side of any square has the side of another square adjacent to it. The construction is never finished; we can always go on laying squares - if their laws of disposition correspond to those of plane figures of Euclidean geometry. The plane is therefore infinite in relation to the cardboard squares. Accordingly we say that the plane is an infinite continuum of two dimensions, and space an infinite continuum of three dimensions. What is here meant by the number of dimensions, I think I may assume to be known. Now we take an example of a two-dimensional continuum which is finite, but unbounded. We imagine the surface of a large globe and a quantity of small paper discs, all of the same size. We place one of the discs anywhere on the surface of the globe. If we move the disc about, anywhere we like, on the surface of the globe, we do not come upon a limit or boundary anywhere on the journey. Therefore we say that the spherical surface of the globe is an unbounded continuum. Moreover, the spherical surface is a finite continuum. For if we stick the paper discs on the globe, so that no disc overlaps another, the surface of the globe will finally become so full that there is no room for another disc. This simply means that the spherical surface of the globe is finite in relation to the paper discs. Further, the spherical surface is a non-Euclidean continuum of two dimensions, that is to say, the laws of disposition for the rigid figures lying in it do not agree with those of the Euclidean plane. This can be shown in the following way. On the spherical surface the construction also seems to promise success at the outset, and the smaller the radius of the disc in proportion to that of the sphere, the more promising it seems. But as the construction progresses it becomes more and more patent that the disposition of the discs in the manner indicated, without interruption, is not possible, as it should be possible by Euclidean geometry of the plane surface. In this way creatures which cannot leave the spherical surface, and cannot even peep out from the spherical surface into three-dimensional space, might discover, merely by experimenting with discs, that their two-dimensional "space" is not Euclidean, but spherical space. From the latest results of the theory of relativity it is probable that our three-dimensional space is also approximately spherical, that is, that the laws of disposition of rigid bodies in it are not given by Euclidean geometry, but approximately by spherical geometry, if only we consider parts of space which are sufficiently great. Now this is the place where the reader's imagination boggles. "Nobody can imagine this thing," he cries indignantly. "It can be said, but cannot be thought. I can represent to myself a spherical surface well enough, but nothing analogous to it in three We must try to surmount this barrier in the mind, and the patient reader will see that it is by no means a particularly difficult task. For this purpose we will first give our attention once more to the geometry of two-dimensional spherical surfaces. K be the spherical surface, touched at S by a plane, E, which, for facility of presentation, is shown in the drawing as a bounded surface. Let L be a disc on the spherical surface. Now let us imagine that at the point N of the spherical surface, diametrically opposite to S, there is a luminous point, throwing a shadow L' of the disc L upon the plane E. 
Every point on the sphere has its shadow on the plane. If the disc on the sphere K is moved, its shadow L' on the plane E also moves. When the disc L is at S, it almost exactly coincides with its shadow. If it moves on the spherical surface away from S upwards, the disc shadow L' on the plane also moves away from S on the plane outwards, growing bigger and bigger. As the disc L approaches the luminous point N, the shadow moves off to infinity, and becomes infinitely great. Now we put the question, What are the laws of disposition of the disc-shadows L' on the plane E? Evidently they are exactly the same as the laws of disposition of the discs L on the spherical surface. For to each original figure on K there is a corresponding shadow figure on E. If two discs on K are touching, their shadows on E also touch. The shadow-geometry on the plane agrees with the disc-geometry on the sphere. If we call the disc-shadows rigid figures, then spherical geometry holds good on the plane E with respect to these rigid figures. Moreover, the plane is finite with respect to the disc-shadows, since only a finite number of the shadows can find room on the plane. At this point somebody will say, "That is nonsense. The disc-shadows are not rigid figures. We have only to move a two-foot rule about on the plane E to convince ourselves that the shadows constantly increase in size as they move away from S on the plane towards infinity." But what if the two-foot rule were to behave on the plane E in the same way as the disc-shadows L'? It would then be impossible to show that the shadows increase in size as they move away from S; such an assertion would then no longer have any meaning whatever. In fact the only objective assertion that can be made about the disc-shadows is just this, that they are related in exactly the same way as are the rigid discs on the spherical surface in the sense of Euclidean geometry. We must carefully bear in mind that our statement as to the growth of the disc-shadows, as they move away from S towards infinity, has in itself no objective meaning, as long as we are unable to employ Euclidean rigid bodies which can be moved about on the plane E for the purpose of comparing the size of the disc-shadows. In respect of the laws of disposition of the shadows L', the point S has no special privileges on the plane any more than on the spherical surface. The representation given above of spherical geometry on the plane is important for us, because it readily allows itself to be transferred to the three-dimensional case. Let us imagine a point S of our space, and a great number of small spheres, L', which can all be brought to coincide with one another. But these spheres are not to be rigid in the sense of Euclidean geometry; their radius is to increase (in the sense of Euclidean geometry) when they are moved away from S towards infinity, and this increase is to take place in exact accordance with the same law as applies to the increase of the radii of the disc-shadows L' on the plane. After having gained a vivid mental image of the geometrical behaviour of our L' spheres, let us assume that in our space there are no rigid bodies at all in the sense of Euclidean geometry, but only bodies having the behaviour of our L' spheres. Then we shall have a vivid representation of three-dimensional spherical space, or, rather of three-dimensional spherical geometry. Here our spheres must be called "rigid" spheres. 
Their increase in size as they depart from S is not to be detected by measuring with measuring-rods, any more than in the case of the disc-shadows on E, because the standards of measurement behave in the same way as the spheres. Space is homogeneous, that is to say, the same spherical configurations are possible in the environment of all points. [This is intelligible without calculation - but only for the two-dimensional case - if we revert once more to the case of the disc on the surface of the sphere.] Our space is finite, because, in consequence of the "growth" of the spheres, only a finite number of them can find room in space. In this way, by using as stepping-stones the practice in thinking and visualisation which Euclidean geometry gives us, we have acquired a mental picture of spherical geometry. We may without difficulty impart more depth and vigour to these ideas by carrying out special imaginary constructions. Nor would it be difficult to represent the case of what is called elliptical geometry in an analogous manner. My only aim today has been to show that the human faculty of visualisation is by no means bound to capitulate to non-Euclidean geometry.
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Extras/Einstein_geometry.html","timestamp":"2014-04-18T05:36:08Z","content_type":null,"content_length":"31895","record_id":"<urn:uuid:2928ab5f-b531-4712-a6e1-88712f0e5e21>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
power series
Construct of the form ∑[n=0]^∞ a[n]x^n, where x is a formal variable and the a[n] are coefficients (usually coming from a ring, but sometimes from more exotic objects). Using the standard rules of algebra we can perform formal operations on the series, like addition and multiplication; others may be possible, depending on where we get our coefficients from. Sometimes we'll use a power series about x[0], by replacing "x" above with "(x-x[0])" throughout. In analysis we usually also demand that our power series converge in some neighbourhood. All Taylor series (and MacLaurin series) are power series.
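As a concrete illustration (a standard example, not part of the original entry), the formal product of two power series is again a power series, with coefficients given by the Cauchy product, and the geometric series shows the analytic side:

    \left(\sum_{n=0}^{\infty} a_n x^n\right)\left(\sum_{n=0}^{\infty} b_n x^n\right)
      = \sum_{n=0}^{\infty} \Big(\sum_{k=0}^{n} a_k\, b_{n-k}\Big) x^n,
    \qquad
    \sum_{n=0}^{\infty} x^n = \frac{1}{1-x} \quad \text{for } |x| < 1.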
{"url":"http://everything2.com/title/power+series","timestamp":"2014-04-20T21:48:56Z","content_type":null,"content_length":"29167","record_id":"<urn:uuid:ff5f8d3a-9de9-4216-b1a1-53117c839308>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
User:Franklin T. Adams-Watters From OeisWiki I graduated from the University of Chicago in 1976 with a combined Bachelors and Masters degree in Mathematics. After a year in graduate school at MIT, I dropped out to program computers full time. I've been mostly doing that ever since. While my mathematical interests are fairly broad, I have had a particular interest with respect to the OEIS in partitions. It is my thesis that questions of the form "how many partitions of n are there ..." of some type, while undeniably interesting, have tended to obscure other interesting questions one can ask about them.
{"url":"http://oeis.org/wiki/User:Franklin_T._Adams-Watters","timestamp":"2014-04-19T12:07:27Z","content_type":null,"content_length":"12674","record_id":"<urn:uuid:7c0d3c17-b14b-4c73-9d24-371cc891a012>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
The Official Terry Pratchett Forums Re: Pictures that leave you Speechless ChristianBecker wrote:You use the units that come with the stuff in the formula: m = mass; SI unit of mass is kg c = speed of light; SI unit of speed is meters per second = m·s⁻¹ So, after putting it together, the units will be kg · m² : s⁻² (s⁻² = 1/s²) kg times meter squared divided by second squared = Joule, the SI unit for energy. I've always wondered. Thanks. "What have you been doing since you stole that antique TARDIS of yours, since you first landed on Skaro? Shouting 'Look at me!!! I'm not fighting a war!', while you battle the Daleks all the way through space and time." -the Master Re: Pictures that leave you Speechless Lol i remember when that happened But yeah, thats the bit that was never explained about the equation. Thanks "The reason an author needs to know the rules of grammar isn't so he or she never breaks them, but so the author knows how to break them." Re: Pictures that leave you Speechless Yes, you've got to convert at some point in the process, lest you end up like the Mars Orbiter in 1999 Alternatively you could use a different formula that has been adapted to be used with lbs and inches. If you end up with Joule as the unit for energy, though, that formula WILL be E = mc² with some converting done. On with their heads! I'm the clown prince of fools if you don't get the joke it's your loss Love and laughter you see are the new currency 'cause greed's coinage is not worth a toss Exile yourself to the unforgiving continent of Wraeclast! Re: Pictures that leave you Speechless so no matter the original units for mass used you always convert it to kg? "The reason an author needs to know the rules of grammar isn't so he or she never breaks them, but so the author knows how to break them." Re: Pictures that leave you Speechless You use the units that come with the stuff in the formula: m = mass; SI unit of mass is kg c = speed of light; SI unit of speed is meters per second = m·s⁻¹ So, after putting it together, the units will be kg · m² : s⁻² (s⁻² = 1/s²) kg times meter squared divided by second squared = Joule, the SI unit for energy. On with their heads! I'm the clown prince of fools if you don't get the joke it's your loss Love and laughter you see are the new currency 'cause greed's coinage is not worth a toss Exile yourself to the unforgiving continent of Wraeclast! Re: Pictures that leave you Speechless so a 21.5 megaton bomb? little less then ratios i had read, but still massive. i never did get that energy equation (or should i say it was never really explained too well), specifically what units of measure you used in the calculation. like in this example kg. and which units of energy it would be converted into. "The reason an author needs to know the rules of grammar isn't so he or she never breaks them, but so the author knows how to break them." Re: Pictures that leave you Speechless raptornx01 wrote:what was the supposed matter to energy ratio for anti-matter weapons? something like 1 kilogram of anti-matter equaling the power of 28,000 hiroshima bombs? or to put it another way, close to the power released when Mt. Saint Helens exploded in 1980. 
Quite easy to calculate: E = mc² (no, not michelancello) → E = 1kg · 9·10¹⁶m²·s⁻² = 9·10¹⁶ Joule Nuclear bombs are usually measured in TNT-equivalents: 1kT TNT = 4,184 · 10¹² J; the Hiroshima bomb had a force of about 13.4kT → 9·10¹⁶ Joule /(13.4 · 4,184 · 10¹² J) = 1605.25 So, 1kg of anti-matter would be as powerful as 1605 and a quarter Hiroshima bombs. On with their heads! I'm the clown prince of fools if you don't get the joke it's your loss Love and laughter you see are the new currency 'cause greed's coinage is not worth a toss Exile yourself to the unforgiving continent of Wraeclast! Re: Pictures that leave you Speechless chris.ph wrote:the yanks (obviously) are on about putting rail guns on destroyers They would be. Railguns are accurate and highly destructive. And as I heard somewhere, there is no such thing as "overkill". There is only "open fire" and "I need to reload". "What have you been doing since you stole that antique TARDIS of yours, since you first landed on Skaro? Shouting 'Look at me!!! I'm not fighting a war!', while you battle the Daleks all the way through space and time." -the Master Re: Pictures that leave you Speechless the yanks (obviously) are on about putting rail guns on destroyers measuring intelligence by exam results is like measuring digestion by turd length Yes, that seems to be a common problem with the equation. Nearly everyone knows it, few people could actually use it. I remember that I, too, also didn't actually know how to use it. As I picked up some physics here and there during my studies, it became clearer. A large portion of understanding physics is getting the units right. It might seem tedious when teachers at school and tutors, professors etc. at university insist you always write down the units, even when only calculating something trivial like speed or how far an accelerating vehicle gets in a given time. But once you get used to it, a lot of it becomes clear. Units are like numbers which with you can calculate. In the end, when you cancel down the result, you usually get rid of some m, kg, s, etc. and the rest is some handy unit, like m/s, kg · m²/s². Was really an eye opener when I first realized this. It's also handy in chemistry when calculating the concentration etc. of stuff. On with their heads! I'm the clown prince of fools if you don't get the joke it's your loss Love and laughter you see are the new currency 'cause greed's coinage is not worth a toss Exile yourself to the unforgiving continent of Wraeclast! Re: Pictures that leave you Speechless I must have seen that equation thousands of times, or more, in my life. And not just when its thrown out there out of context by someone who doesn't know any better (like how the famous "Romeo, Romeo, where art thou Romeo" line is never used right since the line is actually asking his name, not where he is). I've seen it in school and out of school. I've seen it science specials, in docs about physics in general, or Einstein specifically. but never has anyone said what it meant (beside the basic Energy equals mass times the speed of light squared). its like they are saying either "any idiot should know what it means" or "don't worry your pretty little head about it, its too complicated for you" "The reason an author needs to know the rules of grammar isn't so he or she never breaks them, but so the author knows how to break them." 
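A few lines of C++ are enough to check the arithmetic in the posts above; this is only an illustrative sketch, assuming SI units, the rounded value c = 3.0e8 m/s, and the 13.4 kT Hiroshima figure quoted earlier.

    #include <iostream>

    int main()
    {
      const double c    = 3.0e8;       // speed of light in m/s (rounded)
      const double m    = 1.0;         // mass converted to energy, in kg
      const double E    = m * c * c;   // E = mc^2, in joules (kg * m^2 / s^2)

      const double kT   = 4.184e12;    // joules per kiloton of TNT
      const double hiro = 13.4 * kT;   // Hiroshima bomb, roughly 13.4 kT

      std::cout << "Energy: " << E << " J\n";                      // 9e16 J
      std::cout << "Megatons of TNT: " << E / (1000 * kT) << "\n"; // about 21.5
      std::cout << "Hiroshima bombs: " << E / hiro << "\n";        // about 1605
      return 0;
    }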
Re: Pictures that leave you Speechless raptornx01 wrote: like how the famous "Romeo, Romeo, where art thou Romeo" line is never used right since the line is actually asking his name, not where he is The arguments I've had about that one What's up with this glass? Excuse me? Excuse me? This is my glass? I don't think so. My glass was full! And it was a bigger glass! Re: Pictures that leave you Speechless i was close "The reason an author needs to know the rules of grammar isn't so he or she never breaks them, but so the author knows how to break them." Re: Pictures that leave you Speechless raptornx01 wrote:so no matter the original units for mass used you always convert it to kg? Uh, speaking of images that you leave you speechless, just exactly is that little cartoon girl doing in your sig image, Raptor? Re: Pictures that leave you Speechless can't stop rockin' "The reason an author needs to know the rules of grammar isn't so he or she never breaks them, but so the author knows how to break them."
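For readers who want to reproduce the arithmetic from the thread above, here is a minimal sketch of the mass-energy bookkeeping. It is not from the forum itself: the value of c is taken to more digits than the thread's rounded 9·10¹⁶ m²·s⁻², so the final count comes out near 1603 rather than the 1605 quoted above; the 13.4 kT Hiroshima yield is the figure used in the thread.

// Minimal sketch of the arithmetic discussed above: mass-energy of 1 kg via
// E = m * c^2, expressed in kilotons of TNT and in "Hiroshima equivalents"
// (using the 13.4 kT yield quoted in the thread).
#include <cstdio>

int main() {
    const double c = 2.998e8;                   // speed of light, m/s
    const double m = 1.0;                       // mass converted to energy, kg
    const double joulesPerKiloton = 4.184e12;   // energy of 1 kT of TNT, J
    const double hiroshimaKilotons = 13.4;      // yield quoted in the thread, kT

    double E = m * c * c;                       // energy in joules (kg * m^2 / s^2)
    double kilotons = E / joulesPerKiloton;
    double hiroshimas = kilotons / hiroshimaKilotons;

    std::printf("E = %.3e J = %.0f kT of TNT = about %.0f Hiroshima bombs\n",
                E, kilotons, hiroshimas);
    return 0;
}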
{"url":"http://www.terrypratchettbooks.com/forum/viewtopic.php?f=1&t=4432&start=345","timestamp":"2014-04-17T04:54:47Z","content_type":null,"content_length":"49299","record_id":"<urn:uuid:350aca72-aedd-4326-b9c9-cba3cf56e226>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
06523 Kinetics. Lecture 10: Transition state theory; thermodynamic approach; statistical thermodynamics (PowerPoint presentation, 14 slides). The steric factor P can be related to the change in disorder at the transition state ...
{"url":"http://www.powershow.com/view/11fbce-NDY2Y/06523_Kinetics_Lecture_10_Transition_state_theory_Thermodynamic_approach_Statistical_thermodynamics_powerpoint_ppt_presentation","timestamp":"2014-04-17T04:44:42Z","content_type":null,"content_length":"108877","record_id":"<urn:uuid:a0705436-11d3-454b-b386-569d1300b770>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
C++ Assignment

January 6th, 2004, 03:33 AM
C++ Assignment
Wanted to know if I could get a little help with an assignment. I have to create a program that can guess the factors of a number. From what I figure, if I can divide X (the number to be factored) by Y evenly, then I have figured out a factor. The problem is I have no idea how to test for a whole number, because I could have a loop increment Y each time until Z is a whole number. Wondered if anyone had an idea of how to test this. I tried a few things and came up short. Any help would be appreciated.

January 6th, 2004, 03:40 AM
Can you PM me some details? I think we can work this out.

January 6th, 2004, 03:04 PM
If you want to test for a whole number result from a division, use the modulus operator. This returns the remainder of an integer division, so if the remainder is 0 the result will be a whole number:

int x = 6;
int y = 3;
if ( x % y == 0 )
    std::cout << "y is a factor of x" << std::endl;
else
    std::cout << "y is not a factor of x" << std::endl;

Once you get used to using the modulus operator, writing the program you specify should be a piece of cake. However, you need to do it yourself to understand how everything works, especially if it's an assignment rather than some code you're writing just for fun/learning.

January 6th, 2004, 03:41 PM
Keep in mind when checking for factors that you can keep it to 1/2 or lower than the number, since whatever you end up with as the result of the division can be automatically included. This should speed up your search, since you can ignore numbers higher than A / 2, where A is the number to be factored. An example of what I'm talking about:

A = 128
B = (A / 2) = 64
∴ you know that both 2 and 64 are factors of A.

You also know that 64 is the second highest factor of 128, so there is no need to check for anything higher. If you were to do this in a while loop, it would be rather trivial to record both factors as well as limit the search. I've seen a LOT of developers ignore simple mathematical facts like this when writing such pieces of code, and IMO it's bad practice to not apply math properly. It should be noted that a similar method may be used to calculate primes, for which I developed a small C application.

January 6th, 2004, 04:17 PM

#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
    int number = 0;
    int total = 0;

    cout << "Please enter the number you wish to check for factors.\n";
    cin >> number;

    // we don't care about 0 or 1; nothing divides by 0 and 1 divides everything
    for (int i = 2; i < number; i++)
    {
        // % is the mod operator: it divides two numbers and returns the REMAINDER
        // if the remainder is 0, then we know that number can be divided by i (a factor)
        if (number % i == 0)
        {
            cout << "Number " << i << " is a factor of " << number << ".\n";
            // total tracks how many factors were found (used to detect primes)
            total++;
        }
    }

    if (!total)
        cout << "Number " << number << " appears to be prime.\n";
    else
        cout << "Found " << total << " factors.\n";

    return 0;
}

Heh, had to keep slapping myself to use C++ instead of perl (every var had a $ in front at first :P ).

January 6th, 2004, 08:47 PM
Thanks for the help guys. It's appreciated.

January 7th, 2004, 05:50 AM
As an example of the modifications I mentioned above, here is the same thing nebulus200 did, but with some minor modifications that should result in a net speed gain.
#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
    int number = 0;
    int altnum = 0;
    int total = 0;

    cout << "Please enter the number you wish to check for factors.\n";
    cin >> number;

    // only search while i*i <= number; each divisor i found below the square
    // root pairs with the co-factor altnum = number / i, so nothing is missed
    for (int i = 2; i * i <= number; i++)
    {
        if (number % i == 0)
        {
            altnum = number / i;
            cout << "Number " << i << " is a factor of " << number << ".\n";
            total++;
            // avoid reporting the same factor twice for perfect squares
            if (altnum != i)
            {
                cout << "Number " << altnum << " is a factor of " << number << ".\n";
                total++;
            }
        }
    }

    if (!total)
        cout << "Number " << number << " appears to be prime.\n";
    else
        cout << "Found " << total << " factors.\n";

    return 0;
}

Just a simple rework of the code originally posted by nebulus200.
{"url":"http://www.antionline.com/printthread.php?t=250944&pp=10&page=1","timestamp":"2014-04-20T02:57:48Z","content_type":null,"content_length":"13120","record_id":"<urn:uuid:7b3ed611-e74d-414f-b80b-7a7f25f27555>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Pigeon hole-proof

February 10th 2008, 11:37 PM #1
Pigeon hole-proof
You're supposed to show this with the pigeonhole principle. Consider the set {1, 2, ..., 2n}. Choose from this set n+1 different integers. Prove that there are always two integers among the chosen ones whose greatest common divisor is 1.

February 11th 2008, 12:18 AM #2
Hmm, I'll give it a shot, but no promises :/
Since your set is {1, 2, 3, ..., 2n}, you will have 2n elements. Let us group every two elements together as such:
Set A = {{1,2}, {3,4}, {5,6}, ..., {2n-1, 2n}}
Since each element in A contains consecutive integers, if both integers from any element are chosen there will be two integers among the chosen which have a greatest common divisor of 1 (consecutive integers are always coprime, because any common divisor would also divide their difference, which is 1). As there are 2 integers in each element, there are a total of n elements in this set. Then if you choose n integers, one integer may come from each element, so each element may have 1 integer selected from it. Since you are taking n+1 integers, you must take one more integer; however, there are no more empty elements to choose from, so you must choose from one of the elements you have already drawn from. So because there are n elements in A, and you must choose n+1 integers, by the pigeonhole principle you must choose at least two from the same element. Thus there will be at least two integers chosen which are consecutive, and therefore two integers chosen whose greatest common divisor is one.

February 11th 2008, 01:02 AM #3
Thank you, that was useful.
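The argument above is complete on its own; the following is only an illustrative brute-force check (not part of the proof) that every choice of n+1 integers from {1, ..., 2n} contains two consecutive integers, verified exhaustively for small n by representing subsets as bitmasks.

// Brute-force check for small n: every (n+1)-element subset of {1, ..., 2n}
// contains two consecutive integers (which are automatically coprime).
#include <cstdio>

int popcount(unsigned v) {
    int c = 0;
    for (; v; v &= v - 1) ++c;
    return c;
}

bool hasConsecutivePair(unsigned mask) {
    // bit i (0-based) represents the integer i + 1, so adjacent set bits
    // correspond to consecutive integers
    return (mask & (mask >> 1)) != 0;
}

int main() {
    for (int n = 1; n <= 10; ++n) {
        bool ok = true;
        unsigned limit = 1u << (2 * n);   // all subsets of {1, ..., 2n}
        for (unsigned mask = 0; mask < limit && ok; ++mask)
            if (popcount(mask) == n + 1 && !hasConsecutivePair(mask))
                ok = false;
        std::printf("n = %2d : %s\n", n,
                    ok ? "every (n+1)-subset contains two consecutive integers"
                       : "counterexample found");
    }
    return 0;
}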
{"url":"http://mathhelpforum.com/discrete-math/27958-pigeon-hole-proof.html","timestamp":"2014-04-18T02:00:49Z","content_type":null,"content_length":"35580","record_id":"<urn:uuid:8090101e-e04b-45c3-a496-0db3183a4741>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5557282 - Height finding antenna apparatus and method of operation This invention relates to ground based radar antenna systems for use in tracking targets, and more particularly, to a new and improved dual beam ground based radar antenna system and method having a pair of feed horns and a reflector for determining the height of a tracked target. In the field of electronic tracking devices, the use of ground based radar antennas have long been recognized as an effective way to determine the range and bearing of a tracked target. Such ground based radar antenna installations are commonly found along the boarders and coastlines and on the military reservations of modern industrialized nations. Because of the continued military and industrial development of high altitude supersonic aircraft, radar antenna installations of the past are faced with a continuing difficulty of determining the altitude and three-dimensional position of an aircraft. In particular, when applied from a ground based radar installation, the basic measurement required for determining height is the elevation angle of the aircraft. In order to properly track the aircraft, the height of the target must be known. Generally, once the elevation angle of the target is known, the height or altitude of the target is derived from the elevation angle by trigonometric formulas. General concepts of the radar technology may be gleaned from a handbook entitled INTRODUCTION TO RADAR SYSTEMS authored by Merrill I. Skolnik, copyrighted 1962 edition by McGraw-Hill Book Company. Two of the basic beam patterns that exist are the pencil beam and the fan beam. The pencil beam may be generated with a metallic reflector surface shaped in the form of a paraboloid of revolution with the electromagnetic energy fed from a point source placed at the focus. Although a narrow beam can search a large sector or even a hemisphere, it is not always desirable because operational requirements place a restriction on the maximum scan time. The maximum scan time is defined as the time for the pencil beam to return to the same point in space. Therefore, the radar beam cannot dwell too long in any one radar location. This is especially true if there is a large number of locations to be searched. The number of locations to be searched can be materially reduced if the narrow pencil beam radar antenna is replaced by a beam in which one dimension is narrow while the other dimension is broad such as a fan-shaped pattern. One method of generating a fan beam is with a parabolic reflector shaped to yield the proper ratio between the azimuth and elevation beamwidths. Many long range ground based search radar antennas use a fan beam pattern that is narrow in azimuth and broad in elevation. When ground based search radar antennas employing fan beams are used against aircraft targets, no resolution in elevation is obtained. Therefore, no height information is available. One method of achieving elevation angle information for targets located by a fan beam search radar antenna is to employ an additional fan beam radar antenna with the narrow dimension in elevation instead of azimuth, as in the common height finding radar antenna. In this method, again the height finding radar antenna actually measures elevation angle rather than height. 
Because the number of locations that the fan beam radar antenna must search is considerably less than the number that the pencil beam radar antenna must search, the fan beam radar antenna can dwell longer in each location so that more return signals per target can be obtained. The rate at which a fan beam antenna may be scanned is a compromise between the rate at which target position information is desired (data rate) and the ability to detect weak targets (probability of detection). Note that the two are at odds with one another: the more slowly the radar antenna scans, the more pulses will be available for integration and the better the detection capability. On the other hand, a slow scan rate means a longer time between detections of the same target.

The simple fan beam antenna is usually inadequate for targets at high altitudes close to the radar antenna, because the fan beam antenna radiates very little energy in the high altitude direction close to the radar antenna. It is possible to modify the antenna pattern to radiate more energy at higher angles. One such technique for accomplishing high angle detection employs an antenna fan beam with a shape proportional to the square of the cosecant of the elevation angle. In the cosecant-squared antenna, the gain is a function of the elevation angle, and it should be noted that cosecant-squared antennas apply to airborne search radar antennas observing ground targets as well as ground based radar antennas observing airborne targets. The cosecant-squared antenna may be generated by a distorted section of a parabola or by a true parabola with a properly designed set of multiple feed horns. The pattern may also be generated with an array-type antenna. The cosecant-squared antenna has the important property that the echo power received from a target of constant cross-section at constant altitude is independent of the target's range from the radar. In theory, the mathematics illustrate that the echo power is independent of the range for the constant altitude target. However, in practice, the power received from an antenna with a cosecant-squared pattern is not truly independent of range because of the simplifying assumptions made. It should be noted that the cross-section of the target varies with the viewing aspect, the earth is not flat, and the radiation pattern of any real antenna can only approximate the desired cosecant-squared pattern. For preliminary design purposes, it may be assumed that a search radar antenna having a pattern proportional to csc²φ, where φ is the elevation angle, produces a constant echo-signal power for a target flying at constant altitude if certain assumptions are satisfied. Fan beam search radar antennas generally employ this type of pattern.

The design of a cosecant-squared antenna pattern is an application of synthesis techniques which are generally found in the prior art literature. The cosecant-squared pattern may be approximated with a reflector antenna by shaping the surface or by using more than one feed horn. The pattern produced in this manner may not be as accurate as might be produced by a well-designed antenna array, but operationally it is not necessary to approximate the cosecant-squared pattern very precisely. A common method of producing the cosecant-squared pattern employs a shaped reflector. The upper half of the reflector is a parabola and reflects energy from the feed horn in a direction parallel to the axis, as is known in the art.
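The following is a minimal numerical sketch (not part of the patent) of why a csc²(φ) gain pattern yields range-independent echo power from a constant-altitude target on a flat earth: since sin(φ) = h/R, a gain proportional to csc²(φ) grows as R², which exactly offsets the R⁴ spreading loss when the gain enters the radar equation twice. The altitude h and the pattern constant k are arbitrary illustration values.

// Numerical check: with G(phi) = k / sin^2(phi) and a target at constant
// altitude h, the radar-equation factor G^2 / R^4 is independent of range R,
// because sin(phi) = h / R on a flat earth.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    const double h = 3000.0;   // target altitude in meters (arbitrary)
    const double k = 1.0;      // arbitrary pattern constant
    for (double R = 5000.0; R <= 40000.0; R *= 2.0) {
        double phi = std::asin(h / R);                       // elevation angle
        double G = k / std::pow(std::sin(phi), 2);           // csc^2 pattern
        double echoFactor = (G * G) / std::pow(R, 4.0);      // stays constant
        std::printf("R = %8.0f m  phi = %6.2f deg  G^2/R^4 = %.6e\n",
                    R, phi * 180.0 / PI, echoFactor);
    }
    return 0;
}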
The lower half of the reflector, however, is distorted from the parabolic contour so as to direct a portion of the energy in the upward direction. A cosecant-squared antenna pattern can also be produced by feeding the parabolic reflector with two or more feed horns or, alternatively, by employing a linear array. If the feed horns are separated and fed properly, the combination of the secondary beams will give a smooth cosecant-squared pattern over some range of angle. A reasonable approximation to the cosecant-squared pattern has been obtained by employing two feed horns, while a single feed horn combined with a properly located ground plane has also been utilized to generate the pattern.

An example of a height finding system of the past that employed a pencil beam antenna was comprised of a rotator-type antenna or an array-type antenna, each of which provided focus to the antenna beam. In the rotator-type antenna, the pencil beam is scanned in elevation as it is rotated. The angle of the pencil beam at the instant the signal is returned is labelled the elevation angle, which is a necessary element for determining the height of the tracked target. Once the return beam is received, the elevation angle is measured for calculating the height of the tracked target. There are many applications in which a knowledge of target height may not be necessary. An obvious example is where the target is known to lie on the surface of the earth, and its position is determined by range and azimuth. However, there are many instances in which a knowledge of the target's position in three dimensions is essential. The elevation angle can be used as the third position coordinate, but it is often more convenient to use height. Height may be derived from the measurement of range and elevation angle. The use of height, instead of the elevation angle from which it is derived, is more desirable in those applications where it is apt to be less variant than the elevation angle. This is usually true for aircraft targets or for satellites with nearly circular orbits.

Three-dimensional position information can be obtained with a symmetrical pencil-beam antenna. Both the azimuth and the elevation angle can be determined from a single observation with a single radar antenna. The pencil beam might search a hemispherical volume in space by rapidly nodding in elevation and slowly rotating in azimuth, or alternatively, the beam could rotate in azimuth while elevating slowly to trace out a helical-scan pattern. The chief disadvantage of a radar antenna with a pencil beam is that it usually requires a relatively long time to cover the volume of interest. The search time depends on the number of hits to be obtained from each target. The greater the number of hits per scan, the more accurate will be the angle measurement. The time t_s required to scan an antenna of azimuth beamwidth θ_B and elevation beamwidth φ_B over a total azimuth angle θ_t and a total elevation angle φ_t, when n pulses are to be received from each resolution cell (with a pulse repetition frequency f_r), is

t_s = (θ_t/θ_B) · (φ_t/φ_B) · (n/f_r)   (1)

Consider a pencil beam with 2° beamwidths in azimuth and elevation which must cover 360° of azimuth and 60° of elevation with a pulse repetition frequency of 1,000 Hz. If the scanning fluctuations are to be attenuated by 30 dB, at least 38 pulses must be processed per angular resolution cell. Substituting these values into equation (1) results in a frame time of 4.05 minutes. A 600-knot aircraft could fly 40.5 nautical miles in this time, which is a relatively long distance between observations.
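A minimal sketch of equation (1), assuming the coverage figures from the example above (2° beamwidths, 360° of azimuth, 60° of elevation, 1,000 Hz). It reproduces the three-hits-per-cell frame time of 0.27 minutes quoted just below; the function name and the specific values are illustrative assumptions, not part of the patent.

// Frame time from equation (1): t_s = (theta_t/theta_B)(phi_t/phi_B)(n/f_r)
#include <cstdio>

double scanTimeSeconds(double azCoverage, double azBeamwidth,
                       double elCoverage, double elBeamwidth,
                       double pulsesPerCell, double prfHz) {
    double cells = (azCoverage / azBeamwidth) * (elCoverage / elBeamwidth);
    return cells * pulsesPerCell / prfHz;
}

int main() {
    // 3 hits per cell: 5,400 cells * 3 / 1,000 Hz = 16.2 s, i.e. 0.27 minutes
    double t3 = scanTimeSeconds(360.0, 2.0, 60.0, 2.0, 3.0, 1000.0);
    std::printf("3 hits per cell : %.1f s (%.2f min)\n", t3, t3 / 60.0);
    return 0;
}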
If three hits per scan were satisfactory, the frame time would be 0.27 minutes and the same aircraft would travel 2.7 nautical miles between observations. The pencil beam will generally be directed at targets above the ground clutter. The rotation of the pencil beam in azimuth may be mechanical, as in conventional ground-based search radar systems. A rapid nodding scan is often used in elevation and may also be performed mechanically by moving the entire antenna. Alternatively, the parabolic torus with an organ-pipe scanner, or the planar array, or a linear array feeding a parabolic cylinder might also be used to scan the beam. The linear array could be electronically scanned in elevation and mechanically scanned in azimuth. Frequency scanning is a convenient form of electronic scanning for this application if the necessary bandwidth is available. Elevation information may be obtained by stacking a number of narrow pencil beams in elevation and noting which beam contains the echo. Each of the stacked beams feeds an independent receiver. A separate transmitter might be used for each beam, or alternatively, a separate broad-coverage transmitting beam could illuminate the volume common to all narrow receiving beams. The overlapping pencil beams may be generated with a single reflector antenna fed by a number of horns--one for each beam. The beams may also be generated with an array antenna whose elements are combined to form a number of overlapping beams. By interpolating the voltages between adjacent beams of the stacked-beam configuration, it is possible to obtain a more precise measurement of the elevation angle than can be obtained with a single stationary pencil beam. In many radar applications the fan beam is used to search the required volume. Even though the broad beamwidth of the fan beam in elevation does not permit the measurement of the elevation angle to any degree of precision, it is possible, in some cases, to obtain a rough approximation of target height. One technique makes use of the phenomenon whereby, under cetain circumstances, the pattern of a broad fan beam is broken into many smaller lobes by interference between the direct wave and the wave reflected from the surface of the earth. "Lobing" is more likely to occur at the lower radar frequencies and when the beam is located over water or other good reflecting surfaces. If the interference lobe pattern of the antenna is known--either by calculation or by calibration, using a known aircraft target--the range at which a target is first detected by the radar antenna is a measure of target height. The path of the target can be followed through the lobe pattern to obtain confirmation of the height. This technique is not too satisfactory since it offers but a crude estimate of height; it is not too reliable; it depends upon too many uncontrollable factors such as the propagation conditions; and it requires as a priori knowledge of the radar cross section of the target. Another technique for measuring elevation angle or height includes the use of two antennas mounted one above the other. The elevation angle is measured by comparing the phase differences in the antennas as in an interferometer. Elevation angle can also be measured by generating two overlapping elevation fan beams with a single reflector as in the amplitude-comparison monopulse radar. 
The sum and difference signals are used just as in the monopulse tracking radar, except that the angle-error voltage does not control a servo loop but is used directly as a measure of the elevation angle.

The usual method of obtaining both azimuth- and elevation-angle measurements involves two separate fan-beam radar antennas. One of the two radar antennas is a vertical fan beam--narrow in azimuth angle, broad in elevation angle--rotating in azimuth to measure the range and azimuth. This is the conventional search radar antenna. A separate radar with a horizontal fan beam--narrow in elevation angle, broad in azimuth angle--is used to measure elevation. This is called a height finder. The range and azimuth obtained with the search radar antenna can be used to position the height finder in azimuth. The height finder searches for the target by scanning in elevation. Upon acquiring a target at the same range as indicated by the search radar antenna, it proceeds to nod about the target at a rapid rate to accurately determine the center of the beam. The search radar antenna and the height-finder radar antenna may be operated at two separate locations, or they may be mounted back-to-back on the same pedestal.

Another height-finding technique employed in the past is the V-beam radar antenna. This consists of two fan beams, one vertical and the other tilted at some angle to the vertical. The separation between the vertical beam and the slant beam may be 45°, and the time between observations of the same target in the two beams depends upon the target range and height. It can be shown that, for a 45° separation, the height h of a target at a range R is

h = R·sin(Δω) / √(1 + sin²(Δω))   (2)

where Δω = azimuth rotation between beams = ω_s·t_h, ω_s = azimuth rotation rate, rps, and t_h = time between observations, sec. Although an angle of 45° between the vertical beam and the slant beam is typical, there is some advantage in making the angle smaller if the radar must operate with a high traffic density. The larger the number of targets, the more difficult is the problem of correlating the echoes from the two beams. The closer the beams, the easier it is to correlate the echoes.

As the beam is scanned in elevation, the elevation angle may be measured directly by a mechanical system. Since the pencil beam is always orthogonal to the radar antenna transmitter, the elevation angle can be measured as with a protractor-type measuring device assembled directly to the rotator-type antenna or with a synchro-servo measuring system. However, in a phased array radar antenna system, the pencil beam may be scanned in elevation by electronic phase shifters for achieving a rapid scan rate and for measuring the elevation angle. This type of system is employed when the data rate is important in tracking and is usually dependent upon the dynamics of the target. The main problem associated with the phased array pencil beam radar antenna is that in order to determine the target height by state-of-the-art methods, the system becomes very expensive to develop.

Certain radar systems of the past are based on a comparison of the amplitudes of echo signals received from two or more antenna positions. Some systems such as the sequential-lobing and the conical-scan techniques use a single, time-shared antenna beam while other monopulse techniques use two or more simultaneous beams. The difference in amplitudes in the several antenna positions is proportional to the angular error. The elevation angle may also be determined by comparing the phase difference between the signals of two separate antennas.
Unlike the antennas of amplitude-comparison trackers, the antennas employed in phase-comparison systems are not offset from the axis. The individual boresight axes of the antennas are parallel, causing the radiation to illuminate the same volume of space. The amplitudes of the target echo signals are essentially the same from each antenna beam but the phases are different. The measurement of the elevation angle by comparison of the phase relationships of the signals from the separated antennas of a radio interferometer is well known in the art and has been used as a passive instrument with the source of the energy being radiated by the target itself. A tracking radar antenna which operates with phase information is similar to an active interferometer and has been referred to as a phase-comparison monopulse or interferometer radar antenna. Two receiving antennas are employed which are separated by a distance "d". Mathematical concepts have been derived for determining the electrical phase angle between the feedhorns. The electrical phase angle is a function of the distance "d", the elevation angle and the wavelength of the received energy. It should be noted that for an antenna operating on the phase-comparison monopulse or interferometer radar antenna principles, the distance "d" is limited. Thus, for the interferometer radar antenna to be able to substantially monitor the total hemisphere, then the distance "d" must be less than or equal to one-half the wavelength of the received energy to avoid elevation angle ambiguities. In the general case, the frequencies of interest are those from the X-band to the L-band. Thus, if the distance "d" is to satisfy the above constraint, then the two antennas must be small. This is because the sum of the distance between the center lines of the two antennas must be equal to or less than one-half the wavelength of the received energy to avoid elevation angle ambiguities. Since the wavelength of frequencies of the L-band are approximately two feet while the wavelengths of the frequencies of the X-band are in the range of from one inch to two inches, the two antennas of the interferometer radar must be small. Generally, the radar range equation dictates that the bigger the antenna, the greater the gain or electronic amplification of the received signal. The radar equation generally describes the power of the received return signal in terms of the power transmitted from the radar antenna, the gain at the transmitting antenna, the gain at the receiver antenna, the wavelength of the received energy, the radar cross-section and the range from the radar antenna to the target. A major problem associated with the interferometer radar is that the receive aperture is too small and cannot adequately receive the signal reflected from the target. Thus, if the receive aperture is too small, the antenna receiver is generally ineffective for long distance reception. The pair of antennas could be made larger to boost the gain by using a parabolic reflector. Under these conditions, the distance "d" becomes larger through the main lobe region and the effective elevation angle is depressed resulting in a small elevation coverage. Consequently, the interferometer radar antenna can only measure the elevation of a target within a limited specified range. If the target were above a particular elevation angle, the radar antenna would not detect it. Therefore, in the interferometer radar antenna, two potential conditions exist. 
The first condition is when the distance "d" is within a specified length, providing a larger elevation coverage but a low gain of the received signal. The second condition exists when the two antennas are made larger, resulting in a higher gain of the received signals but a small elevation coverage. Hence, those concerned with the development and use of height finding dual beam antennas in the radar field have long recognized the need for improved radar antenna tracking systems which provide a shaped reflector combined with a closely spaced pair of feedhorns for providing a greater elevation angle coverage than if the reflector were shaped as a standard parabola and for providing more amplification gain than if the feedhorns had been the only receiving elements, while simultaneously eliminating elevation angle ambiguities. Further, the shaped reflector of the improved radar antenna system should achieve a cosecant-squared antenna pattern, permit the determination of the three-dimensional position of the target by the addition of the azimuth angle, and be economical to manufacture in comparison with prior systems and methods of height finding.

Briefly, and in general terms, the present invention provides a new and improved radar antenna construction having a shaped reflector which substantially increases the range of elevation angle coverage, and which significantly increases the received signal gain over similar types of prior art radar designs while simultaneously eliminating elevation angle ambiguities. Moreover, the radar construction of the present invention approximates a cosecant-squared antenna pattern, permits the calculation of the target height and three-dimensional position, and is economical to manufacture compared to radar systems of the past.

Basically, the present invention is directed to an improved ground based radar system and method of operation for increasing the range of elevation angle coverage and for increasing the amplification of returned radar signals while simultaneously eliminating elevation angle ambiguities. This is accomplished by modifying the design of the radar system of the past by providing a shaped reflector and by adjusting the spacing of a pair of feedhorns. In accordance with the invention, the returned signals are collected by the shaped reflector and directed to the respective feedhorns. The shape of the reflector substantially increases the range of elevation angle coverage over the range previously available from standard parabolic reflectors and significantly increases the received signal gain over the gain available if the feedhorns were the primary receiving elements. The feedhorns are separated by a few wavelengths of the received signal energy. Ambiguities in the elevation angle are avoided because the reflector restricts the elevation angle seen by the feedhorns and allows a region of space, typically 30°, to be translated into a much smaller angle to avoid the ambiguities. The smaller angle is then translated into two times 360° of electrical phase difference between the feedhorns, even though the feedhorns may be spaced at a distance greater than one-half wavelength (λ/2).

In accordance with the improved method of the present invention, once the return signals are received, their amplitudes in decibels (dB) are compared and their phase difference is determined. The amplitudes in decibels (dB) are then plotted against the elevation angle on one graph having a pair of curves matching the number of feedhorns.
Then, the electrical phase difference between the pair of curves is plotted against the elevation angle on a second graph. The character of the two curves relative to one another may be mapped into the phase difference curve for determining the electrical phase relationship between the return signals. Thus, knowing the electrical phase difference between the return signals, the elevation angle of a tracked target may be determined. Then, with knowledge of the elevation angle, the azimuth angle (from planar rotation) and the range of the target, the three-dimensional position of the target can then be calculated. The new and improved ground based radar system and method of operation of the present invention substantially increases the range of elevation angle coverage over the range previously available from standard parabolic reflectors and further increases the received signal gain over similar types of prior art radar antenna designs in which the feedhorns were the primary receiving elements. This is accomplished while simultaneously eliminating elevation angle ambiguities. Further, the radar construction approximates a cosecant-squared antenna pattern, permits the calculation of the target height and three-dimensional position in space, and is economical to manufacture compared with other radar systems of the past. The method of operation described herein can also be adapted to existing radar systems. These and other features and advantages of the invention will become apparent from the following more detailed description, when taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features of the invention. FIG. 1 is a side elevational view of a first three-dimensional radar system of the prior art; FIG. 2 is a planar view of the horizontal position of a target with respect to a fixed reference heading; FIG. 3 is a planar view of a second three-dimensional radar system of the prior art; FIG. 4 is a graphical representation of a single beam antenna pattern of the prior art; FIG. 5 is a graphical representation of a dual beam antenna pattern of the prior art; FIG. 6 is a side elevational view of a two-dimensional geometric construction of the prior art employed for determining the height of a tracked target; FIG. 7 is a view of a three-dimensional geometric construction of the prior art employed for determining the height of a tracked target; FIG. 8 is a planar view of a height finding radar system in accordance with the present invention; FIG. 9 is a graphical representation of the (received signal) amplitude gain (dB) versus elevation angle (degrees) for a high beam and a low beam; and FIG. 10 is a graphical representation of the electrical phase difference between a high beam and a low beam (degrees) versus elevation angle (degrees). As shown in the drawings for purposes of illustration, the invention is embodied in a ground based radar system 100 of the type having a shaped reflector 102 for substantially increasing the range of elevation angle coverage for height determination of a target 104 and having a pair of spaced feedhorns 106, 108 for transmitting a signal to and receiving a returned signal from the target, the gain of the return signal being significantly increased while simultaneously eliminating elevation angle ambiguities. In ground based radar systems known in the art, the basic parameter measurement required for determining the height of a tracked target is the elevation angle φ of the target. 
In order to properly track the target, the height must be known, and once the elevation angle φ is determined, the height or altitude may be derived by trigonometric formulas. An example of a radar system of the past included a rotator type antenna 120 which provided focus to a pencil beam 122. In the rotator antenna, the pencil beam is scanned in elevation as it is rotated through 360° of azimuth. The angle of the pencil beam at the instant a returned signal is received is identified as the elevation angle φ. The angle φ is a necessary element for determining the height of the tracked target. The rotator antenna 120 is driven in a circular and scanning motion by a rotator control 126. The pencil beam 122 is shown scanning through a vertical arc while a transmitter 128 in communication with the antenna 120 is located a distance R from a target T. Typically, the target is an aircraft which passes through the scanned sector monitored by the rotator antenna 120. Once the transmitted beam is returned from the target T, the elevation angle φ may be measured for calculating the height of the target. As the beam is scanned in elevation, the angle φ may be measured directly by a mechanical system. Since the pencil beam 122 is normally orthogonal to the antenna 120 (and, if not orthogonal, the offset can be determined), the angle φ can be measured as with a protractor type measuring device assembled directly to the rotator type antenna. Further refinements to the antenna 120 for measuring the angle φ may employ a synchro-servo measuring system.

The rotator antenna 120 may also be a phased array type antenna which provides elevation scanning to the pencil beam 122. In the phased array antenna, electronic circuitry is employed which changes the electrical phase of the received signal energy from one array element to the next, thereby scanning the pencil beam. The process is made possible by employing phase shifters (not shown) for effectively providing movement of the return signal through the circuitry. Other systems have employed circuitry for changing the frequency of the transmitted and received signals for effecting the electrical phase across the array. A serpentine (not shown) is a device employed for changing the phase as a function of frequency of each of the signals directed to each of the antenna elements across the array. The electronic radar systems such as the phased array employ the phase shifted receivers for achieving a rapid scan rate which is important in tracking the target T and is usually dependent upon the dynamics of the target. However, the main problem associated with the above described rotator type antennas is the expense associated with developing state-of-the-art methods for tracking the target T.

Generally, once the elevation angle φ has been determined, the height of the tracked target may be calculated as shown in FIG. 6. The calculations may be made for either a flat earth geometry or a round earth geometry. Once the elevation angle and the range to the target T are known and a flat earth geometry is assumed, the height of the target may be found simply by employing the trigonometric sine function. Further, if a reference heading 130 is selected, then the azimuth angle θ may be utilized to determine the angular displacement of the target T from the reference heading 130. This measurement applies where the target T lies in a plane which is parallel to the plane of the reference heading as shown in FIG. 2. The distance from the antenna 120 to the orthogonal projection of the target T into the azimuth plane is D.
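A minimal flat-earth sketch (not from the patent) of the trigonometry just described: with the slant range R, elevation angle φ, and azimuth angle θ measured, the height is h = R·sin(φ), the ground-plane distance is D = R·cos(φ), and the three-dimensional position follows from θ. All numeric values below are arbitrary illustration values.

// Flat-earth height and 3-D position from measured (R, phi, theta).
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    double R      = 20000.0;   // slant range to the target, m (assumed)
    double phiD   = 10.0;      // elevation angle, degrees (assumed)
    double thetaD = 45.0;      // azimuth from the reference heading, degrees (assumed)

    double phi = phiD * PI / 180.0, theta = thetaD * PI / 180.0;
    double h = R * std::sin(phi);     // height above the (flat) ground plane
    double D = R * std::cos(phi);     // ground-plane distance to the target
    double x = D * std::cos(theta);   // along the reference heading
    double y = D * std::sin(theta);   // perpendicular to the reference heading

    std::printf("h = %.1f m, D = %.1f m, x = %.1f m, y = %.1f m\n", h, D, x, y);
    return 0;
}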
A view illustrating the pencil beam antenna 120 located at the origin of a three-dimensional coordinate axis is shown in FIG. 7. The target 104 is located with respect to the reference heading 130 by employing the elevation angle φ, the azimuth angle θ, the planar distance D, and the range R. The vertical projection from the target to the ground plane is the height calculated by trigonometric formulas.

The measurement of the elevation angle φ by comparison of the phase relationships of signals from a pair of first and second antennas 140, 142 of a radio interferometer is well known in the art. The radio interferometer has been used as a passive instrument with the source of the energy being radiated from the target T itself. A tracking radar antenna which operates with phase information is similar to an active interferometer and has been referred to as a phase comparison monopulse radar system. The first antenna 140 has a dimension "a" designated by the distance 144, while the second antenna 142 has a dimension "b" designated by the distance 146. The first and second antennas are receiving antennas which are separated by a distance "d" designated by 148, as is shown in FIG. 3. Mathematical concepts have been derived for determining the electrical phase angle between the feedhorns. Such a concept in equation form is

Δe = (2π·d/λ)·sin(φ)   (3)

where λ is equal to the wavelength of the received energy, (d) is equal to the distance between the centers of the first and second antennas 140, 142, and Δe is equal to the electrical phase angle between the two adjacent feedhorns. The electrical phase angle is a function of the distance "d", the elevation angle φ, and the wavelength λ of the received energy. It should be noted that for an antenna operating on the phase comparison monopulse or interferometer radar principle, the distance "d" is limited. For the interferometer radar to be able to substantially monitor the total hemisphere without angle ambiguities, the distance "d" must be less than or equal to one-half the wavelength of the received energy. However, the above limitation on the distance "d" is in direct conflict with the desire to achieve a large antenna gain. This problem is discussed later in the context of the radar range equation.

The gain of a circular aperture antenna in equation form is

G = P_a·(π·D/λ)²   (4)

where (λ) is the wavelength of the received energy, (D) is equal to the diameter of the circular aperture, (P_a) is the antenna aperture efficiency (a number less than unity), and (G) is the gain of the antenna. Note that if two equal sized circular apertures were touching at their edges (e.g., immediately adjacent), then the distance between their centers (d) would be equal to their individual diameters (D). Also, note that the antenna gain (G) is proportional to the square of the diameter (D)-to-wavelength (λ) ratio. If the distance (d) between the antennas 140, 142 must be equal to or less than one-half the wavelength (λ), where for adjacent circular antennas (d=D), then the antenna gain in equation form would be

G = P_a·(π·D/λ)² ≤ P_a·π²/4   (5)

For an aperture efficiency of (P_a)=1, the maximum possible gain is provided, which is approximately 2 dB. This value is considered to be a low magnitude of gain. Generally, the radar range equation dictates that the bigger the antenna, the greater the gain or electronic amplification of the received signal.
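A minimal sketch (not from the patent) of equation (3) and of the ambiguity it creates when the spacing d exceeds λ/2: with d = 2λ, two elevation angles whose sines differ by λ/d = 0.5 produce phases exactly 2π apart and are therefore indistinguishable once the measured phase is wrapped into (-180°, +180°]. The wavelength, spacing, and test angles are illustrative assumptions. With d ≤ λ/2 the unwrapped phase never exceeds ±π over the visible hemisphere, so no such wrapping ambiguity can occur.

// Two elevation angles that an interferometer with d = 2*lambda cannot tell apart.
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;

// wrap an electrical phase angle into (-PI, +PI]
double wrapPhase(double e) {
    while (e > PI)   e -= 2.0 * PI;
    while (e <= -PI) e += 2.0 * PI;
    return e;
}

int main() {
    double lambda = 0.03;          // 3 cm wavelength (X-band, illustrative)
    double d = 2.0 * lambda;       // feed spacing of two wavelengths (assumption)
    double sines[] = { 0.1, 0.6 }; // sin(phi) values differing by lambda/d = 0.5

    for (double s : sines) {
        double phi = std::asin(s);                           // elevation angle
        double de  = 2.0 * PI * d / lambda * std::sin(phi);  // equation (3), unwrapped
        std::printf("phi = %6.2f deg -> wrapped phase = %+7.2f deg\n",
                    phi * 180.0 / PI, wrapPhase(de) * 180.0 / PI);
    }
    return 0;
}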
Such an equation may be represented as

P_r = P_t·G_t·G_r·λ²·σ / ((4π)³·R⁴)   (6)

The radar range equation generally describes the power of the received return signal (P_r) in terms of the power transmitted (P_t) from the radar system, the gain (G_t) at the transmitting antenna, the gain (G_r) at the receiver antenna, the wavelength squared (λ²) of the received energy, the radar cross-section (σ), and the range to the fourth power (R⁴) from the radar antenna to the target T.

A major problem associated with the interferometer radar is that the receive aperture (not shown) is too small and cannot adequately receive the signal reflected from the target T. Thus, if the receive aperture is too small, the first and second antennas 140 and 142 are generally ineffective for long distance reception. The pair of antennas 140, 142 could be made larger to boost the gain by using a pair of parabolic reflectors. However, under these conditions, the distance "d" would become larger. As a result, the effective elevation angle φ is depressed, resulting in a small elevation coverage. Consequently, the interferometer radar can only measure the elevation of a target T within a limited specified range. If the target were above the limited elevation angle range, the radar antenna would not detect the target T. Under these conditions, the electrical phase angle Δe of equation (3) would not be accurate for measuring the phase difference between the antenna signals. Therefore, in the phase-comparison monopulse or interferometer radar system, two potential conditions exist. The first condition is when the distance "d" is within a specified length which will provide a larger elevation angle coverage but a low gain of the signal returned from the target T. The second condition exists when the first and second antennas 140, 142 are made larger, resulting in a higher gain of the signals returned from the target T but with a corresponding smaller elevation angle coverage. A more detailed treatment of the phase-comparison monopulse radar technique can be found in Fig. 5.12 and the accompanying text of section 5.4, pages 181-182, of the INTRODUCTION TO RADAR SYSTEMS by Merrill I. Skolnik.

Generally, a plurality of antenna beam patterns have been developed in the past for tracking a target T. Examples of such patterns include a single beam antenna pattern 150 as is illustrated in FIG. 4. The antenna pattern 150 is graphically illustrated on a two dimensional graph having a vertical gain coordinate measured in decibels (dB) and a horizontal coordinate measuring the elevation angle φ. The antenna pattern 150 illustrates a main lobe region 152 centered on the graph, having a boresight 154 passing through a maximum gain point 156 located at the apex of the antenna pattern. Adjacent to the main lobe region is a side lobe region 158 illustrating returned signals having spatial frequencies of varying harmonics. A second example includes a dual beam antenna pattern 160 as shown in FIG. 5. The dual beam antenna pattern 160 is also graphically illustrated having a vertical gain axis measured in decibels (dB) and a horizontal coordinate axis measuring the elevation angle φ. The antenna pattern 160 includes a main lobe region 162 comprised of a pair of beams 164, 166 having a boresight 168 passing therebetween. Notice that the boresight 168 is in the vicinity of the maximum gain of the pair of main lobe beams 164, 166.
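A minimal sketch of the radar range equation (6); every numeric value here is an arbitrary illustration, not a figure from the patent.

// Received power from equation (6): P_r = P_t G_t G_r lambda^2 sigma / ((4 pi)^3 R^4)
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    double Pt = 1.0e6;      // transmitted peak power, W (assumed)
    double Gt = 1000.0;     // transmit gain, linear (30 dB, assumed)
    double Gr = 1000.0;     // receive gain, linear (assumed)
    double lambda = 0.1;    // wavelength, m (assumed)
    double sigma = 1.0;     // radar cross-section, m^2 (assumed)
    double R = 50.0e3;      // range to the target, m (assumed)

    double Pr = Pt * Gt * Gr * lambda * lambda * sigma
              / (std::pow(4.0 * PI, 3) * std::pow(R, 4));
    std::printf("received power = %.3e W\n", Pr);
    return 0;
}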
Since the boresight 168 does not pass through the maximum gain point as in the single beam antenna pattern 150, the condition existing in the dual beam antenna pattern 160 is referred to as off-boresight. A side lobe region 170 is illustrated adjacent to the pair of main lobe beams and is also comprised of a plurality of returned signals having spatial frequencies of varying harmonics.

Notwithstanding the systems developed in the past, there continues to be a recognized need for an improved radar tracking system having greater elevation angle coverage than is provided by a standard parabolic reflector and a corresponding greater amplification gain, while simultaneously eliminating elevation angle ambiguities. Further, the improved radar system should achieve a cosecant-squared antenna pattern, permit the determination of the three dimensional position of the target T by the addition of the azimuth angle and range, and be economical to manufacture in comparison with prior art height finding methods.

In accordance with the present invention, the shaped reflector 102 and the pair of spaced feedhorns 106, 108 cooperate to increase the range of coverage of the elevation angle φ of the target 104 beyond the coverage of a system employing a standard parabolic reflector and to significantly increase the gain of the signals returned to the ground based radar system 100 over other radar systems employing only feedhorns for collecting the returned signals. This is accomplished while simultaneously eliminating elevation angle ambiguities. Further, the radar system incorporates a construction which approximates a cosecant-squared antenna pattern, permits the calculation of the target height and three-dimensional position, and is economical to manufacture compared to radar systems of the past.

The first spaced feedhorn 106 directs energy from the transmitter 128 (shown in FIG. 6) to the shaped reflector 102 and outward to the target 104 in the ground based radar system 100, as is illustrated in FIG. 8. For illustration purposes only, received energy is shown reflected from the shaped reflector 102 and directed only to the first spaced feedhorn 106. Under normal conditions, received energy would also be reflected from the shaped reflector and directed to the second spaced feedhorn 108. The shape of the reflector 102 is employed for approximately achieving a cosecant-squared antenna pattern and will be described hereinafter. It should be noted that radar systems known in the past employed a shaped reflector approximating a cosecant-squared antenna pattern for tracking a target signal; however, the technique described below has not been previously employed for measuring the height of a tracked target. The shaped reflector 102 has both a transmitting function and a receiving function. In the transmitting mode, the reflector 102 focuses energy received from the feedhorn 106 so that most of the transmitted energy is directed towards the horizon, with a smaller portion being directed to those elevation angles addressed between (-5°) and (+25°). Under these conditions, the zero axis is usually close to the horizon because most radar facilities are usually located on an elevated plane. Therefore, the (-5°) portion of the coverage extends below the horizon. Typically, the shape of the reflector 102 is such that the resulting antenna pattern is a cosecant-squared pattern. Although other reflector shapes may be employed, the reflector shape as illustrated in FIG. 8 provides a cosecant-squared shape suitable for the example described herein.
Therefore, in the receiving mode, energy reflected from the target 104 at elevation angles φ of interest is received by the shaped reflector 102 and directed toward the feedhorns 106, 108. The first and second feedhorns 106, 108 are separated only by a few wavelengths of the received energy. The purpose of the dual feedhorns is to produce an antenna pattern which is a function of the elevation angle φ. Therefore, the elevation angle φ of the tracked target can be determined by measuring the electrical phase angle and amplitude difference between the feedhorns. This is accomplished by measuring the electrical phase angle and amplitude difference of the energy received by the shaped reflector 102 from the returned signal reflected by the target. It should be noted that the electrical phase angle and amplitude difference between the feedhorns 106, 108 of the received reflected energy can be measured in many ways well known in the art.

For illustration purposes, the energy received between the first spaced feedhorn 106 and the shaped reflector 102 spans an angle α. One extreme of the energy received from the vicinity of the horizon is reflected from approximately the center of the shaped reflector and directed toward the feedhorn 106, as is shown in FIG. 8. The other extreme of the energy received from the shaped reflector at the spaced feedhorn 106 is reflected at an angle η which is larger than an angle γ. Because of the curvature designed into the lower portion of the shaped reflector 102, the energy received as a second reflected signal 112 forms an angle β with the energy received as a first reflected signal 110. The angle β lies between the two extremes of the energy received from the first reflected signal 110 and the second reflected signal 112. It should be noted that the angle α is smaller than the angle β located between the deflected waves of received energy when the energy is received by or transmitted from the first spaced feedhorn 106. Therefore, since angle β is greater than angle α and the angle β represents the energy deflected from the shaped reflector 102, a greater range of elevation angle coverage is provided than from the coverage available from a standard parabolic reflector or from the feedhorns facing directly into the horizon. The fourth angle γ is located between the angle α and the angle β. The angles γ and η are relevant in shaping the cosecant-squared beam and assist in achieving the extended elevation angle coverage. The angle γ must be smaller than the angle η for properly shaping the cosecant-squared pattern, as the relationship between these angles is well known in the art.

The feedhorns 106, 108 are separated by only a few wavelengths of the returned signal energy, which would normally cause elevation angle ambiguities over a reasonable range of space for a straight interferometer type of radar system. However, the presence of the shaped reflector 102 restricts the elevation angle φ as seen by the pair of feedhorns and permits the angle β region of space (for example, 30°) to be translated into the much smaller angle α, avoiding elevation angle ambiguities. The angle α, in turn, translates into two times 360° of electrical phase difference between the feedhorns: the first 360° occurs for the angle β typically between (-5°) and (+5°), where the low beam return signal has the greater amplitude in the region of the toe of the low beam, while the second 360° occurs for the angle β typically between (+5°) and (+25°), where the high beam return signal has the greater amplitude, as is clearly shown in FIGS. 9 and 10.
This clearly illustrates that the curvature of the shaped reflector 102 does indeed present the feedhorns 106, 108 with an electromagnetic wavefront more nearly perpendicular than if the received energy had directly impinged on the feedhorns.

The characteristic phase and amplitude of the ground based radar system 100 with the two feedhorns 106, 108 will now be described. Under normal operating conditions, the feedhorn 106 transmits energy to be radiated to the shaped reflector 102 and also receives a returned signal therefrom. Because there are two feedhorns, there are correspondingly two reference curves representing the elevation angle in degrees plotted against the amplitude gain of the return signal in decibels. The two curves are represented by a high beam curve 200 and a low beam curve 202, as is clearly indicated on the graphical illustration of FIG. 9. Each of the curves 200 and 202 represents the energy received from one of the feedhorns 106, 108. Since the feedhorns are displaced from one another by a few wavelengths of the received energy, the high beam curve 200 is correspondingly displaced from the low beam curve 202. In general, the low beam curve and the high beam curve each represent the returned energy which is directed at the shaped reflector 102. The amplitude of the returned signal received by each of the respective feedhorns is determined by the energy of the received signal collected by the shaped reflector and directed back to the feedhorns.

One of the novel features of the instant invention is that if the electrical phase angle difference between the high beam curve 200 and the low beam curve 202 is known as a function of the elevation angle of the received energy, the elevation angle and hence the height of the target may be determined. Therefore, in furtherance of the graphical analysis, the elevation angle φ in degrees is plotted against the electrical phase difference between the high beam curve and the low beam curve in degrees, as is clearly illustrated in FIG. 10. The following discussion requires a close inspection of FIGS. 9 and 10 collectively. It should be noted that when the low beam return signal has a greater amplitude than the high beam return signal, there is a monotonic electrical phase change of 360° over the corresponding range of elevation angles. Similarly, when the high beam return signal has a greater amplitude, the electrical phase change is 360° over the corresponding elevation angle change. By referring to the high beam curve 200 with respect to the low beam curve 202 as illustrated in FIGS. 9 and 10, it will be noted that the phase angle between the two electrical signals can have a total maximum value of 2π radians or 360°. When the low beam return signal has the greater amplitude, the electrical phase difference between the high and low beams may be shown with a linear curve 204 extending from (-180°) to (+180°), as illustrated in FIG. 10, the curve 204 ranging from the (-5°) to the (+5°) elevation angles. Thus, the change in electrical phase from (-180°) to (+180°) is a function of the elevation angle φ of the target T. Likewise, when the high beam return signal reaches a magnitude which is greater than the low beam return signal, the two curves cross, as is illustrated in FIG. 9. Under these conditions, the electrical phase difference between the high and low beams may be shown by a second linear curve 206, as is illustrated in FIG. 10. As can be seen, the second linear curve 206 ranges from (-180°) to (+180°), another 360° of phase change, while the corresponding elevation angle change ranges from (+5°) to (+25°) along the vertical axis, as illustrated in FIG. 10. A crossover point 208 is defined as that point at which the high beam curve 200 intersects the low beam curve 202 on FIG. 9.
It should be noted that the crossover point 208 lies along the (+5°) elevation angle and is defined as that point where the high beam return signal is equivalent to the low beam return signal in amplitude.

An example will serve to demonstrate the relationship between FIGS. 9 and 10. For a 0° elevation angle, the amplitude of the high beam return signal shown on FIG. 9 is approximately 13 decibels while the amplitude of the low beam return signal is approximately 23 decibels. These measurements indicate that the low beam return signal has the greater amplitude, indicated by the difference of 10 decibels. Upon further inspection, it is noted that the amplitude of the low beam return signal is greater than the amplitude of the high beam return signal for elevation angles between (-5°) and (+5°) relative to the antenna. In addition, for the case in which the returned signal is received at an elevation angle φ of 0°, the electrical phase difference is 0° (after the phase and amplitude differences have been measured). Thus, in comparing the low beam curve 202 to the high beam curve 200, the reading of the amplitude of the gain along the horizontal axis indicates the strength of the returned signal to the first and second feedhorns 106, 108 from two separate ranges of elevation angles.

By comparing the two curves in the elevation angle range from (+5°) upward, the amplitude readings on the high beam curve 200 are consistently greater than the corresponding readings on the low beam curve 202. After the two curves intersect at the crossover point 208, the angles of elevation exceed (+5°) and the high beam return signal remains the greater as the elevation angle increases. This phenomenon is illustrated graphically at the point at which the character of each curve changes. That location on each curve is referred to as the toe of the curve and is illustrated as a high beam toe 210 and a low beam toe 212.

Thus, knowing the electrical phase angle between the high beam curve 200 and the low beam curve 202 from FIG. 10 (measured from the returned signals of a target 104 at an unknown elevation angle) and the amplitude relationship between the high beam curve and the low beam curve from FIG. 9 (measured from the returned signals of the same target), the elevation angle φ can be determined over the range of 30° described in the foregoing example. By knowing the amplitude relationship between the high beam curve and the low beam curve in FIG. 9, the elevation angle of the returned signals may be determined by employing both FIGS. 9 and 10. The elevation angles of interest range over twenty to thirty degrees, (-5°) to (+25°) in FIGS. 9 and 10, which is typical for a dual beam antenna. However, this method can be modified for determining the elevation angle of signals received from a tracked target at greater than 30°, even though the foregoing example implied that elevation angles of greater than 30° were not of interest.

After the elevation angle φ of the tracked target is known, the azimuth angle θ (determined from rotation as shown in FIG. 2) and the range R may be combined with the elevation angle φ for determining the three dimensional position in space of the target T as shown in FIG. 7.

It should be noted that dual beam antennas have been known in the past but have not been previously employed for measuring the elevation angle φ of a tracked target T. Originally, the dual beam antenna was designed for reducing the effects that extraneous objects in space (referred to as clutter) had on targets flying close to the horizon. The clutter tended to interfere with the transmission and the return of the radar signals. Further, it should be noted that the dual beam antenna has not been optimized for use in measuring the elevation angle φ.
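For illustration only — the patent gives no algorithm here, and the numerical break points below (-5°, +5°, +25°, ±180°) are simply the values read off FIGS. 9 and 10 as described above, so they should be treated as placeholders rather than patent data — the two linear branches of FIG. 10 amount to a small lookup of this kind:

    def elevation_from_measurement(phase_deg, low_gain_db, high_gain_db):
        """Map a measured phase difference (degrees, -180..+180) plus the beam
        amplitude comparison onto an elevation angle, following the two linear
        branches (curves 204 and 206) described for FIG. 10."""
        if low_gain_db >= high_gain_db:
            # low-beam branch: -180..+180 deg of phase spans -5..+5 deg of elevation
            return -5.0 + (phase_deg + 180.0) * (10.0 / 360.0)
        else:
            # high-beam branch: -180..+180 deg of phase spans +5..+25 deg of elevation
            return 5.0 + (phase_deg + 180.0) * (20.0 / 360.0)

    # Worked example from the text: at 0 deg elevation the low beam (~23 dB)
    # exceeds the high beam (~13 dB) and the measured phase difference is ~0 deg.
    print(elevation_from_measurement(0.0, 23.0, 13.0))   # -> 0.0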
However, the method of measurement described herein could be utilized with existing or slightly modified dual beam antennas to measure height. In addition, once the elevation angle φ and the range R to the target are known, the target's height can be determined by employing either a flat earth or a round earth geometry. This simple geometry is illustrated in FIG. 6. With the addition of the azimuth angle θ, the three dimensional position in space of the tracked target may be calculated.

In general, a closed form equation for the ground based radar system 100 employing the shaped-beam antenna pattern does not exist. The design of the antenna pattern was completed with the assistance of computer simulation in which a form of incremental analysis and empirical approximation was utilized. Classical closed form equations describing the design of the antenna system disclosed herein have not been derived at this time, and thus have not been included.

The dichotomy that existed in radar systems of the past has been between the range of elevation angle coverage and the amplitude gain of the returned signal. Thus, the coverage of the elevation angle φ was limited with a standard parabolic reflector, which had a higher amplification gain of the returned signal. However, if the feedhorns of previous radar systems were the primary receiving elements, the amplification gain of the received signal was poor although the elevation angle coverage may have been greater. By combining the shaped reflector 102 with the closely spaced feedhorns 106, 108, a compromise is created in which the range of the elevation angle coverage is increased over the range previously available from a standard parabolic reflector. Further, the combination of the shaped reflector and closely spaced feedhorns also provides an increased returned signal gain over similar types of prior radar antenna designs in which the feedhorns were the primary receiving elements (as in the earlier described interferometer method). In essence, the shaped reflector 102 permits attaining a higher amplification gain than if the feedhorns 106, 108 were utilized without the benefit of a reflector, with a simultaneous increase in elevation angle coverage over that previously provided by a standard parabolic reflector. Also, the shaped reflector is useful in achieving the cosecant-squared antenna pattern.

Another advantage associated with the instant invention is that of solving a problem of the past which occurred when the feedhorns were positioned to face directly towards the horizon. Without a reflector to limit the size of the angle in which the returned signal could be received, the feedhorns could receive signals at the angle β. By its very nature, the angle β was very large and included ambiguities. In particular, if the spacing "d" between the feedhorns was greater than λ/2, ambiguities would exist. Keep in mind that it is the electrical phase difference between the high beam curve 200 and the low beam curve 202 of FIG. 9 which is utilized to determine the elevation angle φ for a particular target. Therefore, an ambiguity is defined as the situation in which a particular electrical phase difference between the return signals may result from more than a single elevation angle. An example of this problem is shown by the fact that in equation (3), Δe can assume values only between ±π and that there are many combinations of (d), (λ) and (φ) which will provide the same value of Δe.
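Equation (3) is not reproduced in this excerpt; assuming the standard two-element interferometer relation Δe = 2π(d/λ)sin φ (an assumption, not a quotation from the patent), the ambiguity described above is easy to exhibit numerically:

    import numpy as np

    def wrapped_phase(phi_deg, d_over_lam):
        # Assumed relation: delta_e = 2*pi*(d/lambda)*sin(phi), wrapped to +/- pi
        de = 2 * np.pi * d_over_lam * np.sin(np.radians(phi_deg))
        return (de + np.pi) % (2 * np.pi) - np.pi

    d_over_lam = 3.0                                    # feedhorns "a few wavelengths" apart
    print(wrapped_phase(5.0, d_over_lam))               # ~ 1.64 rad
    print(wrapped_phase(24.87, d_over_lam))             # ~ 1.64 rad: two elevations, one reading

    # The shaped reflector removes the ambiguity by compressing the received
    # directions into the small angle alpha; with d = 3*lambda the phase stays
    # within +/- pi only while |alpha| is below about:
    print(np.degrees(np.arcsin(1 / (2 * d_over_lam))))  # ~ 9.6 degrees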
Ideally, a monotonic function is preferred which provides a unique and discrete value of a dependent variable for each single value of an independent variable. By employing the shaped reflector 102, a single value of the elevation angle φ will exist for each value of electrical phase difference between the high beam return signal and the low beam return signal received by the feedhorns. Even though the feedhorns 106, 108 are separated by a few wavelengths of received energy, the presence of the shaped reflector changes the angle β into the angle α for eliminating the ambiguities. Under these conditions, when the angle α is substituted for the elevation angle φ in equation 3, a spacing "d" greater than λ/2 results in an electrical phase angle Δe between the feedhorns of less than or equal to π. Thus, the shaped reflector must be designed with the constraint that the angle α is less than the angle β and that spacings "d" greater than λ/2 result in a phase angle of less than or equal to π. This is accomplished by the geometry of the shaped reflector and the spacing of the feedhorns. Therefore, in addition to the advantages recited over the radar systems of the past, the shaped reflector provides a simplified means of solving ambiguity problems resulting in an economic construction. It should be noted that alternative embodiments of the present invention have been considered and include employing the present invention as an airborne radar system for tracking targets located on the ground. The airborne radar system would operate in the reverse mode as compared to the ground based radar system 100. Airborne feedhorns would provide energy to be radiated by an airborne shaped reflector which would transmit radiated energy to tracked targets on the ground. Signals returned from tracked targets on the ground would be collected by the airborne shaped reflector and delivered to the feedhorns for processing. Assuming the existence of a pair of feedhorns separated by only a few wavelengths of the received energy, a pair of curves such as the high beam curve and the low beam curve appearing on FIG. 9 could be plotted against the elevation angle in degrees. In similar form, the relationship between the curves of the returned signals could be mapped into linear curves representing elevation angle plotted against the electrical phase difference of the returned signals. Then the elevation angle of the tracked target on the ground could be determined and employed for determining a three-dimensional position. From the foregoing, it will be appreciated that the radar system of the present invention permits the shaped reflector 102 in combination with the pair of spaced feedhorns 106, 108 to collect signals returned from the target 104 increasing the range of coverage of the target elevation angle φ and significantly improving the amplified gain of the returned signals while simultaneously eliminating elevation angle ambiguities thereby permitting the targets to be tracked at large elevation angles. Further, the radar system employs a construction which approximates a cosecant-squared antenna pattern, eliminates elevation angle ambiguities, and is economical to manufacture compared to three dimensional radar systems of the past. Since large elevation angles φ may be measured, the heights and the three-dimensional positions of targets previously ignored by single or dual beam two dimensional radar antennas may now be tracked. 
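Returning to the height determination mentioned earlier (FIG. 6): once φ is resolved without ambiguity and the range R is known, the height follows from simple geometry. The sketch below uses the ordinary flat-earth relation and the standard spherical-earth radar height equation, neither of which is quoted from the patent:

    import math

    R_E = 6371e3   # mean Earth radius in metres (refraction corrections omitted)

    def height_flat(range_m, elev_deg, antenna_h=0.0):
        """Flat-earth geometry: h = R*sin(phi) + antenna height."""
        return range_m * math.sin(math.radians(elev_deg)) + antenna_h

    def height_round(range_m, elev_deg, antenna_h=0.0):
        """Spherical-earth geometry (standard radar height equation)."""
        R, phi = range_m, math.radians(elev_deg)
        return math.sqrt(R**2 + R_E**2 + 2 * R * R_E * math.sin(phi)) - R_E + antenna_h

    print(height_flat(100e3, 10.0))    # ~17.4 km for a 100 km range at 10 deg elevation
    print(height_round(100e3, 10.0))   # ~18.1 km; earth curvature adds ~0.8 km at this range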
While a particular form of the invention has been illustrated and described, it will be apparent that various modifications can be made without departing from the spirit and scope of the invention. Accordingly, it is not intended that the invention be limited, except as by the appended claims.
{"url":"http://www.google.com/patents/US5557282?ie=ISO-8859-1","timestamp":"2014-04-18T19:51:39Z","content_type":null,"content_length":"119636","record_id":"<urn:uuid:f141347e-7997-4377-8665-ccd4135377ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: finding all dominator trees

From: "Amir Michail" <amichail@gmail.com>
Newsgroups: comp.compilers
Date: 3 Dec 2006 21:32:18 -0500
Organization: Compilers Central
References: 06-11-096 06-11-117 06-11-125 06-11-131
Keywords: analysis, question
Posted-Date: 03 Dec 2006 21:32:18 EST

Chris F Clark wrote:
> ...
> Yes, there are more efficient algorithms for computing the sets of
> dominators in a graph considering each vertex in the graph as a root.
> Both reachable vertexes and strongly-connected-components (cycles)* are
> part of the algorithm.
> First, if there is one unique vertex that can reach all other vertexes
> (verticies if you prefer) in the graph, consider that vertex the root.
> The dominator algorithm given that root will calculate the correct
> dominators (the dominator tree) for every vertex (v1) in the graph
> assuming that each other vertex (v2) is the root. That is, for any
> target vertex v1 and root vertex v2, some vertex in the dominator tree
> of v1 will be the dominator of v1 given the root v2.

I don't understand this solution. How would it work in this example?

A -> B <-> C

The dominator tree for A:
The dominator tree for B:
The dominator tree for C:
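For what it is worth, a brute-force computation (nothing like the efficient algorithm discussed above, and restricted to vertices reachable from each chosen root) gives the three dominator trees for this example:

    # Naive iterative dominator computation for the example graph A -> B <-> C.
    def dominators(graph, root):
        # collect vertices reachable from the root
        reachable, stack = {root}, [root]
        while stack:
            for s in graph.get(stack.pop(), []):
                if s not in reachable:
                    reachable.add(s)
                    stack.append(s)
        preds = {v: [u for u in reachable if v in graph.get(u, [])] for v in reachable}
        dom = {v: set(reachable) for v in reachable}
        dom[root] = {root}
        changed = True
        while changed:
            changed = False
            for v in reachable - {root}:
                # every reachable non-root vertex has at least one reachable predecessor
                new = {v} | set.intersection(*(dom[p] for p in preds[v]))
                if new != dom[v]:
                    dom[v], changed = new, True
        return dom

    g = {'A': ['B'], 'B': ['C'], 'C': ['B']}
    for root in 'ABC':
        print(root, dominators(g, root))

Rooted at A this gives the tree A -> B -> C (idom(B) = A, idom(C) = B); rooted at B it gives B -> C, with A unreachable; rooted at C it gives C -> B, again with A unreachable.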
{"url":"http://compilers.iecc.com/comparch/article/06-12-025","timestamp":"2014-04-20T08:19:25Z","content_type":null,"content_length":"7209","record_id":"<urn:uuid:c7d95d92-1f7f-47b4-b42d-f7a0523313ec>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Least Upper Bound Proof. I have no Idea D:

September 10th 2008, 02:14 PM — #1
Let S be a bounded set that is not empty. Prove that sup(S) is a boundary point of S.
can someone help me complete this proof? i've been trying for a long time and i can't come up with anything other than: if S is a (not empty) set of real numbers and S has an upper bound y, then there is a least upper bound of the set S. That least upper bound is also called the supremum of S.
please help, i really need it
Last edited by arturdo968; September 10th 2008 at 04:40 PM. Reason: title

September 10th 2008, 03:26 PM — #2
I think that you need help with the idea of boundary points. The point $b$ is a boundary point of $S$ if and only if $\left( {\forall \varepsilon > 0} \right)\left[ {\left( {\exists x \in S} \right)\left( {\exists y \notin S} \right)\left[ {\left\{ {x,y} \right\} \subseteq \left( {b - \varepsilon ,b + \varepsilon } \right)} \right]} \right]$. In other words, every $\varepsilon$-neighborhood of $b$ contains a point of $S$ and a point not in $S$. Using the definition of supremum, the proof is simple.

September 10th 2008, 03:39 PM — #3
hmm. see i'm not very good with this. i'm just starting off, and i'm having a hard time with the writing of proofs. it's hard. thank you though. the definition of a supremum is the LUB of a function over its domain, right?
Last edited by arturdo968; September 10th 2008 at 04:36 PM.

September 10th 2008, 04:37 PM — #4
would it help if i were to plug anything in?

September 10th 2008, 06:12 PM — #5
errr, anyone? i hate to post in a row like this but i really need help
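One way to finish the argument outlined in reply #2 — a sketch of the standard approach, not necessarily the intended one — is as follows. Let $b = \sup S$ and let $\varepsilon > 0$ be arbitrary.

1. Since $b$ is an upper bound of $S$, every $x \in S$ satisfies $x \le b < b + \varepsilon/2$, so the point $b + \varepsilon/2$ lies in $(b-\varepsilon, b+\varepsilon)$ but does not belong to $S$.
2. Since $b$ is the least upper bound, $b - \varepsilon$ is not an upper bound of $S$, so there exists $x \in S$ with $b - \varepsilon < x \le b$; this $x$ lies in $(b-\varepsilon, b+\varepsilon)$ and belongs to $S$.

Hence every $\varepsilon$-neighborhood of $b$ contains a point of $S$ and a point not in $S$, which is exactly the definition quoted above, so $\sup S$ is a boundary point of $S$. (Only non-emptiness and boundedness above are used.)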
{"url":"http://mathhelpforum.com/calculus/48548-least-upper-bond-proof-i-have-no-idea-d.html","timestamp":"2014-04-16T10:28:48Z","content_type":null,"content_length":"41813","record_id":"<urn:uuid:9da7db79-0d6e-4184-873a-7f6fdf2db393>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex integral

June 3rd 2008, 01:05 AM — #1
Problem with a complex integral. Consider the integral: [image not shown] with the branch -pi/2 < phi < 3pi/2 and the indented contour at z = 0 and z = 1 (circular contour in the upper half plane).
a) Show that this integral can be written in terms of the integral: [image not shown] — no problem with this, I have found it.
b) Evaluate the second integral in part a) and find the value of the original integral: $\frac{\pi\sin(a\pi/3)}{3\sin(\pi/3)\sin[\pi(a+1)/3]}$
With this last integral I have a problem. I used the contour z = x (0 < x < R), the sector arc z = R e^(i phi) (0 < phi < 2pi/3) and the line z = x e^(2 pi i/3) (0 < x < R). I can't find the result. Is the chosen contour wrong? Can somebody give me a suggestion? Thanks

June 3rd 2008, 04:49 PM — #2 (Global Moderator)
I do not see how to do this, not even sure if it converges. It seems to me you are actually asking $\int_0^{\infty} \frac{x^a - 1}{x^3+1}\,dx$, so that the denominator is defined. Also it might help if you say what $a$ is? Is it $0<a<1$? If it is like how I describe then write $\int_0^{\infty} \frac{x^a}{x^3+1}\, dx - \int_0^{\infty} \frac{dx}{x^3+1}$. Now use the fact that $\int_0^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}$, and we just have to compute $\int_0^{\infty} \frac{x^a}{x^3+1}\, dx$. I think this is done by using a keyhole contour.

June 4th 2008, 01:51 PM — #3
Yes, it is 0 < a < 1. I don't think it can be done by the keyhole contour.
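A quick numerical sanity check of the contour result suggested in reply #2 — assuming the standard formula $\int_0^{\infty} \frac{x^a}{x^3+1}\,dx = \frac{\pi}{3\sin(\pi(a+1)/3)}$ for $-1 < a < 2$, which is my assumption and not something stated in the thread — can be run before chasing the contour details:

    import numpy as np
    from scipy.integrate import quad

    def closed_form(a):
        # (pi/3) / sin(pi*(a+1)/3): the assumed contour-integration result
        return (np.pi / 3) / np.sin(np.pi * (a + 1) / 3)

    for a in (0.25, 0.5, 0.75):
        numeric, _ = quad(lambda x: x**a / (x**3 + 1), 0, np.inf)
        print(a, numeric, closed_form(a))   # the two columns should agree

    # a = 0 reproduces 2*pi/(3*sqrt(3)) ~ 1.209, the value quoted in reply #2.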
{"url":"http://mathhelpforum.com/calculus/40461-complex-integral.html","timestamp":"2014-04-18T02:05:35Z","content_type":null,"content_length":"38131","record_id":"<urn:uuid:c8f304de-96e7-4a03-9f7e-fc055eaec440>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Energy Conservation on viXra log Fundamental Physics 2013: What is the Big Picture? November 26, 2013 2013 has been a great year for viXra. We already have more than 2000 new papers taking the total to over 6000. Many of them are about physics but other areas are also well covered. The range is bigger and better than ever and could never be summarised, so as the year draws to its end here instead is a snapshot of my own view of fundamental physics in 2013. Many physicists are reluctant to speculate about the big picture and how they see it developing. I think it would be useful if they were more willing to stick their neck out, so this is my contribution. I don’t expect much agreement from anybody, but I hope that it will stimulate some interesting discussion and thoughts. If you don’t like it you can always write your own summaries of physics or any other area of science and submit to viXra. The discovery of the Higgs boson marks a watershed moment for fundamental physics. The standard model is complete but many mysteries remain. Most notably the following questions are unanswered and appear to require new physics beyond the standard model: • What is dark matter? • What was the mechanism of cosmic inflation? • What mechanism led to the early production of galaxies and structure? • Why does the strong interaction not break CP? • What is the mechanism that led to matter dominating over anti-matter? • What is the correct theory of neutrino mass? • How can we explain fine-tuning of e.g. the Higgs mass and cosmological constant? • How are the four forces and matter unified? • How can gravity be quantised? • How is information loss avoided for black holes? • What is the small scale structure of spacetime? • What is the large scale structure of spacetime? • How should we explain the existence of the universe? It is not unreasonable to hope that some further experimental input may provide clues that lead to some new answers. The Large Hadron Collider still has decades of life ahead of it while astronomical observation is entering a golden age with powerful new telescopes peering deep into the cosmos. We should expect direct detection of gravitational waves and perhaps dark matter, or at least indirect clues in the cosmic ray spectrum. But the time scale for new discoveries is lengthening and the cost is growing. It is might be unrealistic to imagine the construction of new colliders on larger scales than the LHC. A theist vs atheist divide increasingly polarises Western politics and science. It has already pushed the centre of big science out of the United States over to Europe. As the jet stream invariably blows weather systems across the Atlantic, so too will come their political ideals albeit at a slower pace. It is no longer sufficient to justify fundamental science as a pursuit of pure knowledge when the men with the purse strings see it as an attack on their religion. The future of fundamental experimental science is beginning to shift further East and its future hopes will be found in Asia along with the economic prosperity that depends on it. The GDP of China is predicted to surpass that of the US and the EU within 5 years. But there is another avenue for progress. While experiment is limited by the reality of global economics, theory is limited only by our intellect and imagination. The beasts of mathematical consistency have been harnessed before to pull us through. We are not limited by just what we can see directly, but there are many routes to explore. 
Without the power of observation the search may be longer, but the constraints imposed by what we have already seen are tight. Already we have strings, loops, twistors and more. There are no dead ends. The paths converge back together taking us along one main highway that will lead eventually to an understanding of how nature works at its deepest levels. Experiment will be needed to show us what solutions nature has chosen, but the equations themselves are already signposted. We just have to learn how to read them and follow their course. I think it will require open minds willing to move away from the voice of their intuition, but the answer will be built on what has come before. Thirteen years ago at the turn of the millennium I thought it was a good time to make some predictions about how theoretical physics would develop. I accept the mainstream views of physicists but have unique ideas of how the pieces of the jigsaw fit together to form the big picture. My millennium notes reflected this. Since then much new work has been done and some of my original ideas have been explored by others, especially permutation symmetry of spacetime events (event symmetry), the mathematical theory of theories, and multiple quantisation through category theory. I now have a clearer idea about how I think these pieces fit in. On the other hand, my idea at the time of a unique discrete and natural structure underlying physics has collapsed. Naturalness has failed in both theory and experiment and is now replaced by a multiverse view which explains the fine-tuning of the laws of the universe. I have adapted and changed my view in the face of this experimental result. Others have refused to. Every theorist working on fundamental physics has a set of ideas or principles that guides their work and each one is different. I do not suppose that I have a gift of insight that allows me to see possibilities that others miss. It is more likely that the whole thing is a delusion, but perhaps there are some ideas that could be right. In any case I believe that open speculation is an important part of theoretical research and even if it is all wrong it may help others to crystallise their own opposing views more clearly. For me this is just a way to record my current thinking so that I can look back later and see how it succeeded or changed. The purpose of this article then is to give my own views on a number of theoretical ideas that relate to the questions I listed. The style will be pedagogical without detailed analysis, mainly because such details are not known. I will also be short on references, after all nobody is going to cite this. Here then are my views. Causality has been discussed by philosophers since ancient times and many different types of causality have been described. In terms of modern physics there are only two types of causality to worry about. Temporal causality is the idea that effects are due to prior causes, i.e. all phenomena are caused by things that happened earlier. Ontological causality is about explaining things in terms of simpler principles. This is also known as reductionism. It does not involve time and it is completely independent of temporal causality. What I want to talk about here is temporal causality. Temporal causality is a very real aspect of nature and it is important in most of science. Good scientists know that it is important not to confuse correlation with causation. Proper studies of cause and effect must always use a control to eliminate this easy mistake. 
Many physicists, cosmologists and philosophers think that temporal causality is also important when studying the cosmological origins of the universe. They talk of the evolving cosmos, eternal inflation, or numerous models of pre-big-bang physics or cyclic cosmologies. All of these ideas are driven by thinking in terms of temporal causality. In quantum gravity we find Causal Sets and Causal Dynamical Triangulations, more ideas that try to build in temporal causality at a fundamental level. All of them are misguided. The problem is that we already understand that temporal causality is linked firmly to the thermodynamic arrow of time. This is a feature of the second law of thermodynamics, and thermodynamics is a statistical theory that emerges at macroscopic scales from the interactions of many particles. The fundamental laws themselves can be time reversed (along with CP to be exact). Physical law should not be thought of in terms of a set of initial conditions and dynamical equations that determine evolution forward in time. It is really a sum over all possible histories between past and future boundary states. The fundamental laws of physics are time symmetric and temporal causality is emergent. The origin of time’s arrow can be traced back to the influence of the big bang singularity where complete symmetry dictated low entropy. The situation is even more desperate if you are working on quantum gravity or cosmological origins. In quantum gravity space and time should also be emergent, then the very description of temporal causality ceases to make sense because there is no time to express it in terms of. In cosmology we should not think of explaining the universe in terms of what caused the big bang or what came before. Time itself begins and ends at spacetime singularities. When I was a student around 1980 symmetry was a big thing in physics. The twentieth century started with the realisation that spacetime symmetry was the key to understanding gravity. As it progressed gauge symmetry appeared to eventually explain the other forces. The message was that if you knew the symmetry group of the universe and its action then you knew everything. Yang-Mills theory only settled the bosonic sector but with supersymmetry even the fermionic side would follow, perhaps uniquely. It was not to last. When superstring theory replaced supergravity the pendulum began its swing back taking away symmetry as a fundamental principle. It was not that superstring theory did not use symmetry, it had the old gauge symmetries, supersymmetries, new infinite dimensional symmetries, dualities, mirror symmetry and more, but there did not seem to be a unifying symmetry principle from which it could be derived. There was even an argument called Witten’s Puzzle based on topology change that seemed to rule out a universal symmetry. The spacetime diffeomorphism group is different for each topology so how could there be a bigger symmetry independent of the solution? The campaign against symmetry strengthened as the new millennium began. Now we are told to regard gauge symmetry as a mere redundancy introduced to make quantum field theory appear local. Instead we need to embrace a more fundamental formalism based on the amplituhedron where gauge symmetry has no presence. While I embrace the progress in understanding that string theory and the new scattering amplitude breakthroughs are bringing, I do not accept the point of view that symmetry has lost its role as a fundamental principle. 
In the 1990s I proposed a solution to Witten’s puzzle that sees the universal symmetry for spacetime as permutation symmetry of spacetime events. This can be enlarged to large-N matrix groups to include gauge theories. In this view spacetime is emergent like the dynamics of a soap bubble formed from intermolecular interaction. The permutation symmetry of spacetime is also identified with the permutation symmetry of identical particles or instantons or particle states. My idea was not widely accepted even when shortly afterwards matrix models for M-theory were proposed that embodied the principle of event symmetry exactly as I envisioned. Later the same idea was reinvented in a different form for quantum graphity with permutation symmetry over points in space for random graph models, but still the fundamental idea is not widely regarded. While the amplituhedron removes the usual gauge theory it introduces new dual conformal symmetries described by Yangian algebras. These are quantum symmetries unseen in the classical Super-Yang-Mills theory but they combine permutations symmetry over states with spacetime symmetries in the same way as event-symmetry. In my opinion different dual descriptions of quantum field theories are just different solutions to a single pregeometric theory with a huge and pervasive universal symmetry. The different solutions preserve different sectors of this symmetry. When we see different symmetries in different dual theories we should not conclude that symmetry is less fundamental. Instead we should look for the greater symmetry that unifies them. After moving from permutation symmetry to matrix symmetries I took one further step. I developed algebraic symmetries in the form of necklace Lie algebras with a stringy feel to them. These have not yet been connected to the mainstream developments but I suspect that these symmetries will be what is required to generalise the Yangian symmetries to a string theory version of the amplituhedron. Time will tell if I am right. We know so much about cosmology, yet so little. The cosmic horizon limits our view to an observable universe that seems vast but which may be a tiny part of the whole. The heat of the big bang draws an opaque veil over the first few hundred thousand years of the universe. Most of the matter around us is dark and hidden. Yet within the region we see the ΛCDM standard model accounts well enough for the formation of galaxies and stars. Beyond the horizon we can reasonably assume that the universe continues the same for many more billions of light years, and the early big bang back to the first few minutes or even seconds seems to be understood. Cosmologists are conservative people. Radical changes in thinking such as dark matter, dark energy, inflation and even the big bang itself were only widely accepted after observation forced the conclusion, even though evidence built up over decades in some cases. Even now many happily assume that the universe extends to infinity looking the same as it does around here, that the big bang is a unique first event in the universe, that space-time has always been roughly smooth, that the big bang started hot, and that inflation was driven by scalar fields. These are assumptions that I question, and there may be other assumptions that should be questioned. These are not radical ideas. They do not contradict any observation, they just contradict the dogma that too many cosmologist live by. 
The theory of cosmic inflation was one of the greatest leaps in imagination that has advanced cosmology. It solved many mysteries of the early universe at a stroke and Its predictions have been beautifully confirmed by observations of the background radiation. Yet the mechanism that drives inflation is not understood. It is assumed that inflation was driven by a scalar inflaton field. The Higgs field is mostly ruled out (exotic coupling to gravity not withstanding), but it is easy to imagine that other scalar fields remain to be found. The problem lies with the smooth exit from the inflationary period. A scalar inflaton drives a DeSitter universe. What would coordinate a graceful exit to a nice smooth universe? Nobody knows. I think the biggest clue is that the standard cosmological model has a preferred rest frame defined by commoving galaxies and the cosmic background radiation. It is not perfect on small scales but over hundreds of millions of light years it appears rigid and clear. What was the origin of this reference frame? A DeSitter inflationary model does not possess such a frame, yet something must have co-ordinated its emergence as inflation ended. These ideas simply do not fit together if the standard view of inflation is correct. In my opinion this tells us that inflation was not driven by a scalar field at all. The Lorentz geometry during the inflationary period must have been spontaneously broken by a vector field with a non-zero component pointing in the time direction. Inflation must have evolved in a systematic and homogenous way through time while keeping this fields direction constant over large distances smoothing out any deviations as space expanded. The field may have been a fundamental gauge vector or a composite condensate of fermions with a non-zero vector expectation value in the vacuum. Eventually a phase transition ended the symmetry breaking phase and Lorentz symmetry was restored to the vacuum, leaving a remnant of the broken symmetry in the matter and radiation that then filled the cosmos. The required vector field may be one we have not yet found, but some of the required features are possessed by the massive gauge bosons of the weak interaction. The mass term for a vector field can provide an instability favouring timelike vector fields because the signature of the metric reverses sign in the time direction. I am by no means convinced that the standard model cannot explain inflation in this way, but the mechanism could be complicated to model. Another great mystery of cosmology is the early formation of galaxies. As ever more powerful telescopes have penetrated back towards times when the first galaxies were forming, cosmologists have been surprised to find active galaxies rapidly producing stars, apparently with supermassive black holes ready-formed at their cores. This contradicts the predictions of the cold dark matter model according to which the stars and black holes should have formed later and more slowly. The conventional theory of structure formation is very Newtonian in outlook. After baryogenesis the cosmos was full of gas with small density fluctuations left over from inflation. As radiation decoupled, these anomalies caused the gas and dark matter to gently coalesce under their own weight into clumps that formed galaxies. This would be fine except for the observation of supermassive black holes in the early universe. How did they form? 
I think that the formation of these black holes was driven by large scale gravitational waves left over from inflation rather than density fluctuations. As the universe slowed its inflation there would be parts that slowed a little sooner and other a little later. Such small differences would have been amplified by the inflation leaving a less than perfectly smooth universe for matter to form in. As the dark matter followed geodesics through these waves in spacetime it would be focused just as light waves on the bottom of a swimming pool is focused by surface waves into intricate light patterns. At the caustics the dark matter would come together as high speed to be compressed in structures along lines and surfaces. Large black holes would form at the sharpest focal points and along strands defined by the caustics. The stars and remaining gas would then gather around the black holes. Pulled in by their gravitation to form the galaxies. As the universe expanded the gravitational waves would fade leaving the structure of galactic clusters to mark where they had been. The greatest question of cosmology asks how the universe is structured on large scales beyonf the cosmic horizon. We know that dark energy is making the expansion of the universe accelerate so it will endure for eternity, but we do not know if it extends to infinity across space. Cosmologists like to assume that space is homogeneous on large scales, partly because it makes cosmology simpler and partly because homogeneity is consistent with observation within the observable universe. If this is assumed then the question of whether space is finite or infinite depends mainly on the local curvature. If the curvature is positive then the universe is finite. If it is zero or negative the universe is infinite unless it has an unusual topology formed by tessellating polyhedrons larger than the observable universe. Unfortunately observation fails to tell us the sign of the curvature. It is near zero but we can’t tell which side of zero it lies. This then is not a question I can answer but the holographic principle in its strongest form contradicts a finite universe. An infinite homogeneous universe also requires an explanation of how the big bang can be coordinated across an infinite volume. This leaves only more complex solutions in which the universe is not homogeneous. How can we know if we cannot see past the cosmic horizon? There are many homogeneous models such as the bubble universes of eternal inflation, but I think that there is too much reliance on temporal causality in that theory and I discount it. My preference is for a white hole model of the big bang where matter density decreases slowly with distance from a centre and the big bang singularity itself is local and finite with an outer universe stretching back further. Because expansion is accelerating we will never see much outside the universe that is currently visible so we may never know its true shape. It has long been suggested that the laws of physics are fine-tuned to allow the emergence of intelligent life. This strange illusion of intelligent design could be explained in atheistic terms if in some sense many different universes existed with different laws of physics. The observation that the laws of physics suit us would then be no different in principle from the observation that our planet suits us. Despite the elegance of such anthropomorphic reasoning many physicists including myself resisted it for a long time. Some still resist it. 
The problem is that the laws of physics show some signs of being unique according to theories of unification. In 2001 I like many thought that superstring theory and its overarching M-theory demonstrated this uniqueness quite persuasively. If there was only one possible unified theory with no free parameters, how could an anthropic principle be viable? At that time I preferred to think that fine-tuning was an illusion. The universe would settle into the lowest energy stable vacuum of M-theory and this would describe the laws of physics with no room for choice. The ability of the universe to support life would then just be the result of sufficient complexity. The apparent fine-tuning would be an illusion resulting from the fact that we see only one form of intelligent life so far. I imagined distant worlds populated by other forms of intelligence in very different environments from ours, based on other solutions to evolution making use of different chemical combinations and physical processes. I scoffed at science fiction stories where the alien life looked similar to us except for different skin textures or different numbers of limbs.

My opinion started to change when I learnt that string theory actually has a vast landscape of vacuum solutions and they can be stabilized to such an extent that we need not be living at the lowest energy point. This means that the fundamental laws of physics can be unique while different low energy effective theories can be realized as solutions. Anthropic reasoning was back on the table.

It is worrying to think that the vacuum is waiting to decay to a lower energy state at any place and moment. If it did so, an expanding sphere of energy would expand at the speed of light changing the effective laws of physics as it spread out, destroying everything in its path. Many times in the billions of years and billions of light years of the universe in our past light cone, there must have been neutron stars that collided with immense force and energy. Yet not once has the vacuum been toppled to bring doom upon us. The reason is that the energies at which the vacuum state was forged in the big bang are at the Planck scale, many orders of magnitude beyond anything that can be repeated in even the most violent events of astrophysics. It is the immense range of scales in physics that creates life and then allows it to survive.

The principle of naturalness was spelt out by 't Hooft in the 1980s, except he was too smart to call it a principle. Instead he called it a "dogma". The idea was that the mass of a particle or other physical parameters could only be small if they would be zero given the realisation of some symmetry. The smallness of fermion masses could thus be explained by chiral symmetry, but the smallness of the Higgs mass required supersymmetry. For many of us the dogma was finally put to rest when the Higgs mass was found by the LHC to be unnaturally small without any sign of the accompanying supersymmetric partners. Fine tuning had always been a feature of particle physics but with the Higgs it became starkly apparent. The vacuum would not tend to squander its range of scope for fine-tuning, limited as it is by the size of the landscape. If there is a cheaper way the typical vacuum will find it, so that there is enough scope left to tune nuclear physics and chemistry for the right components required by life. Therefore I expect supersymmetry or some similar mechanism to come in at some higher scale to stabilise the Higgs mass and the cosmological constant.
It may be a very long time indeed before that can be verified.

Now that I have learnt to accept anthropic reasoning, the multiverse and fine-tuning, I see the world in a very different way. If nature is fine-tuned for life it is plausible that there is only one major route to intelligence in the universe. Despite the plethora of new planets being discovered around distant stars, the Earth appears as a rare jewel among them. Its size and position in the goldilocks zone around a long-lived stable star in a quiet part of a well behaved galaxy is not typical. Even the moon and the outer gas giants seem to play their role in keeping us safe from natural instabilities. Yet if we were too safe, life would have settled quickly into a stable form that could not evolve to higher functions. Regular cataclysmic events in our history were enough to cause mass extinction events without destroying life altogether, allowing it to develop further and further until higher intelligence emerged. Microbial life may be relatively common on other worlds but we are exquisitely rare. No sign of alien intelligence drifts across time and space from distant worlds. I now think that where life exists it will be based on DNA and cellular structures much like all life on Earth. It will require water and carbon, and to evolve to higher forms it will require all the commonly available elements, each of which has its function in our biology or the biology of the plants on which we depend. Photosynthesis may be the unique way in which a stable carbon cycle can complement our need for oxygen. Any intelligent life will be much like us and it will be rare. This I see as the most significant prediction of fine tuning and the multiverse.

String Theory

String theory was the culmination of twentieth century developments in particle physics leading to ever more unified theories. By 2000 physicists had what appeared to be a unique mother theory capable of including all known particle physics in its spectrum. They just had to find the mechanism that collapsed its higher dimensions down to our familiar 4 dimensional spacetime. Unfortunately it turned out that there were many such mechanisms and no obvious means to figure out which one corresponds to our universe. This leaves string theorists in a position unable to predict anything useful that would confirm their theory. Some people have claimed that this makes the theory unscientific and that physicists should abandon the idea and look for a better alternative. Such people are misguided. String theory is not just a random set of ideas that people tried. It was the end result of exploring all the logical possibilities for the ways in which particles can work. It is the only solution to the problem of finding a consistent interaction of matter with gravity in the limit of weak fields on flat spacetime. I don't mean merely that it is the only solution anyone could find, it is the only solution that can work. If you throw it away and start again you will only return to the same answer by the same logic. What people have failed to appreciate is that quantum gravity acts at energy scales well above those that can be explored in accelerators or even in astronomical observations. Expecting string theory to explain low energy particle physics was like expecting particle physics to explain biology.
In principle it can, but to derive biochemistry from the standard model you would need to work out the laws of chemistry and nuclear physics from first principles and then search through the properties of all the possible chemical compounds until you realised that DNA can self-replicate. Without input from experiment this is an impossible program to put into practice. Similarly, we cannot hope to derive the standard model of particle physics from string theory until we understand the physics that controls the energy scales that separate them. There are about 12 orders of magnitude in energy scale that separate chemical reactions from the electroweak scale and 15 orders of magnitude that separate the electroweak scale from the Planck scale. We have much to learn.

How then can we test string theory? To do so we will need to look beyond particle physics and find some feature of quantum gravity phenomenology. That is not going to be easy because of the scales involved. We can't reach the Planck energy, but sensitive instruments may be able to probe very small distance scales as small variations of effects over large distances. There is also some hope that a remnant of the initial big bang remains in the form of low frequency radio or gravitational waves. But first string theory must predict something to observe at such scales, and this presents another problem.

Despite nearly three decades of intense research, string theorists have not yet found a complete non-perturbative theory of how string theory works. Without it, predictions at the Planck scale are not in any better shape than predictions at the electroweak scale. Normally quantised theories explicitly include the symmetries of the classical theories they quantised. As a theory of quantum gravity, string theory should therefore include diffeomorphism invariance of spacetime, and it does, but not explicitly. If you look at string theory as a perturbation on a flat spacetime you find gravitons, the quanta of gravitational interactions. This means that the theory must respect the principles of general relativity in small deviations from the flat spacetime, but it is not explicitly described in a way that makes the diffeomorphism invariance of general relativity manifest. Why is that?

Part of the answer coming from non-perturbative results in string theory is that the theory allows the topology of spacetime to change. Diffeomorphisms on different topologies form different groups, so there is no way that we could see diffeomorphism invariance explicitly in the formulation of the whole theory. The best we could hope would be to find some group that has every diffeomorphism group as a subgroup and look for invariance under that. Most string theorists just assume that this argument means that no such symmetry can exist and that string theory is therefore not based on a principle of universal symmetry. I on the other hand have proposed that the universal group must contain the full permutation group on spacetime events. The diffeomorphism group for any topology can then be regarded as a subgroup of this permutation group. String theorists don't like this because they see spacetime as smooth and continuous whereas permutation symmetry would suggest a discrete spacetime. I don't think these two ideas are incompatible. In fact we should see spacetime as something that does not exist at all in the foundations of string theory. It is emergent.
The permutation symmetry on events is really to be identified with the permutation symmetry that applies to particle states in quantum mechanics. A smooth picture of spacetime then emerges from the interactions of these particles which in string theory are the partons of the strings. This was an idea I formulated twenty years ago, building symmetries that extend the permutation group first to large-N matrix groups and then to necklace Lie-algebras that describe the creation of string states. The idea was vindicated when matrix string theory was invented shortly after but very few people appreciated the connection. The matric theories vindicated the matrix extensions in my work. Since then I have been waiting patiently for someone to vindicate the necklace Lie algebra symmetries as well. In recent years we have seen a new approach to quantum field theory for supersymmetric Yang-Mills which emphasises a dual conformal symmetry rather than the gauge symmetry. This is a symmetry found in the quantum scattering amplitudes rather than the classical limit. The symmetry takes the form of a Yangian symmetry related to the permutations of the states. I find it plausible that this will turn out to be a remnant of necklace Lie-algebras in the more complete string theory. There seems to be still some way to go before this new idea expressed in terms of an amplituhedron is fully worked out but I am optimistic that I will be proven right again, even if few people recognise it again. Once this reformulation of string theory is complete we will see string theory in a very different way. Spacetime, causality and even quantum mechanics may be emergent from the formalism. It will be non-perturbative and rigorously defined. The web of dualities connecting string theories and the holographic nature of gravity will be derived exactly from first principles. At least that is what I hope for. In the non-perturbative picture it should be clearer what happens at high energies when space-time breaks down. We will understand the true nature of the singularities in black-holes and the big bang. I cannot promise that these things will be enough to provide predictions that can be observed in real experiments or cosmological surveys, but it would surely improve the chances. Loop Quantum Gravity If you want to quantised a classical system such as a field theory there are a range of methods that can be used. You can try a Hamiltonian approach, or a path integral approach for example. You can change the variables or introduce new ones, or integrate out some degrees of freedom. Gauge fixing can be handled in various ways as can renormalisation. The answers you get from these different approaches are not quite guaranteed to be equivalent. There are some choices of operator ordering that can affect the answer. However, what we usually find in practice is that there are natural choices imposed by symmetry principles or other requirements of consistency and the different results you get using different methods are either equivalent or very nearly so, if they lead to a consistent result at all. What should this tell us about quantum gravity? Quantising the gravitational field is not so easy. It is not renormalisable in the same way that other gauge theories are, yet a number of different methods have produced promising results. Supergravity follows the usual field theory methods while String theory uses a perturbative generalisation derived from the old S-matrix approach. 
Loop Quantum Gravity makes a change of variables and then follows a Hamiltonian recipe. There are other methods such as Twistor Theory, Non-Commutative Geometry, Dynamical Triangulations, Group Field Theory, Spin Foams, Higher Spin Theories etc. None has met with success in all directions but each has its own successes in some directions. While some of these approaches have always been known to be related, others have been portrayed as rivals. In particular the subject seems to be divided between methods related to string theory and methods related to Loop Quantum Gravity. It has always been my expectation that the two sides will eventually come together, simply because of the fact that different ways of quantising the same classical system usually do lead to equivalent results. Superficially strings and loops seem like related geometric objects, i.e. one dimensional structures in space tracing out two dimensional world sheets in spacetime. String Theorists and Loop Qunatum Gravitists alike have scoffed at the suggestion that these are the same thing. They point out that string pass through each other unlike the loops which form knot states. String theory also works best in ten dimensions while LQG can only be formulated in 4. String Theory needs supersymmetry and therefore matter, while LQG tries to construct first a consistent theory of quantum gravity alone. I see these differences very differently from most physicists. I observe that when strings pass through each other they can interact and the algebraic diagrams that represent this are very similar to the Skein relations used to describe the knot theory of LQG. String theory does indeed use the same mathematics of quantum groups to describe its dynamics. If LQG has not been found to require supersymmetry or higher dimensions it may be because the perturbative limit around flat spacetime has not yet been formulated and that is where the consistency constraints arise. In fact the successes and failures of the two approaches seem complementary. LQG provides clues about the non-perturbative background independent picture of spacetime that string theorists need. Methods from Non-Commutative Geometry have been incorporated into string theory and other approaches to quantum gravity for more than twenty years and in the last decade we have seen Twistor Theory applied to string theory. Some people see this convergence as surprising but I regard it as natural and predictable given the nature of the process of quantisation. Twistors have now been applied to scattering theory and to supergravity in 4 dimensions in a series of discoveries that has recently led to the amplituhedron formalism. Although the methods evolved from observations related to supersymmetry and string theory they seem in some ways more akin to the nature of LQG. Twistors were originated by Penrose as an improvement on his original spin-network idea and it is these spin-networks that describe states in LQG. I think that what has held LQG back is that it separates space and time. This is a natural consequence of the Hamiltonian method. LQG respects diffeomorphism invariance, unlike string theory, but it is really only the spatial part of the symmetry that it uses. Spin networks are three dimensional objects that evolve in time, whereas Twistor Theory tries to extend the network picture to 4 dimensions. 
People working on LQG have tended to embrace the distinction between space and time in their theory and have made it a feature claiming that time is philosophically different in nature from space. I don’t find that idea appealing at all. The clear lesson of relativity has always been that they must be treated the same up to a sign. The amplituhedron makes manifest the dual conformal symmetry to yang mills theory in the form of an infinite dimensional Yangian symmetry. These algebras are familiar from the theory of integrable systems where they may were deformed to bring in quantum groups. In fact the scattering amplitude theory that applies to the planar limit of Yang Mills does not use this deformation, but here lies the opportunity to united the theory with Loop Quantum Gravity which does use the deformation. Of course LQG is a theory of gravity so if it is related to anything it would be supergravity or sting theory, not Yang Mills. In the most recent developments the scattering amplitude methods have been extended to supergravity by making use of the observation that gravity can be regarded as formally the square of Yang-Mills. Progress has thus been made on formulating 4D supergravity using twistors, but so far without this deformation. A surprise observation is that supergravity in this picture requires a twistor string theory to make it complete. If the Yangian deformation could be applied to these strings then they could form knot states just like the loops in LQG. I cant say if it will pan out that way but I can say that it would make perfect sense if it did. It would mean that LQG and string theory would finally come together and methods that have grown out of LQG such as spin foams might be applied to string theory. The remaining mystery would be why this correspondence worked only in 4 spacetime dimensions. Both Twistors and LQG use related features of the symmetry of 4 dimensional spacetime that mean it is not obvious how to generalise to higher dimensions, while string theory and supergravity have higher forms that work up to 11 dimensions. Twistor theory is related to conformal field theory is a reduced symmetry from geometry that is 2 dimensions higher. E.g. the 4 dimensional conformal group is the same as the 6 dimensional spin groups. By a unique coincidence the 6 dimensional symmetries are isomorphic to unitary or special linear groups over 4 complex variables so these groups have the same representations. In particular the fundamental 4 dimensional representation of the unitary group is the same as the Weyl spinor representation in six real dimensions. This is where the twistors come from so a twistor is just a Weyl spinor. Such spinors exist in any even number of dimensions but without the special properties found in this particular case. It will be interesting to see how the framework extends to higher dimensions using these structures. Quantum Mechanics Physicists often chant that quantum mechanics is not understood. To paraphrase some common claims: If you think you understand quantum mechanics you are an idiot. If you investigate what it is about quantum mechanics that is so irksome you find that there are several features that can be listed as potentially problematical; indeterminacy, non-locality, contextuality, observers, wave-particle duality and collapse. I am not going to go through these individually; instead I will just declare myself a quantum idiot if that is what understanding implies. 
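For reference, the low-dimensional isomorphisms behind these last remarks can be written out explicitly (standard group theory rather than anything specific to the argument above):

\[
\mathrm{Spin}(6) \cong SU(4), \qquad \mathrm{Spin}(4,2) \cong SU(2,2),
\]

and the conformal group of 4-dimensional Minkowski space is $SO(4,2)$ up to discrete factors, so a twistor — the fundamental $\mathbf{4}$ of $SU(2,2)$ — is the same object as a Weyl spinor of the six-dimensional geometry of signature $(4,2)$.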
All these features of quantum mechanics are experimentally verified and there are strong arguments that they cannot be easily circumvented using hidden variables. If you take a multiverse view there are no conceptual problems with observers or wavefunction collapse. People only have problems with these things because they are not what we observe at macroscopic scales and our brains are programmed to see the world classically. This can be overcome through logic and mathematical understanding in the same way as the principles of relativity. I am not alone in thinking that these things are not to be worried about, but there are some other features of quantum mechanics that I have a more extraordinary view of.

Another aspect of quantum mechanics that gives some cause for concern is its linearity. Theories that are linear are usually too simple to be interesting. Everything decouples into modes that act independently in a simple harmonic way. In quantum mechanics we can in principle diagonalise the Hamiltonian to reduce the whole universe to a sum over energy eigenstates. Can everything we experience be encoded in that one dimensional spectrum? In quantum field theory this is not a problem, but there we have spacetime as a frame of reference relative to which we can define a privileged basis for the Hilbert space of states. It is no longer just the energy spectrum that counts. But what if spacetime is emergent? What then do we choose our Hilbert basis relative to? The symmetry of the Hilbert space must be broken for this emergence to work, but linear systems do not break their symmetries. I am not talking about the classical symmetries of the type that gets broken by the Higgs mechanism. I mean the quantum symmetries in phase space.

Suppose we accept that string theory describes the underlying laws of physics, even if we don't know which vacuum solution the universe selects. Doesn't string theory also embody the linearity of quantum mechanics? It does so long as you already accept a background spacetime, but in string theory the background can be changed by dualities. We don't know how to describe the framework in which these dualities are manifest, but I think there is reason to suspect that quantum mechanics is different in that space, and it may not be linear.

The distinction between classical and quantum is not as clear-cut as most physicists like to believe. In perturbative string theory the Feynman diagrams are given by string worldsheets which can branch when particles interact. Is this the classical description or the quantum description? The difference between classical and quantum is that the worldsheets will extremise their area in the classical solutions but follow any history in the quantum case. But then we already have multi-particle states and interactions in the classical description. This is very different from quantum field theory.

Stepping back though, we might notice that quantum field theory also has some schizophrenic characteristics. The Dirac equation is treated as classical with non-linear interactions even though it is a relativistic Schrödinger equation, with quantum features such as spin already built in. After you second quantise you get a sum over all possible Feynman graphs, much like the quantum path integral sum over field histories, but in this comparison the Feynman diagrams act as classical configurations. What is this telling us? My answer is that the first and second quantisations are the first steps in a sequence of multiple iterated quantisations.
Each iteration generates new symmetries and dimensions. For this to work the quantised layers must be non-linear, just as the interaction between electrons and photons is non-linear in the so-called first-quantised field theory. The idea of multiple quantisations goes back many years and did not originate with me, but I have a unique view of its role in string theory based on my work with necklace Lie algebras, which can be constructed in an iterated procedure where one necklace dimension is added at each step.

Physicists working on scattering amplitudes are at last beginning to see that the symmetries in nature are not just those of the classical world. There are dual-conformal symmetries that are completed only in the quantum description. These seem to merge with the permutation symmetries of the particle statistics. The picture is much more complex than the one painted by the traditional formulations of quantum field theory.

What then is quantisation? When a Fock space is constructed the process is formally like an exponentiation. In the category picture we start to see an origin of what quantisation is, because exponentiation generalises to the process of constructing all functions between sets, or all functors between categories, and so on to higher n-categories. Category theory seems to encapsulate the natural processes of abstraction in mathematics. This I think is what lies at the base of quantisation. Variables become functional operators, objects become morphisms. Quantisation is a particular form of categorification, one we don't yet understand. Iterating this process constructs higher categories until the unlimited process itself forms an infinite omega-category that describes all natural processes in mathematics and in our multiverse. Crazy ideas? Ill-formed? Yes, but I am just saying – that is the way I see it.

Black Hole Information

We have seen that quantum gravity can be partially understood by using the constraint that it needs to make sense in the limit of small perturbations about flat spacetime. This led us to strings and supersymmetry. There is another domain of thought experiments that can tell us a great deal about how quantum gravity should work, and it concerns what happens when information falls into a black hole. The train of arguments is well known so I will not repeat it here. The first conclusion is that the entropy of a black hole is given by its horizon area in Planck units, and the entropy in any other volume is less than the Bekenstein bound taken from the surrounding surface. This leads to the holographic principle: everything that can be known about the state inside the volume can be determined from a state on its surface. To explain how the inside of a black hole can be determined from its event horizon or from outside, we use a black hole correspondence principle, which uses the fact that we cannot observe both the inside and then the outside at a later time. Although the reasoning that leads to these conclusions is long and unsupported by any observation, it is in my opinion quite robust and is backed up by theoretical models such as AdS/CFT duality.

There are some further conclusions that I would draw from black hole information that many physicists might disagree with. If the information in a volume is limited by the surrounding surface then it means we cannot be living in a closed universe with a finite volume like the surface of a 4-sphere.
If we did, you could extend the boundary until it shrank back to zero on the far side and conclude that there is no information in the universe. Some physicists prefer to think that the Bekenstein bound should be modified on large scales so that this conclusion cannot be drawn, but I think the holographic principle holds perfectly at all scales, and the universe must be infinite, or finite with a different topology.

Recently there has been a claim that the holographic principle leads to the conclusion that the event horizon must be a firewall through which nothing can pass. This conclusion is based on the assumption that information inside a black hole is replicated outside through entanglement. If you drop two particles with fully entangled spin states into a black hole you cannot have another particle outside that is also entangled to them; that would not make sense. I think the information is replicated on the horizon in a different way. It is my view that the apparent information in the bulk volume field variables must be mostly redundant, and that this implies a large symmetry where the degrees of symmetry match the degrees of freedom in the fields or strings. Since there are fundamental fermions it must be a supersymmetry. I call a symmetry of this sort a complete symmetry. We know that when there is gauge symmetry there are corresponding charges that can be determined on a boundary by measuring the flux of the gauge field. In my opinion a generalisation of this using a complete symmetry accounts for holography. I don't think that this complete symmetry is a classical symmetry. It can only be known properly in a full quantum theory, much as dual conformal gauge symmetry is a quantum symmetry.

Some physicists assume that if you could observe Hawking radiation you would be looking at information coming from the event horizon. It is not often noticed that the radiation is thermal, so if you observe it you cannot determine where it originated from. There is no detail you could focus on to measure the distance of the source. It makes more sense to me to think of this radiation as emanating from a backward singularity inside the black hole. This means that a black hole, once formed, is also a white hole. This may seem odd but it is really just an extension of the black hole correspondence principle. I also agree with those who say that as black holes shrink they become indistinguishable from heavy particles that decay by emitting radiation.

Every theorist working on fundamental physics needs some background philosophy to guide their work. They may think that causality and time are fundamental, or that they are emergent, for example. They may have the idea that deeper laws of physics are simpler. They may like reductionist principles or instead prefer a more anthropic world view. Perhaps they think the laws of physics must be discrete, combinatorial and finite. They may think that reality and mathematics are the same thing, or that reality is a computer simulation, or that it is in the mind of God. These things affect the theorist's outlook and influence the kind of theories they look at. They may be metaphysical and sometimes completely untestable in any real sense, but they are still important to the way we explore and understand the laws of nature. In that spirit I have formed my own elaborate ontology as my way of understanding existence and the way I expect the laws of nature to work out.
It is not complete or finished, and it is not a scientific theory in the usual sense, but I find it a useful guide for where to look and what to expect from scientific theories. Someone else may take a completely different view that appears contradictory but may ultimately come back to the same physical conclusions. That I think is just the way philosophy works.

In my ontology it is universality that counts most. I do not assume that the most fundamental laws of physics should be simple or beautiful or discrete or finite. What really counts is universality, but that is a difficult concept that requires some explanation.

It is important not to be misled by the way we think. Our mind is a computer running a program that models space, time and causality in a way that helps us live our lives, but that does not mean that these things are important in the fundamental laws of physics. Our intuition can easily mislead our way of thinking. It is hard to understand that time and space are interlinked and to some extent interchangeable, but we now know from the theory of relativity that this is the case. Our minds understand causality and free will, the flow of time and the difference between past and future, but we must not make the mistake of assuming that these things are also important for understanding the universe. We like determinacy, predictability and reductionism, but we can't assume that the universe shares our likes. We experience our own consciousness as if it is something supernatural, but perhaps it is no more than a useful feature of our psychology, a trick to help us think in a way that aids our survival.

Our only real ally is logic. We must consider what is logically possible and accept that most of what we observe is emergent rather than fundamental. The realm of logical possibilities is vast and described by the rules of mathematics. Some people call it the Platonic realm and regard it as a multiverse within its own level of existence, but such thoughts are just mind tricks. They form a useful analogy to help us picture the mathematical space when really logical possibilities are just that. They are possibilities stripped of attributes like reality or existence or place.

Philosophers like to argue about whether mathematical concepts are discovered or invented. The only fair answer is both or neither. If we made contact with alien life tomorrow it is unlikely that we would find them playing chess. The rules of chess are mathematical but they are a human invention. On the other hand we can be quite sure that our new alien friends would know how to use the real numbers if they are at least as advanced as us. They would also probably know about group theory, complex analysis and prime numbers. These are the universal concepts of mathematics that are "out there" waiting to be discovered. If we forgot them we would soon rediscover them in order to solve general problems.

Universality is a hard concept to define. It distinguishes the parts of mathematics that are discovered from those that are merely invented, but there is no sharp dividing line between the two. Universal concepts are not necessarily simple to define. The real numbers, for example, are notoriously difficult to construct if you start from more basic axiomatic constructs such as set theory. To do that you have to first define the natural numbers using the cardinality of finite sets and Peano's axioms. This is already an elaborate structure and it is just the start.
You then extend to the rationals and then to the reals using something like the Dedekind cut. Not only is the definition long and complicated, it is also very non-unique. The aliens may have a different definition and may not even consider set theory the right place to start, but it is certain that they would still possess the real numbers as a fundamental tool with the same properties as ours. It is the higher level concept that is universal, not the definition.

Another example of universality is the idea of computability. A universal computer is one that is capable of following any algorithm. To define this carefully we have to pick a particular mathematical construction of a theoretical computer with unlimited memory space. One possibility is a Turing machine, but we can use any typical programming language or any one of many logical systems such as certain cellular automata. We find that the set of numbers or integer sequences that they can calculate is always the same. Computability is therefore a universal idea even though there is no obviously best way to define it.

Universality also appears in complex physical systems where it is linked to emergence. The laws of fluid dynamics, elasticity and thermodynamics describe the macroscopic behaviour of systems built from many small interacting elements, but the details of those interactions are not important. Chaos arises in any nonlinear system of equations at the boundary where simple behaviour meets complexity, and we find that it is described by certain numbers that are independent of how the system is constructed. These examples show how universality is of fundamental importance in physical systems and motivate the idea that it can be extended to the formation of the fundamental laws too.

Universality and emergence play a key role in my ontology and they work at different levels. The most fundamental level is the Platonic realm of mathematics. Remember that the use of the word realm is just an analogy. You can't destroy this idea by questioning the realm's existence or whether it is inside our minds. It is just the concept that contains all logically consistent possibilities. Within this realm there are things that are invented, such as the game of chess, or the text that forms the works of Shakespeare, or gods. But there are also the universal concepts that any advanced team of mathematicians would discover to solve the general problems they invent.

I don't know precisely how these universal concepts emerge from the Platonic realm, but I use two different analogies to think about it. The first is emergence in complex systems, which gives us the rules of chaos and thermodynamics. This can be described using statistical physics, which leads to critical systems and scaling phenomena where universal behaviour is found. The same might apply to the complex system consisting of the collection of all mathematical concepts. From this system the laws of physics may emerge as universal behaviour. I call this analogy the Theory of Theories; others call it the Mathematical Universe Hypothesis. However, this statistical physics analogy is not perfect. Another way to think about what might be happening is in terms of the process of abstraction. We know that we can multiply some objects in mathematics, such as permutations or matrices, and they follow the rules of an abstract structure called a group. Mathematics has other abstract structures like fields and rings and vector spaces and topologies.
These are clearly important examples of universality, but we can take the idea of abstraction further. Groups, fields, rings etc. all have a definition of isomorphism and also something equivalent to homomorphism. We can look at these concepts abstractly using category theory, which is a generalisation of set theory encompassing these concepts. In category theory we find universal ideas such as natural transformations that help us understand the lower level abstract structures. This process of abstraction can be continued, giving us higher dimensional n-categories. These structures also seem to be important in physics. I think of emergence and abstraction as two facets of the deep concept of universality. It is something we do not understand fully, but it is what explains the laws of physics and the form they take at the most fundamental level.

What physical structures emerge at this first level? Statistical physics systems are very similar in structure to quantum mechanics; both are expressed as a sum over possibilities. In category theory we also find abstract structures very like quantum mechanical systems, including structures analogous to Feynman diagrams. I think it is therefore reasonable to assume that some form of quantum physics emerges at this level. However, time and unitarity do not. The quantum structure is something more abstract, like a quantum group. The other physical idea present in this universal structure is symmetry, but again in an abstract form more general than group theory. It will include supersymmetry and other extensions of ordinary symmetry. I think it likely that this is really a system described by a process of multiple quantisation where structures of algebra and geometry emerge, but with multiple dimensions and a single universal symmetry. I need a name for this structure that emerges from the Platonic realm, so I will call it the Quantum Realm. When people reach for what is beyond M-Theory or for an extension of the amplituhedron they are looking for this quantum realm. It is something that we are just beginning to touch with 21st century mathematics.

From this quantum realm another more familiar level of existence emerges. This is a process analogous to superselection of a particular vacuum. At this level space and time emerge and the universal symmetry is broken down to a much smaller symmetry. Perhaps a different selection would provide different numbers of space and time dimensions and different symmetries. The laws of physics that then emerge are the laws of relativity and particle physics we are familiar with. This is our universe.

Within our universe there are other processes of emergence which we are more familiar with. Causality emerges from the laws of statistical physics within our universe, with the arrow of time rooted in the big bang singularity. Causality is therefore much less fundamental than quantum mechanics and space and time. The familiar structures of the universe also emerge within, including life. Although this places life at the least fundamental level, we must not forget the anthropic influence it has on the selection of our universe from the quantum realm.

Experimental Outlook

Theoretical physics continues to progress in useful directions, but to keep it on track more experimental results are needed. Where will they come from? In recent decades we have got used to mainly negative results in experimental particle physics, or at best results that merely confirm theories from 50 years ago.
The significance of negative results is often understated, to the extent that the media portray them as failures. This is far from being the case. The LHC's negative results for SUSY and other BSM exotics may be seen as disappointing, but they have led to the conclusion that nature appears fine-tuned at the weak scale. Few theorists had considered the implications of such a result before, but now they are forced to. Instead of wasting time on simplified SUSY theories they will turn their efforts to the wider parameter space or they will look for other alternatives. This is an important step forward.

A big question now is what the next accelerator will be. The ILC or a new LEP would be great Higgs factories, but it is not clear that they would find enough beyond what we already know. Given that the Higgs is at a mass that gives it a narrow width, I think it would be better to build a new detector for the LHC that is specialised for seeing diphoton and 4-lepton events with the best possible energy and angular resolution. The LHC will continue to run for several decades and can be upgraded to higher luminosity and even higher energy. This should be taken advantage of as much as possible.

However, the best advance that would make the LHC more useful would be to change the way it searches for new physics. It has been too closely designed with specific models in mind and should instead be run to search for generic signatures of particles with the full range of possible quantum numbers: spin, charge, lepton and baryon number. Even more importantly, the detector collaborations should be openly publishing likelihood numbers for all possible decay channels so that theorists can plug in any models they have, or will have in the future, and test them against the LHC results. This would massively increase the value of the accelerator and it would encourage theorists to look for new models and even scan the data for generic signals. The LHC experimenters have been far too greedy and lazy by keeping the data to themselves and considering only a small number of models.

There is also a movement to construct a 100 TeV hadron collider. This would be a worthwhile long term goal, and even if it did not find new particles that would in itself be a profound discovery about the ways of nature. If physicists want to do that they are going to have to learn how to justify the cost to contributing nations and their tax payers. It is no use talking about just the value of pure science and some dubiously justified spin-offs. CERN must reinvent itself as a postgraduate physics university where people learn how to do highly technical research in collaborations that cross international frontiers. Most will go on to work in industry using the skills they have developed in technological research, or even as technology entrepreneurs. This is the real economic benefit that big physics brings, and if CERN can't track how that works and promote it, it cannot expect future funding.

With the latest results from the LUX experiment, hopes of direct detection of dark matter have faded. Again the negative result is valuable, but it may just mean that dark matter does not interact weakly at all. The search should go on, but I think more can be done with theory to model dark matter and its role in galaxy formation. If we can assume that dark matter started out with the same temperature as the visible universe then it should be possible to model its evolution as it settled into galaxies and estimate the mass of the dark matter particle.
This would help in searching for it. Meanwhile the searches for dark matter will continue, including searches for other possible forms such as axions. Astronomical experiments such as AMS-02 may find important evidence, but it is hard to be optimistic there. A better prospect exists for observations of the dark age of the universe using new radio telescopes such as the Square Kilometre Array, which could detect hydrogen gas clouds as they formed the first stars and galaxies.

Neutrino physics is one area that has seen positive results that go beyond the standard model. This is therefore an important area to keep going. They need to settle the question of whether neutrinos are Majorana spinors and produce figures for the neutrino masses. Observation of cosmological high energy neutrinos is also an exciting area, with the IceCube experiment proving its value.

Gravitational wave searches have continued to be a disappointment, but this is probably due to over-optimism about the nature of cosmological sources rather than a failure of the theory of gravitational waves themselves. The new run with Advanced LIGO must find them, otherwise the field will be in trouble. The next step would be LISA or a similar detector in space.

Precision measurements are another area that could bring results. Measurements of the electron's electric dipole moment can be further improved, and there must be other similar opportunities for inventive experimentalists. If a clear anomaly is found it could set the scale for new physics and justify the next generation of accelerators. There are other experiments that could yield positive results, such as cosmic ray observatories and low frequency radio antennae that might find an echo from the big bang beyond the veil of the primordial plasma.

But if I had to nominate one area for new effort it would have to be the search for proton decay. So far results have been negative, pushing the proton lifetime to at least 10^34 years, but this has helped eliminate the simplest GUT models that predicted a shorter lifetime. SUSY models predict lifetimes of over 10^36 years, but this can be reached if we are willing to set up a detector around a huge volume of clear Antarctic ice. IceCube has demonstrated the technology, but for proton decay a finer array of light detectors is needed to catch the lower energy radiation from proton decay. If decays were detected they would give us positive information about physics at the GUT scale. This is something of enormous importance and its priority must be raised.

Apart from these experiments we must rely on the advance of precision technology and the inventiveness of the experimental physicist. Ideas such as the holometer may have little hope of success, but each negative result tells us something, and if someone gets lucky a new flood of experimental data will nourish our theories. There is much that we can still learn.

Book Review: Time Reborn by Lee Smolin

April 24, 2013

Fill in the blank in this sentence: "The best studied approach to quantum gravity is ___________________ and it appears to allow for a wide range of choices of elementary particles and forces."

Did you answer "String Theory"? I did, but Lee Smolin thinks the answer is his own alternative theory, "Loop Quantum Gravity" (page 98). This is one of many things he says in his new book that I completely disagree with.
That's fine, because while theoretical physicists agree rather well on matters of established physics such as general relativity and quantum mechanics, you will be hard pushed to find two with the same philosophical ideas about how to proceed next. Comparing arguments is an important part of looking for a way forward.

Here is another non-technical point I disagree with. In the preface he says that he will "gently introduce the material the lay reader needs" (page xxii). Trust me when I say that although this book is written without equations it is not for the "lay reader" (an awkward term that originally meant non-clergyman). If you are not already familiar with the basic ideas of general relativity, quantum mechanics etc. and all the jargon that goes with them, then you will probably not get far into this book. Books like this are really written for physicists who are either working on similar areas or who at least have a basic understanding of the issues involved. Of course if the book were introduced as such it would not be published by Allen Lane. Instead it would be a monograph in one of those obscure vanity series by Wiley or Springer where they run off a few hundred copies and sell them at $150/£150/€150 (same number in any other currency). OK, perhaps I took too many cynicism pills this morning.

The message Smolin wants to get across is that time is "real" and not an "illusion". Already I am having problems with the language. When people start to talk about whether time is real I hear in my brain the echo of Samuel Johnson's well quoted retort "I refute it thus!" OK, you can't kick time, but you can kick a clock, and time is real. The real question is "Is time fundamental or emergent?" and Smolin does get round to this more appropriate terminology in the end.

In the preface he tells us what he means when he says that time is real. This includes "The past was real but is no longer real" and "The future does not yet exist and is therefore open" (page xiv). In other words he is taking our common language-based intuitive notions of how we understand time and saying that this is fundamentally correct. The problem with this is that when Einstein invented relativity he taught me that my intuitive notions of time are just features of my wetware program that evolved to help me get around at a few miles per hour, remembering things from the past so that I could learn to anticipate the future, etc. It would be foolish to expect these things to be fundamental in realms where we move close to the speed of light, let alone at the centre of a black hole where density and temperature reach unimaginable extremes. Of course Smolin is not denying the validity of relative time, but he wants me to accept that common notions of the continuous flow of time and causality are fundamental, even though the distinction between past and future is an emergent feature of thermodynamics that is purely statistical and already absent from the known fundamental laws.

His case is even harder to buy given that he does accept the popular idea that space is emergent. Smolin has always billed himself as the relativist (unlike those string theorists) who understands that the principles of general relativity must be applied to quantum gravity. How then can he say that space and time need to be treated so differently? This seems to be an idea that came to him in the last few years.
There is no hint of it in a technical article he wrote in 2005, where he makes the case for background independence and argues that both space and time should be equally emergent. This new point of view seems to be a genuine change of mind, and I bought the book because I was curious to know how this came about. The preface might have been a good place for him to tell me when and how he changed his mind, but there is nothing about it (in fact the preface and introduction are similar and could have been stuck together into one section without any sign of discontinuity between them).

Smolin does however explain why he thinks time is fundamental. The main argument is that he believes the laws of physics have evolved to become fine-tuned, with changes accumulating each time a baby universe is born. This is his old idea that he wrote about at length in another book, "The Life of the Cosmos". If this theory is to be true he now thinks that time must be fundamentally similar to our intuitive notions of continuously flowing time. I would tend to argue the converse: that time is emergent, so we should not take the cosmological evolution theory too seriously. I don't think many physicists follow his evolution theory, but the alternatives such as eternal inflation and anthropic landscapes are equally contentious and involve piling about twenty layers of speculation on top of each other without much to support them. I think this is a great exercise to indulge in, but we should not seriously think we have much idea of what can be concluded from it just yet. Smolin does have some other technical arguments to support his view of time, basically along the lines that the theories that work best so far for quantum gravity use continuous time even when they demonstrate emergent space. I don't buy this argument either. We still have not solved quantum gravity after all. He also cites lots of long-gone philosophers, especially Leibniz.

Apart from our views on string theory, time and who such books are aimed at, I want to mention one other issue where I disagree with Smolin. He says that all symmetries and conservation laws are approximate (e.g. pages 117-118). Here he seems to agree with Sean Carroll and even Motl (!) (but see comments). I have explained many times why energy, momentum and other gauge charges are conserved in general relativity in a non-trivial and experimentally confirmed way. Smolin says that "we see from the example of string theory that the more symmetry a theory has, the less its explanatory power" (page 280). He even discusses the preferred reference frame given by the cosmic background radiation and suggests that this is fundamental (page 167). I disagree, and in fact I take the opposite (old fashioned) view that all the symmetries we have seen are part of a unified universal symmetry that is huge but hidden, and that it is fundamental, exact, non-trivial and really important. Here I seem to be swimming against the direction the tide is now flowing, but I will keep on going.

OK, so I disagree with Smolin, but I have never met him and there is nothing personal about it. If he ever deigned to talk to an outsider like me I am sure we could have a lively and interesting discussion about it. The book itself covers many points and will be of interest to anyone working on quantum gravity, who should be aware of all the different points of view and why people hold them, so I recommend it to them, but probably not to the average lay person living next door.
See also Not Even Wrong for another review, and The Reference Frame for yet another. There is also a review with a long interview in The Independent.

Energy is Conserved (in cosmology)

August 17, 2010

On viXra log we have been having some lengthy discussions on energy conservation in classical general relativity. I have been trying to convince people that energy is conserved, but most of those who have expressed an opinion think that energy is not conserved, or that the law of conservation of energy is somehow trivial in general relativity with no useful physical content. I am going to have one more try at showing why energy is conserved and is not trivial, by tackling the question of energy conservation in cosmology.

Some physicists have claimed that energy conservation is violated when you look at the cosmic background radiation. This radiation consists of photons that are redshifted as the universe expands. The total number of photons remains constant but their individual energy decreases because it is proportional to their frequency ($E = hf$) and the frequency decreases due to redshift. This implies that the total energy in the radiation field decreases, but if energy is conserved, where does it go? The answer is that it goes into the gravitational field, but to make this answer convincing we need some equations.

If the radiation argument is not convincing enough, what about the case of the cosmological constant, also known as dark energy? With modern precision cosmological observation it is now known that the cosmological constant is not zero and that dark energy contributes about 70% of the total non-gravitational energy content of the observable universe at the current cosmological epoch. (We assume here a standard cosmological model in which the dark energy is a fixed constant and not a dynamic field.) As the universe expands, the density of dark energy stays constant. This means that in an expanding region of space the total dark energy must be increasing. If energy is conserved, where is this energy coming from? Again the answer is that it comes from the gravitational field, but we need to look at the equations.

These are questions that surfaced relatively recently. As I mentioned in my history post, the original dispute over energy conservation in general relativity began between Klein, Hilbert and Einstein in about 1916. It was finally settled by about 1957 after the work of Landau, Lifshitz, Bondi, Wheeler and others who sided with Einstein. After that it was mostly discussed only among science historians and philosophers. However, the discovery of the cosmic microwave background and then of dark energy has brought the discussion back, with some physicists once again doubting that the law of energy conservation can be correct.

Energy in the real universe has contributions from all physical fields and radiation, including gravity and dark energy. It is constantly changing from one form to another, and it also flows from one place to another. It can travel in the form of radiation such as light or gravitational waves. Even the energy loss of binary pulsars in the form of gravitational waves has been observed indirectly, and it agrees with the theoretical prediction. None of these processes is trivial and energy is conserved in all cases. But what about energy on a truly universal scale? How does that work? On scales larger than the biggest galactic clusters, the universe has been observed to be very close to homogeneous and isotropic.
Furthermore, 3 dimensional space is flat on average as far as we can tell, and it is expanding uniformly. Spacetime curvature and gravitational energy on these large scales come purely from the expansion of space as a function of time. The metric for this universe is

$ds^2 = a(t)^2 ds_3^2 - c^2 dt^2$

$ds_3^2 = dx^2 + dy^2 + dz^2$

where $a(t)$ is an expansion factor that increases with time (for full details see http://en.wikipedia.org/wiki/Friedmann_equations).

In a previous post I gave the equation for the Noether current in terms of the fields and an auxiliary vector field that specifies the time translation diffeomorphisms. The Noether current has a term called the Komar superpotential, but for the standard cosmology this is zero. The remaining terms in the zero component of the current density come from the matter fields and the spacetime curvature and are given by

$J^0 = \rho + \frac{\Gamma}{a^4} + \frac{\Lambda c^2}{\kappa} - \frac{3 \dot{a}^2}{\kappa a^2}$

The first term is the mass-energy from cold matter (including dark matter) at density $\rho$. The second term is the energy density from radiation. The third term is the dark energy density and the last term is the energy in the gravitational field. Notice that the gravitational energy is negative. By the field equations we know that the total value of the energy will be zero. This equation is in fact one of the Friedmann equations used in standard cosmology. If you prefer to think of total energy in an expanding region of spacetime rather than energy density, you should multiply each term of the equation by a volume factor $a^3$.

It should now be clear how energy manages to be conserved in cosmology on large scales, even with a cosmological constant. The dark energy in an expanding region increases with the volume of the region that contains it, but at the same time the expansion of space accelerates exponentially, so that the negative contribution from the gravitational field also increases in magnitude rapidly. The total value of energy in an expanding region remains zero, and therefore constant. This is not a trivial result because it is equivalent to the Friedmann equation that captures the dynamics of the expanding universe.

So there you have it; the cosmological energy conservation equation that everybody has been asking about is just this:

$E = M c^2 + \frac{\Gamma}{a} + \frac{\Lambda c^2}{\kappa} a^3 - \frac{3}{\kappa}\dot{a}^2 a = 0$

It is not very complicated or mysterious, and it's not trivial because it describes gravitational dynamics on the scale of the observable universe. In this equation

• $a(t)$ is the universal expansion factor as a function of time, normalised to 1 at the current epoch.
• $M$ is the total mass in the expanding volume $V = a(t)^3$.
• $\Gamma$ is the cosmic radiation energy density fixed at the current epoch.
• $\Lambda$ is the cosmological constant.
• $\kappa$ is a gravitational coupling constant.

Energy Is Conserved (the history)

August 11, 2010

We have been discussing the law of conservation of energy in the context of classical general relativity. So far I have not been able to convince anyone here that the maths shows that energy is conserved. Luboš Motl and Matti Pitkanen have posted some contrary arguments on their blogs to add to the old one by Sean Carroll. We have also been trading points and counterpoints in the comments, with Ervin Goldfain joining in, also in disagreement with me. To avoid going over the same arguments repeatedly we have agreed to disagree, for now.
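As an aside before the history: the balance equation above lends itself to a quick numerical check. The sketch below is my own illustration (not part of the original argument), using arbitrary parameter values and units where $c = \kappa = 1$; the constants M, Gamma and Lam stand in for the $Mc^2$, $\Gamma$ and $\Lambda$ of the equation.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in units where c = kappa = 1.  The values are
# arbitrary, chosen only to demonstrate the bookkeeping, not to match
# the real universe.
M, Gamma, Lam = 1.0, 0.5, 0.1   # matter, radiation and dark energy constants

def total_energy(a, adot):
    """E = M c^2 + Gamma/a + (Lambda c^2/kappa) a^3 - (3/kappa) adot^2 a."""
    return M + Gamma / a + Lam * a**3 - 3.0 * adot**2 * a

def rhs(t, y):
    """Evolve (a, adot).  The acceleration equation used here follows from
    differentiating the constraint E = 0 with respect to time, which for this
    matter content is equivalent to the second Friedmann equation."""
    a, adot = y
    addot = (-Gamma / a**2 + 3.0 * Lam * a**2 - 3.0 * adot**2) / (6.0 * a)
    return [adot, addot]

# Start on the constraint surface: a(0) = 1 and adot(0) solved from E = 0.
a0 = 1.0
adot0 = np.sqrt((M + Gamma / a0 + Lam * a0**3) / (3.0 * a0))

sol = solve_ivp(rhs, (0.0, 20.0), [a0, adot0], dense_output=True,
                rtol=1e-10, atol=1e-12)

for t in np.linspace(0.0, 20.0, 5):
    a, adot = sol.sol(t)
    print(f"t = {t:5.1f}   a = {a:10.4f}   E = {total_energy(a, adot):+.2e}")

Run as written, the printed value of $E$ should stay at zero to within the integrator's tolerance while $a$ grows by orders of magnitude, which is just the statement that the sum of matter, radiation, dark energy and (negative) gravitational energy remains constant as the universe expands.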
If you think such a discussion about energy in physics seems off the wall, think again. This subject and related issues concerning gravitational waves have occupied physicists for years. Some well-known names in the world of science have exchanged some heated words and still not everyone agrees on the outcome. But it is too soon to end our debate. There are still a few more points I want to make.

It was said that my claim in favour of energy conservation means that I am "convinced that all relativists are wrong". This is not the case. Historically many relativists have been on my side. This is actually a debate that began as soon as general relativity was formulated by Einstein. Einstein in fact developed the first complete formulation of energy conservation in GR, but Hilbert objected. The argument has raged ever since, with as many different views on the subject as there have been relativists and cosmologists. Amongst those who have accepted the law of energy conservation and produced their own formulations are Dirac, Landau, Wald, Weinberg and of course Einstein himself, so to say I am contradicting all relativists is far from true.

It has also been said that all the textbooks show that energy is not conserved in general relativity, except in special cases. This is also not true. Most GR textbooks do not tackle the general formulation of energy conservation in GR. They just deal with special cases such as a static background gravitational field with a Killing vector. This does not mean that energy conservation only works in special cases, as some people claim. The textbooks just don't cover the general case. Some textbooks do cover it, but by using pseudotensor methods (e.g. Dirac, Weinberg, Landau & Lifshitz). A few textbooks do suggest that energy is not conserved, e.g. Peebles, but these are the minority.

I am going to recount some of the history of the debate. To keep it orderly I'll give it as a timeline of events with my own contribution immodestly tacked on the end. We start in 1915 with conservation of energy a well established concept, recently unified with the conservation of mass by Einstein. The world is at war and Einstein is about to publish his general theory.

July 1915: Einstein lectures on his incomplete theory of general relativity to Hilbert, Klein and possibly Noether at Göttingen, convincing them that his ideas are important.

October 1915: Albert Einstein publishes a tentative equation for general relativity, $R_{ab} = T_{ab}$, with $R_{ab}$ being the Ricci curvature tensor and $T_{ab}$ being the covariant generalisation of the energy-momentum tensor.

November 1915: Einstein realises that his previous equation cannot be right because the divergence of the energy-momentum tensor must be zero, as required by local energy conservation, while the divergence of the Ricci tensor is not. To correct it he writes the new equation $R_{ab} - \frac{1}{2} R g_{ab} = T_{ab}$. These are the Einstein Field Equations, which work because the left hand side has zero divergence due to the Bianchi identities.

November 1915: David Hilbert publishes a calculation showing how the Einstein Field Equations can be derived from a least action principle. In fact his work is dated prior to Einstein's, but they had been in communication, and it is reasonable to give the priority for the equations to Einstein and for the action formulation to Hilbert.
1916: Einstein publishes a full formulation of energy conservation in general relativity in which a pseudotensor quantity is added to the energy-momentum tensor and another superpotential term to give a conserved energy current. 1916: Einstein predicts the existence of gravitational waves which will carry away energy and momentum from orbiting stars. He derives the quadrupole radiation formula to quantify the rate at which energy is dispersed. July 1917: Oscar Klein points out (with help from Noether) that conservation of Hilbert’s energy vector is an identity that does not require the field equation. 1917: In response to Klein, Hilbert publishes an article questioning the validity of energy conservation in general relativity. He says that the energy equations do not exist at all and this is a general characteristic of the theory. 1917: Writing to Klein, Hilbert says that general relativity has only improper energy theorems. By this he means that the pseudotensor methods are not covariant. 1917: Klein writes to Einstein making the claim that energy conservation in general relativity is an identity. This is based on Hilbert’s energy vector. 1917: To construct a static cosmological model Einstein introduced the cosmological constant as an extra term in his field equations. March 1918: Einstein writes back to Klein explaining that in his formulation of energy conservation the divergence of the current is not an identity because it requires the field equations. July 1918: Emmy Noether publishes two theorems on symmetry in physics. The first showed that symmetry in any theory derived from an action principle implies a conservation law. In particular, energy conservation is implied by time invariance. The second shows that in the case of gauge theories with local symmetry such as general relativity, there are divergence identities such as the Bianchi 1918: Felix Klein uses Noether’s theorems to derive a third boundary theorem to show why the conservation law of energy in general relativity must take a particular form that he considers to make it an identity. 1918: Einstein comments on the power and generality of Noether’s theorems but does not accept the conclusion that energy conservation is an identity. 1919: Arthur Eddington measures the deflection of starlight by the Sun during a solar eclipse. The observation confirms the prediction of general relativity and provides massive press publicity for the theory. 1922: Arthur Eddington expresses skepticism about the existence of gravitational waves saying that they “travel at the speed of thought”. 1922: Friedman finds cosmological solutions of general relativity that describe an expanding universe. 1927: Lemaitre predicts an expanding universe 1929: Edwin Hubble observes the expanding universe in galactic redshifts. This led to Einstein dropping his cosmological constant. 1936: After working on exact solutions for gravitational waves with Rosen, Einstein concludes that gravitational waves can not exist, reversing his 1916 prediction. This sparked a vigorous twenty year debate over the reality of gravitational waves. 1936: After working with Robertson, Einstein eventually concedes that gravitational waves do exist. Rosen who had departed for the Soviet Union did not accept this concession. He never changed his mind even as late as 1970. 1951: Landau and Lifshitz publish “The Classical Theory of Fields” as part of a series of textbooks on theoretical physics. It deals with energy and momentum in general relativity using a symmetric pseudotensor. 
The symmetry means that they can also show conservation of angular momentum using the same structure.

1955: Rosen computes the energy in exact gravitational wave solutions using pseudotensors and finds the result to be zero. He presents this as evidence that gravitational waves are not real.

1957: Hermann Bondi introduces a formalism now known as the Bondi energy to study energy in general relativity, and gravitational waves in particular. This work was very influential and formed a turning point in the understanding of gravitational waves and energy in general relativity.

1957: Weber and Wheeler find a gravitational wave solution that does transmit energy.

1957: Richard Feynman describes the sticky-bead thought experiment to show that gravitational waves are real. The idea was popularised by Hermann Bondi and finally led to the general acceptance of the reality of gravitational waves.

1959: Andrzej Trautman gives a formulation of energy conservation for the special case where a static background is given by the existence of a Killing vector field.

1959: Komar defines a superpotential for general cases whose divergence vanishes as an identity. The superpotential uses an auxiliary vector field similar to the Killing vector field in Trautman's theory, but the Komar field does not need to satisfy any special conditions, so the solution is more general. The Komar potential has the advantage over pseudotensor methods that it is expressed in a covariant form. However, the zero divergence of the superpotential is an identity.

1961: Arnowitt, Deser and Misner formulate the ADM mass/energy for systems in asymptotically flat spacetimes.

1961: In his book "Geometrodynamics" Archibald Wheeler says that energy conservation in a closed universe reduces to a trivial 0 = 0 equation.

1964: Weber begins experiments to try to detect gravitational waves.

1972: Steven Weinberg uses a pseudotensor method to show energy conservation in his textbook "Gravitation and Cosmology".

1974: The discovery of the Hulse-Taylor binary pulsar shows that gravitational energy is radiated as originally predicted by Einstein.

1975: In his concise introductory text on general relativity Dirac derives a pseudotensor using Noether's theorem to prove energy and momentum conservation.

1979: Schoen and Yau prove the positive energy theorem for the ADM energy. (A simpler proof was given by Witten in 1981.)

1993: Phillip Peebles in his book "Principles of Physical Cosmology" claims that energy conservation is violated for the cosmic microwave background.

1997: Philip Gibbs shows that if Noether's theorem is generalised to include second derivatives of the fields and is applied to the symmetries generated by a vector field, a covariant conserved current can be derived. The current, which has an explicit dependence on the vector field, is equal to a term that is zero when the Einstein Field Equations are satisfied, plus the Komar superpotential.

1998: Observational evidence (Riess, Perlmutter) leads to the reintroduction of the cosmological constant, now called dark energy.

For further historical details and references on the Klein-Hilbert-Einstein-Noether debate see "A note on General Relativity, Energy Conservation and Noether's Theorems" by Katherine Brading in "The Universe of General Relativity", ed. A.J. Kox and J. Eisenstaedt.
A good read on the history of gravitational waves is "Traveling at the Speed of Thought" by Daniel Kennefick.

Energy Is Conserved (the maths)

August 8, 2010

Judging by the comments on the previous article I have not yet succeeded in convincing anyone that energy is conserved. Luboš Motl has posted a response contradicting my viewpoint and agreeing with an older blog post by Sean Carroll. Fortunately I have an advantage: I've done the maths and the outcome is clear and unambiguous. Since my detractors are people who understand equations I should have no trouble convincing them if I take a more technical approach. So, no more analogies; let's start with Einstein's equation,

$G_{ab} + \Lambda g_{ab} = \kappa T_{ab}$

where $G_{ab}$ is the Einstein tensor given by

$G_{ab} = R_{ab} - \frac{1}{2} R g_{ab}$

$R_{ab}$ is the Ricci curvature tensor, $g_{ab}$ is the metric tensor, $\Lambda$ is the cosmological constant, $\kappa$ is a gravitational coupling constant and $T_{ab}$ is the energy-stress tensor for matter.

A time translation is generated by a time-like vector field $\xi^a$ with a small parameter $\epsilon$,

$\Delta x^a = \epsilon \xi^a$

All the tensor quantities have corresponding transformation rules and the field equations are covariant under the transformation. Using Noether's theorem a formula for the corresponding conserved energy current can be derived. The details of this calculation are quite lengthy and can be found in arXiv:gr-qc/9701028 (without the cosmological constant, which is easy to add in). The result obtained is

$J^a = J_M^a + J_{DE}^a + J_G^a$

where $J_M^a$ is the energy current from the matter contribution given by

$J_M^a = \xi^b T_{cb} g^{ca}$

$J_{DE}^a$ is the dark energy component given by

$J_{DE}^a = -\frac{1}{\kappa}\Lambda \xi^a$

and $J_{G}^a$ is the gravitational contribution given by

$J_G^a = -\frac{1}{\kappa}\xi^b G_{cb} g^{ca} + K^a$

$K^a$ is the Komar superpotential given by

$K^a = \frac{1}{2\kappa}(\xi^{b;a} - \xi^{a;b})_{;b}$

Given the Einstein field equations we can eliminate the matter term, the dark energy term and the first part of the gravitational term to leave just the Komar superpotential,

$J^a = K^a$

Then it is easy to see that

${J^a}_{;a} = {K^a}_{;a} = 0$

Since the current is divergenceless it defines a conserved energy. Some people claim that this result is trivial. Clearly it is not, because it requires the gravitational field equations to prove conservation of energy. You should not make the mistake of defining energy as just the Komar superpotential, even though it is equal to that when the dynamics are taken into account. Energy must be defined as a sum over contributions from each field, including the gravitational field and dark energy. The energy contribution from matter, $J_M^a = \xi^b T_{cb} g^{ca}$, is a sum of contributions from each type of matter field. It includes all the non-gravitational forms of energy, including heat, electrical energy, the rest mass equivalent of energy, radiation etc. This form of the law of energy conservation in general relativity tells us that the energy from all these contributions, plus the gravitational contribution and the dark energy contribution, is conserved. In other words energy can be transformed from one form to another but is never created or destroyed. There is nothing approximate, trivial or ambiguous about this result. It is energy conservation in the same old form that we have always known it, but with the contribution from gravity included.
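For readers who want the last step spelled out, here is one way to see that the divergence of the Komar superpotential vanishes identically (a standard manipulation, stated here as an illustration; sign conventions for the Riemann tensor may differ but do not affect the conclusion). Write $F^{ab} = \xi^{b;a} - \xi^{a;b}$, which is antisymmetric, so that $K^a = \frac{1}{2\kappa} F^{ab}{}_{;b}$. Then

$K^a{}_{;a} = \frac{1}{2\kappa} \nabla_a \nabla_b F^{ab} = \frac{1}{4\kappa} [\nabla_a, \nabla_b] F^{ab} = \frac{1}{4\kappa}\left( R_{cb} F^{cb} - R_{ca} F^{ac} \right) = 0$

because the symmetric part of $\nabla_a \nabla_b$ drops out against the antisymmetry of $F^{ab}$, and each remaining term contracts the symmetric Ricci tensor with the antisymmetric $F^{ab}$. The identity therefore lives entirely in $K^a$; the physical content is that the full current $J^a$, with its matter, dark energy and curvature pieces, only reduces to $K^a$ when the field equations hold, which is exactly the point made above.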
Energy Is Conserved

August 6, 2010

The first law of thermodynamics states that energy is conserved. It is one of the most fundamental laws of physics and not one that you would expect many physicists to challenge, so it comes as a surprise to find that a growing number of cosmologists and relativists are doing just that. Of course any law of physics is subject to experimental verification, and as new realms of observation are opened up we should require that previous assumptions, including conservation of energy, are checked. But the subject under question is not new physics in this sense. It is the classical theory of general relativity. Whether general relativity is correct is not the issue, although it has withstood all experimental tests so far. The question concerns whether energy is conserved in the classical theory of general relativity, with or without cosmological constant, as given by Einstein nearly 100 years ago. This is a purely mathematical question.

It has indeed been said that too much ink has been spilt on this subject already, but the fact is that the wrong conclusions are still drawn. It does not matter how well-respected the cosmologists are or how many people have read their textbooks; the fact is that they are wrong. Energy is conserved in general relativity. There are no ifs or buts. The mathematics is clear and the errors in the thinking of those who think it is not conserved can also be traced. It is time to put the record straight.

Not all the cosmologists are so bold as to state directly that energy is not conserved, but some are. Here are some examples of the kind of things they do say:

"there is not a general global energy conservation law in general relativity theory" – Phillip Peebles in Principles of Physical Cosmology

"In special cases, yes. In general — it depends on what you mean by 'energy', and what you mean by 'conserved'." – John Baez and Michael Weiss in the Physics FAQ

"The energy conservation law is an identity in general relativity" – Felix Klein

"the local conservation laws, integrated over a closed space [..] produce nothing of interest, only the trivial identity 0 = 0" – John Wheeler in Geometrodynamics

"a local energy density is well-defined in GR only for spacetimes that admit a timelike Killing vector" – Steve Carlip on sci.physics.research

"Energy is Not Conserved" – Sean Carroll on Cosmic Variance

"Energy is not conserved in cosmology. As is always the case with confusing stuff in cosmology, this is covered well in Edward Harrison's COSMOLOGY textbook." – Phillip Helbig on usenet

Many other statements have been made to the effect that conservation of energy in general relativity is only approximate, quasi-local, trivial, non-covariant, ambiguous or only valid in special cases. They are all wrong. Energy is conserved in general relativity.

Discussions about conservation of energy in cosmology often arise when people write about the redshift of the cosmic background radiation. Individual photons are not created or destroyed as they travel across space, so if they are redshifted they are losing energy. Where does it go? The answer is that it goes into the background gravitational field. The presence of the CMB slightly affects the rate at which the universe is expanding, so there should be an energy term for the expansion rate. This is what happens for particles moving in other types of background such as electric and magnetic fields, so it should work for gravity too. Since the discovery of "dark energy" the level of confusion has become worse.
People visualise dark energy as a constant density of energy that pervades space. If space expands then there should be more of it, so where does the energy come from? The answer is the same as for radiation. The dark energy, or cosmological constant as it used to be known, affects the expansion rate of the universe. The gravitational component of energy has a contribution from this expansion, and the rate changes to counteract the amount of dark energy being added so that total energy is constant.

Noether's Theorem

To make the case for energy conservation in general relativity sound, we need a valid mathematical formula for it in terms of the gravitational field (the metric tensor) and the matter fields. This problem was initially tackled as soon as general relativity was proposed by Einstein. The mathematician Emmy Noether was asked to look at the problem and she solved it eloquently by stating her theorem relating symmetry to conservation laws. Although the theorem is well-known to physicists it is not often appreciated that it was formulated to tackle this specific problem.

Noether's theorem tells us that if a physical law derived from an action principle is invariant under time translations, then it has an energy conservation law. In fact the theorem provides a formula to derive an energy current whose divergence is zero. Such a current can always be integrated over a region of space to provide a total energy whose rate of change is equal to the flux of the current from the surface bounding the region. This is exactly what we mean by conservation of energy.

For example, if we take Maxwell's equations in special relativity such invariance applies and we can derive a formula for the energy current. Of course in special relativity time is not absolute and there are different concepts of time dependent on an observer's velocity. This means that we actually get an infinite number of energy conservation laws, one for each possible velocity. Conveniently this boils down to a single energy-momentum tensor that gives the energy current for any choice of the time coordinate in a reference frame. The same tensor can be used to provide momentum and angular momentum conservation laws. It is all very intuitive and nice!

Energy-momentum pseudotensors

What about the case of general relativity? Invariance under time translation still holds and general relativity is derived from the Hilbert action principle, so Noether's theorem can be applied to the gravitational field along with any matter fields to give a total conserved energy current, but there is a technical hitch. The Hilbert action includes second derivatives of the metric tensor as well as the first, and Noether's theorem only deals with the case where there are first derivatives. The usual solution applied in the early days of relativity was to modify the Hilbert action in a way that removed the terms containing the second derivatives without affecting the dynamics of the Einstein equations derived from it. Noether's theorem could then be applied. The only snag was that the procedure could not be made gauge invariant, so the energy-momentum quantities derived did not form a covariant tensor as they did for special relativity. Sometimes they are called the energy-momentum pseudotensor. The solution works but some people just don't like it. They complain that the pseudo-tensor can be made zero at any point in spacetime, for example. It is not really a problem, but people did not expect it so they complain about it.
The source of the problem (which is not really a problem) can be traced to the fact that the spacetime symmetry group in general relativity is bigger than it is in special relativity. Instead of just a choice of time coordinate for each velocity of an inertial reference frame, you have one for any choice of motion, whether inertial or not. This gives a much larger set of conservation laws, and with the extra choice you can always make the energy and momentum of the field zero at any given event in space and time. The choice of time coordinate can be associated with a contravariant vector field that generates the time translation. We should expect the formula for our energy from Noether's theorem to have a dependency on this field. Trying to express it as a tensor is not really appropriate and that is what causes the confusion.

Modern Covariant Solution

It turns out that there is a more general version of Noether's theorem that can be used even when the action includes terms with second derivatives. This provides a more modern approach to the derivation of an energy current that has a dependency on the time translation vector field. Since it does not require any manipulations of the action, the result is a covariant local expression. I am avoiding formulae here but you can look up the answer in arXiv:gr-qc/9701028. This paper does not take into account the cosmological constant but that is not a problem. The conditions for Noether's theorem still apply with the cosmological constant term in place and the derivation of this more general case is a straightforward exercise left for the reader.

So the outcome is that there is a local covariant expression for the energy current in general relativity after all. This is exactly the thing that many cosmologists claim does not exist, but it does, and energy conservation holds perfectly with no caveats. To finish off let's take a look at some of the specific things that cosmologists and relativists have been saying and debunk them one by one in the light of the solution we now understand.

Energy Conservation in general relativity is approximate NOT

It is sometimes claimed that energy conservation in general relativity is only approximate. On further examination of what is meant, we find that the person who thinks this only knows of (or only accepts as valid) the extension of the covariant energy-momentum tensor from special relativity to the general theory. This tensor includes only contributions from the matter fields and not the gravitational field. Its covariant divergence is zero just as required for a conserved current vector, but unfortunately it is a symmetric tensor and you cannot integrate a divergenceless symmetric tensor to get a conserved quantity in curved spacetime. That only works for vectors and anti-symmetric tensors. Because of this people say that the conservation is only approximate.

It should be clear now where the error in this argument lies. The energy-momentum tensor does not include contributions from the gravitational field, and energy conservation cannot be formulated without it. Of course your energy conservation law is only going to be approximate if you neglect one of the fields that has energy. The correction is to include the gravitational field either by using the pseudotensor method or by using the more modern derivation of the current as a function of the time translation vector field.
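(To spell out the point about vectors versus symmetric tensors, here is a standard identity, added editorially rather than quoted from the post. For a vector current $J^a$,

$\nabla_a J^a = \frac{1}{\sqrt{-g}}\,\partial_a\!\left(\sqrt{-g}\, J^a\right),$

so $\nabla_a J^a = 0$ is an honest continuity equation for the density $\sqrt{-g}\,J^0$ and integrates to a conserved total. For the symmetric matter tensor, however,

$\nabla_a T^{ab} = \frac{1}{\sqrt{-g}}\,\partial_a\!\left(\sqrt{-g}\, T^{ab}\right) + \Gamma^b_{ac} T^{ac},$

so $\nabla_a T^{ab} = 0$ only says that $\partial_a(\sqrt{-g}\,T^{ab}) = -\sqrt{-g}\,\Gamma^b_{ac} T^{ac}$, which is not a pure divergence and does not by itself produce a conserved integral. That leftover Christoffel term is the exchange of energy with the gravitational field that, as the text above argues, must be accounted for.)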
Energy conservation only works in special cases in general relativity NOT

The cause of this false claim is once again the use of the energy-momentum tensor. For some special cases the gravitational field has a Killing vector that indicates that it is static in some specific reference frame. If you contract this Killing vector with the energy-momentum tensor you get an expression for an energy current that is conserved. That's very nice but nothing unusual. It is normal that you can get a conserved energy in a fixed background field which is static. The same happens for other fields such as the electromagnetic field. If the energy in the background field is not changing then the energy in the rest of the system can be conserved too without adding the energy from the background field. Just because energy conservation is a bit simpler in special cases does not mean that it does not work in more general cases, which it does of course.

Another special case often cited is an asymptotically flat spacetime. You can work out the total energy and momentum and it takes the form of a familiar energy-momentum four vector in the asymptotic limit. Very nice, but again just a special case, while the general case also works perfectly well.

Energy conservation in general relativity is trivial NOT

This particular version of the energy conservation "problem" in general relativity goes back to the early days when Noether, Einstein, Klein, Hilbert and others were investigating it. Klein claimed that the conservation law that Noether's theorem gave was an identity, so there was no real physical content to the law. This claim has been echoed many times since, for example when Wheeler claimed that the law reduces to the trivial result 0 = 0 for closed spacetimes.

In addition to her well-known theorem, Noether had a second theorem that elaborated on what happens when there is a local gauge symmetry rather than just a global symmetry. In this case you can derive Bianchi-type identities that provide formulae for currents that are conserved kinematically, even if the equations of motion are not. You can say that such a current is trivially conserved. The formula for the energy current derived from Noether's theorem is not such a quantity, but it is the sum of two parts, one of which is trivially conserved and the other of which is always zero when the field equations apply. For some people this is enough to make the claim that energy conservation is trivial in general relativity.

That this makes no sense is easily seen by considering any other gauge field and its conserved charges. For example, electromagnetism is a gauge theory that conserves electric charge. Because of Noether's second theorem the expression for the electric current can be written as the sum of a term depending only on the electromagnetic potential whose divergence is explicitly zero, plus a term which is obviously zero when Maxwell's equations hold. This is exactly analogous to the case of energy conservation in general relativity. Nobody claims that this makes charge conservation trivial in the classical theory of electromagnetism, so they should not make such a claim for energy conservation in general relativity.

I have debunked some of the major claims about energy conservation in general relativity that people use to justify the idea that there is something wrong with it. There are others but they are all just as shallow and easy to deal with.
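(A footnote to the "special cases" paragraph above, a standard one-line computation added by way of illustration: if $\xi^a$ is a Killing vector, so that $\nabla_{(a}\xi_{b)} = 0$, then the contracted current $J^a = T^{ab}\xi_b$ satisfies

$\nabla_a\!\left(T^{ab}\xi_b\right) = \left(\nabla_a T^{ab}\right)\xi_b + T^{ab}\,\nabla_a \xi_b = 0,$

the first term vanishing by the matter equations of motion and the second because $T^{ab}$ is symmetric while $\nabla_a \xi_b$ is antisymmetric for a Killing field. This is the sense in which a static background hands you a conserved energy for free; the point made above is that the general case needs the gravitational contribution instead.)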
If you come across anyone making such claims, please just refer them to here and hopefully we can put an end to this nonsense.
{"url":"http://blog.vixra.org/category/energy-conservation/","timestamp":"2014-04-18T18:21:54Z","content_type":null,"content_length":"177336","record_id":"<urn:uuid:67ce1ff5-213c-486e-b358-8a4ddde397b6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: 3p^2+12p-2=0 how do you solve this problem? I need all the steps please and thank you.

Reply: You have to use the Quadratic Formula here, it's not solvable by factoring: \[x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}\]

Reply: To be clear, that's for the standard form ax^2 + bx + c = 0, with a = 3, b = 12, c = -2. See how that works? Yes, you will get two answers, as you should (because of the x\(^2\)).

Reply: If what is under the radical/square-root above is negative you'd have two imaginary roots; thankfully you won't here. But you will have square-roots (or fractional powers if you prefer) in your final answers.

Asker: is it \[x=\frac{12\pm \sqrt{120}}{6}\]
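A quick numerical check of the thread's method (this snippet and the note after it are an editorial addition, not part of the original discussion):

```python
import math

# 3p^2 + 12p - 2 = 0 in standard form a*p^2 + b*p + c = 0
a, b, c = 3, 12, -2
disc = b * b - 4 * a * c                  # discriminant: 144 + 24 = 168
p1 = (-b + math.sqrt(disc)) / (2 * a)
p2 = (-b - math.sqrt(disc)) / (2 * a)
print(p1, p2)                             # approximately 0.160 and -4.160
```

Note that the discriminant is 168 rather than 120, and the formula uses -b, so the exact roots are \(p = \frac{-12 \pm \sqrt{168}}{6} = -2 \pm \frac{\sqrt{42}}{3}\).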
{"url":"http://openstudy.com/updates/5005bd29e4b062418066f3fc","timestamp":"2014-04-17T01:17:15Z","content_type":null,"content_length":"39727","record_id":"<urn:uuid:16a7985b-7d41-49d1-9cf5-a6aa8445efa6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: -graph bar, over-: problem with string vars and quotes

From: Friedrich Huebler <huebler@rocketmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: -graph bar, over-: problem with string vars and quotes
Date: Fri, 9 Feb 2007 10:35:45 -0800 (PST)

Many thanks for the suggested solutions for my problem with broken labels. The -twoway bar- approach is interesting because the underlying data doesn't have to be modified. Nick's code also doesn't appear to suffer from an additional problem that I encountered. Substituting single quotes for all double quotes requires only one additional line of code. However, the labels are cut off almost completely (at least in Stata 8.2) if the graph has fewer than 20 bars, as you can see by running the following commands.

sysuse auto, clear
sort make
replace make = `""AMC Concord""' if make == "AMC Concord"
replace make = `""AMC" Pacer"' if make == "AMC Pacer"
replace make = `"AMC "Spirit""' if make == "AMC Spirit"
replace make = subinstr(make,`"""',"''",.)
graph hbar mpg if _n<20, over(make)
graph hbar mpg if _n<21, over(make)
{"url":"http://www.stata.com/statalist/archive/2007-02/msg00286.html","timestamp":"2014-04-19T07:11:14Z","content_type":null,"content_length":"7194","record_id":"<urn:uuid:80349189-f97e-4593-a701-65328cbe78de>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
In what generality is the natural map $\operatorname{Hom}_R(L,M)\otimes S\to\operatorname{Hom}_{R\otimes S}(L\otimes S,M\otimes S)$ an isomorphism?

Let $k$ be a commutative ring, $R$ and $S$ commutative $k$-algebras. Let $L$ and $M$ be $R$-modules. Consider the natural map $$\operatorname{Hom}_R(L,M)\otimes_k S \to \operatorname{Hom}_{R \otimes_k S}(L\otimes_k S, M \otimes_k S).$$ In what generality is this map an isomorphism?

Note: This question is reposted from math.SE. The partial answer here appears to show that it suffices to assume $S$ is free as a $k$-module, and either $S$ is finite over $k$ or $L$ is finitely generated as an $R$-module. It also seems to state that if $S$ is free but not finite over $k$, and $L$ is free but not finite over $R$, and $M$ is "reasonable" (e.g., $M=R$), then the morphism fails to be an isomorphism. But it is unsatisfying that the entire analysis (for providing both counterexamples and hypotheses) relies on the assumption that $S$ is free over $k$.

Edit: The following reduction is suggested by a-fortiori: the term on the left is equal to $$\operatorname{Hom}_R(L,M) \otimes_R (R \otimes_k S),$$ and the term on the right is equal to $$\operatorname{Hom}_{R \otimes_k S}(L \otimes_R (R \otimes_k S), M \otimes_R (R \otimes_k S)).$$ Thus, writing $T = R \otimes_k S$, we find that the morphism in question is $$\operatorname{Hom}_R(L, M) \otimes_R T \to \operatorname{Hom}_T(L \otimes_R T, M \otimes_R T).$$ Replacing $T$ by $S$, we see that we have reduced the original question to the case $k = R$, and consequently, $S = R \otimes_k S$. (I found a-fortiori's explanation overly succinct, but I think I've overcompensated.)

Comments:

It is true if $L$ is finitely presented as $R$-module (I don't think finitely generated is enough) and $S$ is flat as $R$-module. – Torsten Ekedahl Aug 3 '11 at 16:42

replacing $k$ and $S$ by $R$ and $R\otimes_k S$, respectively, we may assume $k=R$ – user2035 Aug 3 '11 at 17:10

Dear Charles, after the reduction to the case $k=R$ suggested by a-fortiori, Proposition 7 in Bourbaki's Algèbre II.5.3 gives some more information. Namely, the canonical morphism in question is a monomorphism if $S$ is projective as an $R$-module, and it is an isomorphism if one of the $R$-modules $S$ and $L$ is projective and finitely generated. – Fred Rohrer Aug 4 '11 at 3:28

After reducing to the case suggested by a-fortiori, you can find a proof for Torsten's remark in Matsumura's Commutative Ring Theory, on page 52, Theorem 7.11. – Mahdi Majidi-Zolbanin Aug 4 '11 at

The Bourbaki condition also happens to be the same one used to derive the trace map in general. – Harry Gindi Feb 9 '12 at 12:00

1 Answer

Here is just a sanity check: we may as well work in the local case. Suppose $R=k$ local and $S=R/m$, where $m$ is the maximal ideal of $R$. I will also assume $L,M$ finitely generated. Then the LHS is $S^{\mu(Hom_R(L,M))}$ while the RHS is $S^{\mu(L)\mu(M)}$. So if your map is an isomorphism, one must have: $${\mu(Hom_R(L,M))} = {\mu(L)\mu(M)} \ \ \ (*)$$ Here $\mu(L)$ is the number of generators of $L$. This rarely happens unless $L$ is free. If $L$ is not, even freeness of $M$ is not enough. For example, if $M=R$ and $ann_R(L)$ contains a non-zerodivisor on $R$ (e.g., if $R$ is a domain and $L$ any torsion module), then the LHS of $(*)$ is $0$, while the RHS is $\mu(L)$.

In summary, together with the comments: if $R\otimes_kS$ is not flat over $R$, then I think $L$ must be projective for this to be true in any reasonable generality. Maybe your situation is more specific; if so, can you tell us what you want to be true?

EDIT: in fact, the above analysis suggests the following class of counterexamples: Let $k=R$, $L = R/(x)$ where $x$ is $R$-regular and $M=R$. Then the LHS of your original map is $0$, while the RHS is $Hom_S(S/(x), S) \cong 0:_{S} x$. If $x$ is not $S$-regular then the RHS is not $0$.

Comments:

I really think that this answer, together with the comments, more or less gives me what I'm looking for--an idea of what (reasonably general) hypotheses make it work ($S$ flat over $k$ and $L$ finitely presented), together with an analysis of why you won't get good results without these hypotheses (in particular, if $S$ is not flat over $k$--a case which has not been addressed until now). – Charles Staats Aug 4 '11 at 15:41

Incidentally, "$S$ flat over $k$ and $L$ finitely presented" does cover the case that initially inspired the question. – Charles Staats Aug 4 '11 at 15:41
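A concrete instance of the counterexample class in the edit above, added for illustration rather than taken from the thread: take $R = k = \mathbb{Z}$, $L = \mathbb{Z}/2$, $M = \mathbb{Z}$, $S = \mathbb{Z}/4$, so $x = 2$ is $R$-regular but not $S$-regular. Then
$$\operatorname{Hom}_{\mathbb{Z}}(\mathbb{Z}/2,\mathbb{Z}) \otimes_{\mathbb{Z}} \mathbb{Z}/4 = 0, \qquad \operatorname{Hom}_{\mathbb{Z}/4}\big(\mathbb{Z}/2 \otimes_{\mathbb{Z}} \mathbb{Z}/4,\ \mathbb{Z}/4\big) \cong \operatorname{Hom}_{\mathbb{Z}/4}(\mathbb{Z}/2, \mathbb{Z}/4) \cong (0 :_{\mathbb{Z}/4} 2) \cong \mathbb{Z}/2 \neq 0,$$
so the natural map cannot be an isomorphism in this case.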
{"url":"http://mathoverflow.net/questions/72004/in-what-generality-is-the-natural-map-operatornamehom-rl-m-otimes-s-to-ope?sort=votes","timestamp":"2014-04-20T06:35:58Z","content_type":null,"content_length":"62748","record_id":"<urn:uuid:30617eb9-e594-426c-a3e1-73eebf957bb9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Browsing DSP Publications by Title

Now showing items 214-233 of 508

• R. Neelamani, R. de Queiroz and R. G. Baraniuk, "Lattice Algorithms for Compression Color Space Estimation in JPEG Images," in International Workshop on Combinatorial Image Analysis,
• M. A. Davenport, R. G. Baraniuk and C. D. Scott, "Learning minimum volume sets with support vector machines," in IEEE Workshop on Machine Learning for Signal Processing (MLSP), pp. 301-306.
• R. L. Claypoole, R. G. Baraniuk and R. D. Nowak, "Lifting Construction of Non-Linear Wavelet Transforms," in IEEE-SP International Symposium on Time-frequency and Time-scale Analysis, pp. 49-52.
• R. G. Baraniuk, "A Limitation of the Kernel Method for Joint Distributions of Arbitrary Variables," IEEE Signal Processing Letters, vol. 3, no. 2, pp. 51-53, 1996.
• D. Johnson, "Limits of population coding," in Computational Neuroscience Meeting,
• V. J. Ribeiro, R. H. Riedi and R. G. Baraniuk, "Locating Available Bandwidth Bottlenecks," IEEE Internet Computing, vol. 8, no. 5, pp. 34-41, 2004.
• T. Karagiannis, M. Faloutsos and R. H. Riedi, "Long-Range Dependence: Now you see it now you don't!," in Global Internet,
• R. D. Nowak and B. D. Van Veen, "Low Rank Estimation of Higher Order Statistics," IEEE Transactions on Signal Processing, 1995.
• C. S. Burrus and F. Fernandes, "M-Band Multiwavelet Systems," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),
• I. W. Selesnick, M. Lang and C. S. Burrus, "Magnitude Squared Design of Recursive Filters with the Chebyshev Norm Using a Constrained Rational Remez Algorithm," IEEE Transactions on Signal Processing, 1994.
• I. W. Selesnick, M. Lang and C. S. Burrus, "Magnitude Squared Design of Recursive Filters with the Chebyshev Norm Using a Constrained Rational Remez Algorithm," in IEEE DSP Workshop,
• G. Merchant and T. Parks, "Magnitude Weighting and Time Segmentation for Phase-Only Reconstruction of Signals," Rice University ECE Technical Report, no. 8101, 1981.
• R. G. Baraniuk, "Marginals vs. Covariance in Joint Distribution Theory," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1021-1024.
• M. Wakin and C. Rozell, "A Markov Chain Analysis of Blackjack Strategy," Rice University, 2004.
• B. D. Van Veen and R. G. Baraniuk, "Matrix Based Computation of Floating-Point Roundoff Noise," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, no. 12, pp. 1995-1998, 1989.
• S. Sarvotham, D. Baron and R. G. Baraniuk, "Measurements vs. Bits: Compressed Sensing meets Information Theory," in Allerton Conference on Communication, Control and Computing,
• R. G. Baraniuk, P. Flandrin and O. Michel, "Measuring Time-Frequency Information and Complexity using the Renyi Entropies," in IEEE International Symposium on Information Theory (ISIT), pp. 426.
• R. G. Baraniuk, P. Flandrin, A. J. E. M. Janssen and O. Michel, "Measuring Time-Frequency Information Content using the Renyi Entropies," IEEE Transactions on Information Theory, vol. 47, no. 4, pp. 1391-1409, 2001.
• M. A. Davenport, R. G. Baraniuk and C. D. Scott, "Minimax support vector machines," IEEE Workshop on Statistical Signal Processing (SSP), 2007.
• R. Neelamani, R. D. Nowak and R. G. Baraniuk, "Model-based Inverse Halftoning with Wavelet-Vaguelette Deconvolution," in IEEE International Conference on Image Processing, pp. 973-976.
{"url":"http://scholarship.rice.edu/handle/1911/21661/browse?rpp=20&order=ASC&sort_by=1&etal=-1&type=title&starts_with=K","timestamp":"2014-04-18T20:52:18Z","content_type":null,"content_length":"33104","record_id":"<urn:uuid:e96f0597-2ba9-40ca-9c47-4cd737340343>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Bladensburg, MD Precalculus Tutor

...I have been tutoring since the 11th grade, and have tutored throughout college. I have worked with elementary school children on Math and English. I have also worked with high school students on Math and Science.
40 Subjects: including precalculus, English, reading, chemistry

...I also have scored very well on standardized tests: SAT (Old): 1450; ACT: 32; GRE - Math: 167/170, Verbal: 170/170. I can easily relay the strategies needed to go through standardized testing with efficiency. I took AP Calculus in high school and scored a 5/5 on the BC exam. As a student of mechanical e...
32 Subjects: including precalculus, reading, algebra 2, calculus

...I tutored for 4 years during high school ranging from basic algebra through calculus. I tutored for 3 years during college ranging from remedial algebra through third semester calculus. I have experience in Java (and other web programming) and Microsoft Excel.
13 Subjects: including precalculus, calculus, geometry, GRE

...I have picked up some of these methods from full time Chemistry teachers, and have developed others myself. Seeing students gain confidence in doing this is one of the real pleasures of tutoring. My 30 year full time career was in physics (I still do consulting), so I especially welcome the opportunity to tutor physics.
13 Subjects: including precalculus, chemistry, calculus, physics

...If you need help, in any branch of high school math, and you're willing to work at it, I can turn you into an A student in short order. I can help with Terms of algebra, Algebraic addition, subtraction, multiplication, division, Factors and factoring, Linear equations and their solutions and Quadr...
17 Subjects: including precalculus, English, calculus, ASVAB
{"url":"http://www.purplemath.com/bladensburg_md_precalculus_tutors.php","timestamp":"2014-04-20T13:26:48Z","content_type":null,"content_length":"24455","record_id":"<urn:uuid:9f727d80-f83a-4056-bc52-46a332bb902d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: InterpolatingFunction
Posted: Oct 7, 1996 1:59 AM

I have a question about using an InterpolatingFunction obtained by NDSolve as a usual function. I posted similar questions earlier last month, and received much help to which I am very grateful.

First I ran:

Then I got:

{{c->InterpolatingFunction [...], w->InterpolatingFunction [...]}}

Next I ran, using the above result:

NDSolve[{k'[t]==0.715c[k[t]]+... /. %..%[[1]],k[0]==0.3},k,{t,0,40}]

Then I successfully (with the helpful suggestions from MathGroup) got

{{k->InterpolatingFunction [...]}}

Now there is a problem that I cannot solve. I want to look at how c[k[t]] moves as t moves from 0 to 40. I found on page 212 of the last edition of the 'Mathematica' book the following command and tried it.

k /. First[%]

which allowed me to plot k[t] by:

When I similarly tried to plot c[k[t]] by:

^this is supposed to mean c[k] obtained by c /. First[%] immediately after getting {c->InterpolatingFunction[ ]}

Then I got a very strange looking plot together with some error messages. Can anybody suggest what went wrong?

Noriaki Kinoshita
University of Cambridge
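The Mathematica specifics above are left as posted (the first NDSolve call and the Plot commands did not survive extraction). Purely as an editorial aside, here is the same workflow sketched in Python with scipy, using hypothetical stand-in right-hand sides since the poster's actual equations are not shown: solve one system to get an interpolated c, feed it into a second ODE for k, then evaluate the composition c(k(t)).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in for the first system: produces an interpolated function c (hypothetical RHS).
first = solve_ivp(lambda w, c: -0.5 * c, (0, 40), [1.0], dense_output=True)
c = lambda x: first.sol(x)[0]            # plays the role of the InterpolatingFunction c

# Second ODE uses c evaluated along k(t):  k'(t) = 0.715 * c(k(t)),  k(0) = 0.3
second = solve_ivp(lambda t, k: 0.715 * c(k), (0, 40), [0.3], dense_output=True)
k = lambda t: second.sol(t)[0]

ts = np.linspace(0, 40, 200)
composition = c(k(ts))                   # the quantity the poster wants to plot
```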
{"url":"http://mathforum.org/kb/thread.jspa?threadID=224042","timestamp":"2014-04-20T16:59:48Z","content_type":null,"content_length":"18257","record_id":"<urn:uuid:2fa0f723-1e19-4a27-9ec9-4efcbffa55bb>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Lewko's blog

The first research level math problem I seriously worked on (without much success) was the sum-free subset problem. A sum-free set (in an additive group) is a set that contains no solution to the equation $x+y=z$. The sum-free subset problem concerns the following theorem of Erdős, which is one of the early applications of the probabilistic method.

Theorem: (Erdős, 1965) Let A be a finite set of natural numbers. There exists a sum-free subset $S \subseteq A$ such that $|S| \geq \frac{1}{3}|A| + \frac{1}{3}$.

The natural problem here is to determine how much one can improve on this result. Given the simplicity of the proof, it seems that one should be able to do better. However, the best result to date is $\frac{|A|}{3}+ \frac{2}{3}$ (for $|A|>2$) due to Bourgain (1995), using Fourier analysis. It seems likely that there is a function $f(n) \rightarrow \infty$ such that one may take $|S| \geq \frac{|A|}{3}+ f(|A|)$ in the above theorem, however proving this seems to be quite challenging.

On the other hand, proving a good upper bound was also a longstanding open problem, namely deciding if the constant $\frac{1}{3}$ could be replaced by a larger constant. This problem, in fact, has received a fair amount of attention over the years. In his 1965 paper Erdős states that Hinton proved that the constant is at most $\frac{7}{15} \approx .467$ and that Klarner improved this to $\frac{3}{7}$. In 1990 Alon and Kleitman improved this further to $\frac{12}{29} \approx 0.414$, and Malouf in her thesis (as well as Furedi) improved this further to $\frac{2}{5} = .4$. A couple years ago I improved this to $\frac{11}{28} \approx .393$ (I recently learned that Erdős claimed the same bound without a proof in a handwritten letter from 1992). More recently, Alon improved this to $\frac{11}{28} - 10^{-50000}$ (according to Eberhard, Green and Manners, Alon's paper doesn't appear to be electronically available. UPDATE: Alon's paper can be found here).

All of these results (with the exception of Alon's) proceed simply by constructing a set and proving its largest sum-free subset is at most a certain size (then an elementary argument can be used to construct arbitrarily large sets with the same constant: see this blog post of Eberhard). On the other hand, it is obvious from the statement of the theorem above that one can't have a set whose largest sum-free subset is exactly $\frac{|A|}{3}$, thus a simple proof by example is hopeless.

The futile industry of constructing examples aside, I spent a lot of effort thinking about this problem and never got very far. About a month ago Sean Eberhard, Ben Green and Freddie Manners solved this problem, showing that $\frac{1}{3}$ is indeed the optimal constant! I don't yet fully understand their proof, and will refer the reader to their introduction for a summary of their ideas. However, having learned the problem's difficulty firsthand, I am very pleased to see it solved.

In the wake of this breakthrough on the upper bound problem, I thought I'd take this opportunity to highlight a toy case of the lower bound problem.

Definition: For a natural number $n$ define the sum-free subset number Sum-free$(n)$ to be the largest number such that every set of $n$ natural numbers is guaranteed to contain a sum-free subset of size at least Sum-free$(n)$.

Thus by Bourgain's theorem we have that Sum-free$(n) \geq \frac{n+2}{3}$ (and the Eberhard-Green-Manners theorem states Sum-free$(n)=\frac{n}{3}+o(1)$).

It turns out that Bourgain's result is sharp for sets of size $12$ and smaller. However, Bourgain's theorem only tells us that a set of size $13$ must contain a sum-free subset of size $5$, while it seems likely (from computer calculations) that every set of size $13$ must in fact contain a sum-free subset of size $6$. Indeed, I'll even conjecture:

Conjecture: Sum-free$(13)=6$.

While the asymptotic lower bound problem (showing that one may take $f(|A|) \rightarrow \infty$) seems very hard and is connected with subtle questions in harmonic analysis (such as the now solved Littlewood conjecture on exponential sums), this problem is probably amenable to elementary combinatorics (although, it certainly does not seem easy!). Below is a chart showing what I know about the first $15$ sum-free subset numbers.

│ $n$ │ Bourgain's lower bound on Sum-free$(n)$ │ Best known upper bound on Sum-free$(n)$ │ Upper bound set example │
│ 1 │ 1 │ 1 │ $\{1\}$ │
│ 2 │ 1 │ 1 │ $\{1,2\}$ │
│ 3 │ 2 │ 2 │ $\{1,2,3\}$ │
│ 4 │ 3 │ 3 │ $\{1,2,3,4\}$ │
│ 5 │ 3 │ 3 │ $\{1,2,3,4,5\}$ │
│ 6 │ 3 │ 3 │ $\{1,2,3,4,5,6\}$ │
│ 7 │ 3 │ 3 │ $\{1,2,3,4,5,6,8\}$ │
│ 8 │ 4 │ 4 │ $\{1,2,3,4,5,6,7,9\}$ │
│ 9 │ 4 │ 4 │ $\{1,2,3,4,5,6,7,8,10\}$ │
│ 10 │ 4 │ 4 │ $\{1,2,3,4,5,6,8,9,10,18\}$ │
│ 11 │ 5 │ 5 │ $\{1,2,3,4,6,10,20,70,140,210,420\}$ │
│ 12 │ 5 │ 5 │ $\{1,2,3,4,5,6,7,8,9,10,14,18\}$ │
│ 13 │ 5 │ 6 │ $\{1,2,3,4,6,10,20,35,70,105,140,210,420\}$ │
│ 14 │ 6 │ 6 │ $\{1,2,3,4,5,6,7,10,12,14,30,35,60,70\}$ │
│ 15 │ 6 │ 7 │ $\{1,2,3,4,6,10,20,30,35,60,70,105,140,210,420\}$ │
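The computer calculations referred to above are easy to reproduce for sets of this size. The following brute-force sketch is an editorial addition (not from the post); it is exponential in $|A|$, so it is only practical for small sets like those in the table.

```python
from itertools import combinations

def is_sum_free(S):
    s = set(S)
    # no solutions to x + y = z with x, y, z in S (x = y allowed)
    return all(x + y not in s for x in s for y in s)

def largest_sum_free_subset(A):
    A = list(A)
    for size in range(len(A), 0, -1):
        for S in combinations(A, size):
            if is_sum_free(S):
                return set(S)
    return set()
```

For example, largest_sum_free_subset({1, 2, 3, 4, 5, 6, 8}) returns a 3-element set, in agreement with the $n=7$ row; the "best known upper bound" column comes from exactly this kind of computation on the example sets.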
It turns out that Bourgain’s result is sharp for sets of size $12$ and smaller. However, Bourgain’s theorem only tells us that a set of size $13$ must contain a sum-free subset of size $5$ while it seems likely (from computer calculations) that every set of size $13$ must in fact contain a sum-free subset of size $6$. Indeed, I’ll even Conjecture: Sum-free$(13)=6$. While the asymptotic lower bound problem (showing that there $f(|A|) \rightarrow \infty$) seems very hard and is connected with subtle questions in harmonic analysis (such as the now solved Littewood conjecture on exponential sums), this problem is probably amenable to elementary combinatorics (although, it certainly does not seem easy!). Below is a chart showing what I know about the first $15$ sum-free subset numbers. │$n$│Bourgain’s lower bound│Best known upper │Upper bound set example │ │ │on Sum-free$(n)$ │bound on Sum-free$(n)$│ │ │1 │1 │1 │$\{1\}$ │ │2 │1 │1 │$\{1,2\}$ │ │3 │2 │2 │$\{1,2,3\}$ │ │4 │3 │3 │$\{1,2,3,4\}$ │ │5 │3 │3 │$\{1,2,3,4,5\}$ │ │6 │3 │3 │$\{1,2,3,4,5,6\}$ │ │7 │3 │3 │$\{1,2,3,4,5,6,8\}$ │ │8 │4 │4 │$\{1,2,3,4,5,6,7,9\}$ │ │9 │4 │4 │$\{1,2,3,4,5,6,7,8,10\}$ │ │10 │4 │4 │$\{1,2,3,4,5,6,8,9,10,18\}$ │ │11 │5 │5 │$\{1,2,3,4,6,10,20,70,140,210,420\}$ │ │12 │5 │5 │$\{1,2,3,4,5,6,7,8,9,10,14,18\}$ │ │13 │5 │6 │$\{1,2,3,4,6,10,20,35,70,105,140,210,420\}$ │ │14 │6 │6 │$\{1,2,3,4,5,6,7,10,12,14,30,35,60,70\}$ │ │15 │6 │7 │$\{1,2,3,4,6,10,20,30,35,60,70,105,140,210,420\}$ │ Allison and I just arxiv’ed our paper An Exact Asymptotic for the Square Variation of Partial Sum Processes. Let $\{X_{i}\}$ be a sequence of independent, identically distributed random variables with mean $\mu < \infty$ . The strong law of large numbers asserts that $\sum_{i=1}^{N}X_{i} \sim N\mu$ almost surely. Without loss of generality, one can assume that $X_{i}$ are mean-zero by defining $Y_{i}=X_{i}-\mu$. If we further assume a finite variance, that is $\mathbb{E}\left[|X_{i}|^2 \right] = \sigma^2 < \infty$, the Hartman-Wintner law of the iterated logarithm gives an exact error estimate for the strong law of large numbers. More precisely, $\left|\sum_{i=1}^{N} X_{i} \right|^2\leq (2+o(1))\sigma^2 N \ln\ln (N)$ where the constant $2$ can not be replaced by a smaller constant. That is, the quantity $\sum_{i=1}^{N}X_{i}$ gets as large/small as $\pm \sqrt{ (2-\epsilon) \sigma N \ln\ln (N)}$ infinitely often. The purpose of our current work is to prove a more delicate variational asymptotic that refines the law of the iterated logarithm and captures more subtle information about the oscillations of a sums of i.i.d random variables about its expected value. More precisely, Theorem Let $\{X_{i}\}$ be a sequence of independent, identically distributed mean zero random variables with variance $\sigma$ and satisfying $\mathbb{E}\left[|X_{i}|^{2+\delta}\right] < \infty$. If we let $\mathcal{P}_{N}$ denote the set of all possible partitions of the interval $[N]$ into subintervals, then we have almost surely: $\max_{\pi \in \mathcal{P}_{N}} \sum_{I \in \pi } | \sum_{i\in I} X_{i}|^2 \sim 2 \sigma^2N \ln \ln(N)$. Choosing the partition $\pi$, to contain a single interval $J=[1,N]$ immediately recovers the upper bound in the law of the iterated logarithm. This result also strengthens earlier work of J. Qian. An interesting problem left by this work is deciding if the moment condition $\mathbb{E}\left[|X_{i}|^{2+\delta}\right] < \infty$ can be removed. 
Without an auxiliary moment condition we are able to establish the following weaker 'in probability' result.

Theorem. Let $\{X_i\}$ be a sequence of independent, identically distributed mean zero random variables with finite variance $\sigma^2$. We then have that

$\frac{\max_{\pi \in \mathcal{P}_{N}} \sum_{I \in \pi } | \sum_{i\in I} X_{i}|^2}{2 \sigma^2 N \ln \ln(N)} \xrightarrow{p} 1$

Leakage resilient cryptography is an exciting area of cryptography that aims to build cryptosystems that provide security against side channel attacks. In this post I will give a nontechnical description of a common leakage resilient security model, as well as describe a recent paper in the area with Allison Lewko and Brent Waters, titled "How to Leak on Key Updates".

Review of Public Key Encryption

Let us (informally) recall the definition of a public key cryptography system. Alice would like to send Bob a private message $M$ over an unsecured channel. Alice and Bob have never met before and we assume they do not share any secret information. Ideally, we would like a procedure where 1) Alice and Bob engage in a series of communications resulting in Bob learning the message $M$, and 2) an eavesdropper, Eve, who intercepts all of the communications sent between Alice and Bob, should not learn any (nontrivial) information about the message $M$.

As stated, the problem is information theoretically impossible. However, this problem is classically solved under the heading of public key cryptography if we further assume that: 1) Eve has limited computational resources, 2) certain computational problems (such as factoring large integers or computing discrete logarithms in a finite group) are not efficiently solvable, and 3) we allow Alice and Bob to use randomization (and permit security to fail with very small probability).

More specifically, a public key protocol works as follows: Bob generates a private and public key, say $SK$ and $PK$ respectively. As indicated by the names, $PK$ is publicly known but Bob retains $SK$ as secret information. When Alice wishes to send a message $M$ to Bob she generates an encrypted ciphertext $C$ using the message $M$, Bob's public key $PK$ and some randomness. She then sends this ciphertext to Bob via the public channel. When Bob receives the ciphertext he decrypts it using his secret key $SK$ and recovers $M$. While Eve has access to the ciphertext $C$ and the public key $PK$, she is unable to learn any nontrivial information about the message $M$ (assuming our assumptions are sound). In fact, we require a bit more: even if this is repeated many times (with fixed keys), Eve's ability to decrypt the ciphertext does not meaningfully improve.

Leakage Resilient Cryptography and our work

In practice, however, Eve may be able to learn information in addition to what she intercepts over Alice and Bob's public communications via side channel attacks. Such attacks might include measuring the amount of time or energy Bob uses to carry out computations. The field of leakage resilient cryptography aims to incorporate protection against such attacks into the security model. In this model, in addition to the ciphertext and public key, we let Eve select an (efficiently computable) function $F:\{0,1\}^{\ell}\rightarrow\{0,1\}^{\mu \ell}$ where $\ell$ is the bit length of $SK$ and $0<\mu<1$ is a constant. We now assume, in addition to $C$ and $PK$, Eve also gets to see $F(SK)$. In other words, Eve gains a fair amount of information about the secret key, but not enough to fully determine it.
Moreover, we allow Eve to specify a different function $F$ every time Alice sends Bob a message. There is an obvious problem now, however. If the secret key $SK$ remained static, then Eve could start by choosing $F$ to output the first $\mu \ell$ bits, the second time she could choose $F$ to give the next $\mu \ell$ bits, and if she carries on like this, after $1/\mu$ messages she would have recovered the entire secret key. To compensate for this we allow Bob to update his secret key between messages. The public key will remain the same.

There has been a lot of interesting work on this problem. In the works of Brakerski, Kalai, Katz, and Vaikuntanathan and Dodis, Haralambiev, Lopez-Alt, and Wichs many schemes are presented that are provably secure against continual leakage. In these schemes, however, information about the secret key is permitted to be leaked between updates, but only a tiny amount is allowed to be leaked during the update process itself. In our current work, we offer the first scheme that allows a constant fraction of the information used in the update to be leaked. The proof is based on subgroup decision assumptions in composite order bilinear groups.

I recently came across the following passage regarding the mathematical profession from Adam Smith's influential work The Theory of Moral Sentiments that I thought others might find interesting:

Mathematicians, on the contrary, who may have the most perfect assurance, both of the truth and of the importance of their discoveries, are frequently very indifferent about the reception which they may meet with from the public. The two greatest mathematicians that I ever have had the honour to be known to, and, I believe, the two greatest that have lived in my time, Dr Robert Simpson of Glasgow, and Dr Matthew Stewart of Edinburgh, never seemed to feel even the slightest uneasiness from the neglect with which the ignorance of the public received some of their most valuable works. The great work of Sir Isaac Newton, his Mathematical Principles of Natural Philosophy, I have been told, was for several years neglected by the public. The tranquillity of that great man, it is probable, never suffered, upon that account, the interruption of a single quarter of an hour. Natural philosophers, in their independency upon the public opinion, approach nearly to mathematicians, and, in their judgments concerning the merit of their own discoveries and observations, enjoy some degree of the same security and tranquillity.

The morals of those different classes of men of letters are, perhaps, sometimes somewhat affected by this very great difference in their situation with regard to the public. Mathematicians and natural philosophers, from their independency upon the public opinion, have little temptation to form themselves into factions and cabals, either for the support of their own reputation, or for the depression of that of their rivals. They are almost always men of the most amiable simplicity of manners, who live in good harmony with one another, are the friends of one another's reputation, enter into no intrigue in order to secure the public applause, but are pleased when their works are approved of, without being either much vexed or very angry when they are neglected.
In this note we obtain some endpoint restriction estimates for the paraboloid over finite Let $S$ denote a hypersurface in $\mathbb{R}^{n}$ with surface measure $d\sigma$. The restriction problem for $S$ is to determine for which pairs of $(p,q)$ does there exist an inequality of the form $\displaystyle ||\hat{f}||_{L^{p'}(S,d\sigma)} \leq C ||f||_{L^{q'}(\mathbb{R}^n)}.$ We note that the left-hand side is not necessarily well-defined since we have restricted the function $\hat{f}$ to the hypersurface $S$, a set of measure zero in $\mathbb{R}^{n}$. However, if we can establish this inequality for all Schwartz functions $f$, then the operator that restricts $\hat{f}$ to $S$ (denoted by $\hat{f}|_{S}$), can be defined whenever $f \in L^{q}$. In the Euclidean setting, the restriction problem has been extensively studied when $S$ is a sphere, paraboloid, and cone. In particular, it has been observed that restriction estimates are intimately connected to questions about certain partial differential equations as well as problems in geometric measure theory such as the Kakeya conjecture. The restriction conjecture states sufficient conditions on $(p,q) $ for the above inequality to hold. In the case of the sphere and paraboloid, the question is open in dimensions three and higher. In 2002 Mockenhaupt and Tao initiated the study of the restriction phenomena in the finite field setting. Let us introduce some notation to formally define the problem in this setting. We let $F$ denote a finite field of characteristic $p >2$. We let $S^{1}$ denote the unit circle in $\mathbb{C}$ and define $e: F \rightarrow S^1$ to be a non-principal character of $F$. For example, when $F = \mathbb{Z}/p \mathbb{Z}$, we can set $e(x) := e^{2\pi i x/p}$. We will be considering the vector space $F^n$ and its dual space $F_*^n$. We can think of $F^n$ as endowed with the counting measure $dx$ which assigns mass 1 to each point and $F_*^n$ as endowed with the normalized counting measure $d\xi$ which assigns mass $|F|^{-n}$ to each point (where $|F|$ denotes the size of $F$, so the total mass is equal to 1 here). For a complex-valued function $f$ on $F^n$, we define its Fourier transform $\hat{f}$ on $F_*^n$ by: $\displaystyle \hat{f}(\xi) := \sum_{x \in F^n} f(x) e(-x \cdot \xi).$ For a complex-valued function $g$ on $F_*^n$, we define its inverse Fourier transform $g^{\vee}$ on $F^n$ by: $\displaystyle g^{\vee}(x) := \frac{1}{|F|^n} \sum_{\xi \in F_*^n} g(\xi) e(x\cdot \xi).$ It is easy to verify that $(\hat{f})^\vee = f$ and $\widehat{(g^{\vee})} = g$. We define the paraboloid $\mathcal{P} \subset F_*^n$ as: $\mathcal{P} := \{(\gamma, \gamma \cdot \gamma): \gamma \in F_*^{n-1}\}$. This is endowed with the normalized “surface measure” $d\sigma$ which assigns mass $|\mathcal{P}|^{-1}$ to each point in $\mathcal{P}$. We note that $|\mathcal{P}| = |F|^{n-1}$. 
For a function $f: \mathcal{P} \rightarrow \mathbb{C}$, we define the function $(f d\sigma)^\vee: F^n \rightarrow \mathbb{C}$ as follows:

$\displaystyle (f d\sigma)^\vee (x) := \frac{1}{|\mathcal{P}|} \sum_{\xi \in \mathcal{P}} f(\xi) e(x \cdot \xi).$

For a complex-valued function $f$ on $F^n$ and $q \in [1, \infty)$, we define

$\displaystyle ||f||_{L^q(F^n, dx)} := \left( \sum_{x \in F^n} |f(x)|^q \right)^{\frac{1}{q}}.$

For a complex-valued function $f$ on $\mathcal{P}$, we similarly define

$\displaystyle ||f||_{L^q(\mathcal{P},d\sigma)} := \left( \frac{1}{|\mathcal{P}|} \sum_{\xi \in \mathcal{P}} |f(\xi)|^q \right)^{\frac{1}{q}}.$

Now we define a restriction inequality to be an inequality of the form

$\displaystyle ||\hat{f}||_{L^{p'}(S,d\sigma)} \leq \mathcal{R}(p\rightarrow q) ||f||_{L^{q'}(\mathbb{R}^n)},$

where $\mathcal{R}(p\rightarrow q)$ denotes the best constant such that the above inequality holds. By duality, this is equivalent to the following extension estimate:

$||(f d\sigma)^\vee||_{L^q(F^n, dx)} \leq \mathcal{R}(p\rightarrow q) ||f||_{L^p(\mathcal{P},d\sigma)}.$

We will use the notation $X \ll Y$ to denote that quantity $X$ is at most a constant times quantity $Y$, where this constant may depend on the dimension $n$ but not on the field size, $|F|$. For a finite field $F$, the constant $\mathcal{R}(p\rightarrow q)$ will always be finite. The restriction problem in this setting is to determine for which $(p,q)$ can we upper bound $\mathcal{R}(p\rightarrow q)$ independently of $|F|$ (i.e. for which $(p,q)$ does $\mathcal{R}(p \rightarrow q) \ll 1$ hold).

Mockenhaupt and Tao solved this problem for the paraboloid in two dimensions. In three dimensions, we require $-1$ not be a square in $F$ (without this restriction the paraboloid will contain non-trivial subspaces which lead to trivial counterexamples, but we will not elaborate on this here). For such $F$, they showed that $\mathcal{R}(8/5+\epsilon \rightarrow 4) \ll 1$ and $\mathcal{R}(2 \rightarrow \frac{18}{5}+\epsilon) \ll 1$ for every $\epsilon>0$. When $\epsilon=0$, their bounds were polylogarithmic in $|F|$. Mockenhaupt and Tao's argument for the $\mathcal{R}(8/5 \rightarrow 4)$ estimate proceeded by first establishing the estimate for characteristic functions. Here one can expand the $L^4$ norm and reduce the problem to combinatorial estimates. A well-known dyadic pigeonhole argument then allows one to pass back to general functions at the expense of a logarithmic power of $|F|$. Following a similar approach (but requiring much more delicate Gauss sum estimates), Iosevich and Koh proved that $\mathcal{R}(\frac{4n}{3n-2}+ \epsilon \rightarrow 4) \ll 1$ and $\mathcal{R}(2 \rightarrow \frac{2n^2}{n^2-2n+2} + \epsilon) \ll 1$ in higher dimensions (in odd dimensions some additional restrictions on $F$ are required). Again, however, this argument incurred a logarithmic loss at the endpoints from the dyadic pigeonhole argument.

In this note we remove the logarithmic losses mentioned above. Our argument begins by rewriting the $L^4$ norm as $||(fd\sigma)^{\vee}||_{L^4}=||(fd\sigma)^{\vee}(fd\sigma)^{\vee}||_{L^2}^{1/2}$. We then adapt the arguments of the prior papers to the bilinear variant $||(fd\sigma)^{\vee}(gd\sigma)^{\vee}||_{L^2}^{1/2}$ in the case that $f$ and $g$ are characteristic functions.
To obtain estimates for arbitrary functions $f$, we can assume that $f$ is non-negative real-valued and decompose $f$ as a linear combination of characteristic functions, where the coefficients are negative powers of two (we can do this without loss of generality by adjusting only the constant of our bound). We can then employ the triangle inequality to upper bound $||(fd\sigma)^{\vee}||_{L^4}$ by a double sum of terms like $||(\chi_j d\sigma)^{\vee}(\chi_k d\sigma)^{\vee}||_{L^2}^{1/2}$, where $\chi_j$ and $\chi_k$ are characteristic functions, weighted by negative powers of two. We then apply our bilinear estimate for characteristic functions to these inner terms and use standard bounds on sums to obtain the final estimates. Our method yields the following theorems:

Theorem. For the paraboloid in $3$ dimensions with $-1$ not a square, we have $\mathcal{R}(8/5 \rightarrow 4) \ll 1$ and $\mathcal{R}(2 \rightarrow \frac{18}{5}) \ll 1$.

Theorem. For the paraboloid in $n$ dimensions when $n \geq 4$ is even or when $n$ is odd and $|F| = q^m$ for a prime $q$ congruent to 3 modulo 4 such that $m(n-1)$ is not a multiple of 4, we have $\mathcal{R}(\frac{4n}{3n-2} \rightarrow 4) \ll 1$ and $\mathcal{R}(2 \rightarrow \frac{2n^2}{n^2-2n+2}) \ll 1$.

We recently learned that in unpublished work Bennett, Carbery, Garrigos, and Wright have also obtained the results in the $3$-dimensional case. Their argument proceeds rather differently than ours and it is unclear (at least to me) if their argument can be extended to the higher dimensional settings.
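The finite field setting also lends itself to direct numerical experiments. The following toy illustration is an editorial addition (not from the paper): it computes the extension operator for the parabola over $F = \mathbb{Z}/p\mathbb{Z}$ in two dimensions and prints the $L^4$-to-$L^2$ ratio that the restriction problem asks to bound; the choices $p = 17$ and $f = 1$ (the characteristic function of the parabola) are arbitrary.

```python
import numpy as np

p = 17                                      # a small prime; F = Z/pZ
parab = [(g, (g * g) % p) for g in range(p)]  # the parabola {(gamma, gamma^2)} in F_*^2

def extension(f):
    # (f dsigma)^vee(x) = (1/|P|) * sum_{xi in P} f(xi) e(x . xi),  e(t) = exp(2 pi i t / p)
    out = np.zeros((p, p), dtype=complex)
    for (a, b), val in zip(parab, f):
        for x1 in range(p):
            for x2 in range(p):
                out[x1, x2] += val * np.exp(2j * np.pi * ((x1 * a + x2 * b) % p) / p)
    return out / len(parab)

f = np.ones(p)                                          # characteristic function of the parabola
E = extension(f)
L4 = (np.abs(E) ** 4).sum() ** 0.25                     # counting measure on F^2
L2 = ((np.abs(f) ** 2).sum() / len(parab)) ** 0.5       # normalized measure on P
print(L4 / L2)   # stays bounded as p grows for this particular f
```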
It appears that Meyers’ construction doesn’t, however, say anything about the anti-Freiman problem. Indeed Meyers’ set (and all of its subsets) contains a $B_{2}[2]$ set of density $1/4$. Hence, the construction of a $\Lambda(4)$ set that doesn’t contain a large $B_{2}[2]$ set still appears to be new. A revised version of the paper has been posted reflecting this information. Most notably, we have changed the title to “On the Structure of Sets of Large Doubling”. Allison Lewko and I recently arXiv’ed our paper “Sets of Large Doubling and a Question of Rudin“. The paper (1) answers a question of Rudin regarding the structure of ${\Lambda(4)}$ sets (2) negatively answers a question of O’Bryant about the existence of a certain “anti-Freiman” theorem (3) establishes a variant of the (solved) Erdös-Newman conjecture. I’ll briefly describe each of these results below. — Structure of ${\Lambda(4)}$ sets — Before describing the problem we will need some notation. Let ${S \subset {\mathbb Z}^d}$ and define ${R_{h}(n)}$ to be the number of unordered solutions to the equation ${x_{1}+\ldots + x_{h}=n}$ with ${x_{1},\ldots,x_{h} \in S}$. We say that ${S}$ is a ${B_{h}[G]}$ set if ${R_{h}(n) \leq G}$ for all ${n \in Z^d}$. There is a similar concept with sums replaced by differences. Since this concept is harder to describe we will only introduce it in the case ${h=2}$. For ${S \subset Z^{d}}$ we define ${R_{2}^{\circ}(n)}$ to be the number of solutions to the equation ${x_{1}-x_{2} = n}$ with ${x_{1},x_{2}\in S}$. If ${R_{2}^{\circ}(n)\leq G}$ for all nonzero ${n}$ we say that ${S}$ is a ${B_{2}^{\circ}[G]}$ set. Let ${S}$ be a subset of the integers ${{\mathbb Z}^{d}}$, and call ${f : \mathbb{T}^{d} \rightarrow {\mathbb C}}$ an ${S}$-polynomial if it is a trigonometric polynomial whose Fourier coefficients are supported on ${S}$ (i.e. ${\hat{f}(n) = 0}$ if ${n \in {\mathbb Z^{d}} \setminus S}$). We say that ${S}$ is a ${\Lambda(p)}$ set (for ${p>2}$) if $\displaystyle ||f||_{L^p} \leq K_{p}(S) ||f||_{L^{2}} \ \ \ \ \ (1)$ holds for all ${S}$-polynomials where the constant ${K_{p}(S)}$ only depends on ${S}$ and ${p}$. If ${p}$ is an even integer, we can expand out the ${L^{p}}$ norm in 1. This quickly leads to the following observation: If ${S}$ is a ${B_{h}[G]}$ set then ${S}$ is also an ${\Lambda(2h)}$ set (${h>1}$, ${h \in Z}$). One can also easily show using the triangle inequality that the union of two $ {\Lambda(p)}$ sets is also a ${\Lambda(p)}$ set. It follows that the finite union of ${B_{h}[G]}$ sets is a ${\Lambda(2h)}$ set. In 1960 Rudin asked the following natural question: Is every ${\Lambda (2h)}$ set is a finite union of ${B_{h}[G]}$ sets? In this paper we show that the answer is no in the case of ${\Lambda(4)}$ sets. In fact, we show a bit more than this. One can easily show that a ${B_{2}^{\circ}[G]}$ set is also a ${\Lambda(4)}$ set. Our first counterexample to Rudin’s question proceeded (essentially) by constructing a ${B_{2}^{\circ}[2]}$ set which wasn’t the finite union of ${B_{2}[G]}$ sets. This however raised the following variant of Rudin’s question: Is every ${\Lambda(4)}$ set the mixed finite union of ${B_{2}[G]}$ and ${B_{2}^{\circ}[G]}$ sets? We show that the answer to this question is no as well. To do this we construct a ${B_{2}[G]}$ set, A, which isn’t a finite union of ${B_{2}^{\circ}[G]}$ sets, and a ${B_{2}^{\circ}[G]}$ set, ${B}$, which isn’t the finite union of ${B_{2}[G]}$ sets. 
We then consider the product set ${S= A \times B \subset Z^{2}}$ which one can prove is a ${\Lambda(4)}$ subset of ${Z^{2}}$. It isn’t hard to deduce from this that ${S}$ is a ${\Lambda(4)}$ subset of ${Z^2} $ that isn’t a mixed finite union of ${B_{2}[G]}$ and ${B_{2}^{\circ}[G]}$ sets. Moreover, one can (essentially) map this example back to ${Z}$ while preserving all of the properties stated above. Generalizing this further, we show that there exists a ${\Lambda(4)}$ set that doesn’t contain (in a sense that can be made precise) a large ${B_{2}[G]}$ or ${B_{2}^{\circ}[G]}$. This should be compared with a related theorem of Pisier which states that every Sidon set contains a large independent set (it is conjectured that a Sidon set is a finite union of independent sets, however this is We have been unable to extend these results to ${\Lambda(2h)}$ sets for ${h>2}$. Very generally, part of the issue arises from the fact that the current constructions hinges on the existence of arbitrary large binary codes which can correct strictly more than a ${1/2}$ fraction of errors. To modify this construction (at least in a direct manner) to address the problem for, say, ${\Lambda (6)}$ sets it appears one would need arbitrary large binary codes that can correct strictly more than a ${2/3}$ fraction of errors. However, one can show that such objects do not exist. — Is there an anti-Freiman theorem? — Let ${A}$ be a finite set of integers and denote the sumset of ${A}$ as ${A+A = \{a+b : a,b \in A\}}$. A trivial inequality is the following $\displaystyle 2|A|-1 \leq |A+A| \leq {|A| \choose 2}.$ In fact, it isn’t hard to show that equality only occurs on the left if ${A}$ is an arithmetic progression and only occurs on the right if ${A}$ is a ${B_{2}[1]}$ set. A celebrated theorem of Freiman states that if ${|A+A| \approx |A|}$ then ${A}$ is approximately an arithmetic progression. More precisely, if ${A}$ is a finite set ${A \subseteq {\mathbb Z}}$ satisfying ${|A+A| \leq \delta |A|}$ for some constant ${\delta}$, then ${A}$ is contained in a generalized arithmetic progression of dimension ${d}$ and size ${c |A|}$ where ${c}$ and ${d}$ depend only on ${\delta}$ and not on ${|A|}$. It is natural to ask about the opposite extreme: if ${|A+A| \geq \delta |A|^2}$, what can one say about the structure of ${A}$ as a function only of ${\delta}$? A first attempt might be to guess that if ${|A+A|\geq \delta |A|^2}$ for some positive constant ${\delta}$, then ${A}$ can be decomposed into a union of ${k}$${B_2[G]}$ sets where ${k}$ and ${G}$ depend only on ${\delta}$. This is easily shown to be false. For example, one can start with a ${B_2[1]}$ of ${n}$ elements contained in the interval ${[n+1,\infty)}$ and take its union with the arithmetic progression ${[1,n]}$. It is easy to see that ${|A+A| \geq \frac{1}{10} |A|^2}$ regardless of ${n}$. However, the interval ${[1,n]}$ cannot be decomposed as the union of ${k}$${B_2[G]}$ sets with ${k}$ and ${G}$ independent of ${n}$. There are two ways one might try to fix this problem: first, we might ask only that ${A}$ contains a ${B_2[G]}$ set of size ${\delta' |A|}$, where ${\delta'}$ and ${G}$ depend only on ${\delta}$. (This formulation was posed as an open problem by O’Bryant here). Second, we might ask that ${|A'+A'|\geq \delta |A'|^2}$ hold for all subsets ${A' \subseteq A}$ for the same value of ${\delta}$. Either of these changes would rule out the trivial counterexample given above. 
In this paper we show that even applying both of these modifications simultaneously is not enough to make the statement true. We provide a sequence of sets ${A \subseteq {\mathbb Z}}$ where ${|A'+A'|\geq \delta |A'|^2}$ holds for all of their subsets for the same value of ${\delta}$, but if we try to locate a ${B_2 [G]}$ set, ${B}$, of density ${1/k}$ in ${A}$ then ${k}$ must tend to infinity with the size of ${A}$. As above, our initial construction of such a sequence of ${A}$'s turned out to be ${B^\circ_2 [2]}$ sets. This leads us to the even weaker anti-Freiman conjecture:

(Weak Anti-Freiman) Suppose that ${A \subseteq {\mathbb Z}}$ satisfies ${|A'+A'|\geq \delta |A'|^2}$ and ${|A'-A'|\geq \delta |A'|^2}$ for all subsets ${A' \subseteq A}$. Then ${A}$ contains either a ${B_2[G]}$ set or a ${B^\circ_2[G]}$ set of size ${\geq \delta' |A|}$, where ${G}$ and ${\delta'}$ depend only on ${\delta}$.

We conclude by showing that even this weaker conjecture fails. The constructions are the same as those used in the ${\Lambda(4)}$ results above. The two problems are connected by the elementary observation that if ${A'}$ is a subset of a ${\Lambda(4)}$ set ${A}$ then ${|A'+A'|\geq \delta |A'|^2}$ holds, where ${\delta}$ only depends on the ${\Lambda(4)}$ constant ${K_{4}(A)}$ of the set ${A}$.

— A variant of the Erdös-Newman conjecture —

In the early 1980's Erdös and Newman independently made the following conjecture: For every ${G}$ there exists a ${B_{2}[G]}$ set that isn't a finite union of ${B_{2}[G']}$ sets for any ${G'\leq G-1}$. This conjecture was later confirmed by Erdös for certain values of ${G}$ using Ramsey theory, and finally resolved completely by Nešetřil and Rödl using Ramsey graphs. One further application of our technique is the following theorem, which can be viewed as an analog of the Erdös-Newman problem with the roles of the union size and ${G}$ reversed.

Theorem 1 For every ${k >1}$ there exists a union of ${k}$ ${B_{2}[1]}$ sets that isn't a union of ${k'\leq k-1}$ ${B_{2}[G]}$ sets for any ${G}$.

A key component in the work of Green, Tao, and Ziegler on arithmetic and polynomial progressions in the primes is the dense model theorem. Roughly speaking, this theorem allows one to model a dense subset of a sparse pseudorandom set by a dense subset of the ambient space. In the work of Green, Tao, and Ziegler this enabled them to model (essentially) the characteristic function of the set of primes with (essentially) the characteristic function of a set of integers with greater density. They then were able to obtain the existence of certain structures in the model set via Szemerédi's theorem and its generalizations. More recently, simplified proofs of the dense model theorem have been obtained independently by Gowers and by Reingold, Trevisan, Tulsiani and Vadhan. In addition, the latter group has found applications of these ideas in theoretical computer science. In this post we give an expository proof of the dense model theorem, substantially following the paper of Reingold, Trevisan, Tulsiani and Vadhan. With the exception of the min-max theorem from game theory (which can be replaced by (or proved by) the Hahn-Banach theorem, as in Gowers' approach) the presentation is self-contained. (We note that the theorem, as presented below, isn't explicitly stated in the Green-Tao paper. Roughly speaking, these ideas can be used to simplify/replace sections 7 and 8 of that paper.)
Two weeks ago, not far from the UT math department, I found (or rather I was found by) a very friendly stray dog (pictured below). Since it was raining and the nearby streets were busy, I fed the dog and then brought it to Austin's Town Lake animal shelter. The next day I called the shelter to learn that if the dog wasn't adopted within three days it would likely be euthanized. With the help of several other members of the UT mathematical community, dozens of emails, phone calls, and Internet postings were made in an effort to find the pup a home. (In fact, the mathematical blogosphere was represented in these efforts.)
Chernoff bounds give bounds for Poisson trials. Consider an event which happens with probability p. Let X be the number of times the event happens in n trials. For each of these trials, the event happens with probability p, so the expected value of X is np, and X follows a binomial distribution. However, this does not by itself say how tightly X concentrates around that mean. Chernoff bounds provide that information.

Chernoff bounds say:

P(X ≥ (1+x)np) < e^(-x^2 np/3)
P(X ≤ (1-x)np) < e^(-x^2 np/2)

Conceptually, these bounds say that the number of events that actually occur is close to the expected number.

Consider the case of the unemployment survey. You pick 1000 people and each says they are unemployed with probability p (presume p = 6%). What is the probability that you report unemployment of 6.6% or more based on the results of your survey? Here 6.6% corresponds to (1+x)np with x = 0.1, so

P(X ≥ (1+0.1)np) < e^(-0.1^2 · 1000 · 0.06/3) = e^(-0.2) ≈ 81.9%.

If you want accurate results, you'll need to ask more people (*). If you ask 10000 people instead, the Chernoff bound goes down to e^(-2) ≈ 13.5%.

These bounds are used quite often in the complexity analysis of randomized algorithms. The (relative) simplicity of these formulae allows the design of algorithms by careful selection of x. These bounds are also useful for bounding error distributions on experiments and surveys (as in the example above). Chernoff bounds in general involve probability tail bounds.

Herman Chernoff introduced the formulae above in his 1952 paper "A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations". The bounds are derived from the moment generating function. The latter inequality can be generalized to deal with non-uniform Poisson trials by replacing np with u, the expected value of X. I do not know a simple form of the similar generalization of the first inequality.

(*) Strictly speaking, just because the bound gives a high upper bound does not mean the value is large. However, if you ask enough people, these bounds will prove that the probability of error is small. Consult confidence interval for more information on survey error.
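For readers who want to see the bound next to the actual binomial tail, here is a minimal Python sketch. The survey numbers (p = 6%, x = 0.1, n = 1000 or 10000) and the e^(-x^2 np/3) form of the bound come from the discussion above; the function names and the number of simulated surveys are my own illustrative choices.

import math, random

def chernoff_upper(n, p, x):
    """Upper-tail Chernoff bound: P(X >= (1+x)np) < exp(-x^2 * n * p / 3)."""
    return math.exp(-x * x * n * p / 3)

def empirical_upper(n, p, x, trials=2000):
    """Estimate P(X >= (1+x)np) by simulating the survey 'trials' times."""
    threshold = (1 + x) * n * p
    hits = 0
    for _ in range(trials):
        X = sum(random.random() < p for _ in range(n))   # one simulated survey
        if X >= threshold:
            hits += 1
    return hits / trials

for n in (1000, 10000):
    print(n, chernoff_upper(n, 0.06, 0.1), empirical_upper(n, 0.06, 0.1))

The empirical tail probability should come out noticeably smaller than the bound, which is expected: Chernoff bounds are upper bounds, as the footnote (*) above points out.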
Summary: Seminar in Algebra and Number Theory: Reflection Groups and Hecke Algebras, Fall 2005, P. Achar. Problem Set 3a, due October 18, 2005.

In the study of groups and representations, it is often useful to have a method for taking a representation of a subgroup and producing from it a representation of the larger group. Here we will develop such a method for reflection groups, known as "truncated induction" or "MacDonald-Lusztig-Spaltenstein induction." Let W be a reflection group acting on V, and let W' be a subgroup of W that is also generated by reflections. Assume that V is equipped with a W- (and hence W'-) invariant inner product ⟨ , ⟩. Let V^{W'} = {v ∈ V | wv = v for all w ∈ W'}, and let V' = (V^{W'})^⊥. So W' acts as an essential reflection group on V'. (Note: W' is not necessarily a parabolic subgroup -- it may or may not be equal to the full stabilizer of V^{W'}.) Let S = Sym(V^*), S' = Sym((V')^*), and S'' = Sym((V^{W'})^*); S_k, (S')_k, and (S'')_k denote the subspaces of homogeneous degree-k polynomials in each of the preceding.
Northern Nevada Girls Math and Technology Program Mathematics Standards and Technology Goals Selected Common Core State Standards: Mathematics Grade 7 Geometry Draw, construct, and describe geometrical figures and describe the relationships between them. • 7.G.A.1 Solve problems involving scale drawings of geometric figures, including computing actual lengths and areas from a scale drawing and reproducing a scale drawing at a different scale. • 7.G.A.2 Draw (freehand, with ruler and protractor, and with technology) geometric shapes with given conditions. Focus on constructing triangles from three measures of angles or sides, noticing when the conditions determine a unique triangle, more than one triangle, or no triangle. • 7.G.A.3 Describe the two-dimensional figures that result from slicing three-dimensional figures, as in plane sections of right rectangular prisms and right rectangular pyramids. Solve real-life and mathematical problems involving angle measure, area, surface area, and volume. • 7.G.B.4 Know the formulas for the area and circumference of a circle and use them to solve problems; give an informal derivation of the relationship between the circumference and area of a circle. • 7.G.B.5 Use facts about supplementary, complementary, vertical, and adjacent angles in a multi-step problem to write and solve simple equations for an unknown angle in a figure. • 7.G.B.6 Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms. Grade 7 Statistics & Probability Use random sampling to draw inferences about a population. • 7.SP.A.1 Understand that statistics can be used to gain information about a population by examining a sample of the population; generalizations about a population from a sample are valid only if the sample is representative of that population. Understand that random sampling tends to produce representative samples and support valid inferences. • 7.SP.A.2 Use data from a random sample to draw inferences about a population with an unknown characteristic of interest. Generate multiple samples (or simulated samples) of the same size to gauge the variation in estimates or predictions. For example, estimate the mean word length in a book by randomly sampling words from the book; predict the winner of a school election based on randomly sampled survey data. Gauge how far off the estimate or prediction might be. Draw informal comparative inferences about two populations. • 7.SP.B.3 Informally assess the degree of visual overlap of two numerical data distributions with similar variabilities, measuring the difference between the centers by expressing it as a multiple of a measure of variability. For example, the mean height of players on the basketball team is 10 cm greater than the mean height of players on the soccer team, about twice the variability (mean absolute deviation) on either team; on a dot plot, the separation between the two distributions of heights is noticeable. • 7.SP.B.4 Use measures of center and measures of variability for numerical data from random samples to draw informal comparative inferences about two populations. For example, decide whether the words in a chapter of a seventh-grade science book are generally longer than the words in a chapter of a fourth-grade science book. Investigate chance processes and develop, use, and evaluate probability models.
• 7.SP.C.5 Understand that the probability of a chance event is a number between 0 and 1 that expresses the likelihood of the event occurring. Larger numbers indicate greater likelihood. A probability near 0 indicates an unlikely event, a probability around 1/2 indicates an event that is neither unlikely nor likely, and a probability near 1 indicates a likely event. • 7.SP.C.6 Approximate the probability of a chance event by collecting data on the chance process that produces it and observing its long-run relative frequency, and predict the approximate relative frequency given the probability. For example, when rolling a number cube 600 times, predict that a 3 or 6 would be rolled roughly 200 times, but probably not exactly 200 times. • 7.SP.C.7 Develop a probability model and use it to find probabilities of events. Compare probabilities from a model to observed frequencies; if the agreement is not good, explain possible sources of the discrepancy. • 7.SP.C.7a Develop a uniform probability model by assigning equal probability to all outcomes, and use the model to determine probabilities of events. For example, if a student is selected at random from a class, find the probability that Jane will be selected and the probability that a girl will be selected. • 7.SP.C.7b Develop a probability model (which may not be uniform) by observing frequencies in data generated from a chance process. For example, find the approximate probability that a spinning penny will land heads up or that a tossed paper cup will land open-end down. Do the outcomes for the spinning penny appear to be equally likely based on the observed frequencies? • 7.SP.C.8 Find probabilities of compound events using organized lists, tables, tree diagrams, and simulation. • 7.SP.C.8a Understand that, just as with simple events, the probability of a compound event is the fraction of outcomes in the sample space for which the compound event occurs. • 7.SP.C.8b Represent sample spaces for compound events using methods such as organized lists, tables and tree diagrams. For an event described in everyday language (e.g., “rolling double sixes”), identify the outcomes in the sample space which compose the event. • 7.SP.C.8c Design and use a simulation to generate frequencies for compound events. For example, use random digits as a simulation tool to approximate the answer to the question: If 40% of donors have type A blood, what is the probability that it will take at least 4 donors to find one with type A blood? Grade 8 Algebra (Expressions & Equations; Functions) Work with radicals and integer exponents. • 8.EE.A.1 Know and apply the properties of integer exponents to generate equivalent numerical expressions. For example, 3^2 × 3^–5 = 3^–3 = 1/3^3 = 1/27. • 8.EE.A.2 Use square root and cube root symbols to represent solutions to equations of the form x^2 = p and x^3 = p, where p is a positive rational number. Evaluate square roots of small perfect squares and cube roots of small perfect cubes. Know that √2 is irrational. • 8.EE.A.3 Use numbers expressed in the form of a single digit times an integer power of 10 to estimate very large or very small quantities, and to express how many times as much one is than the other. 
For example, estimate the population of the United States as 3 times 10^8 and the population of the world as 7 times 10^9, and determine that the world population is more than 20 times larger. • 8.EE.A.4 Perform operations with numbers expressed in scientific notation, including problems where both decimal and scientific notation are used. Use scientific notation and choose units of appropriate size for measurements of very large or very small quantities (e.g., use millimeters per year for seafloor spreading). Interpret scientific notation that has been generated by technology. Understand the connections between proportional relationships, lines, and linear equations. Analyze and solve linear equations and pairs of simultaneous linear equations. • 8.EE.C.7 Solve linear equations in one variable. • 8.EE.C.7a Give examples of linear equations in one variable with one solution, infinitely many solutions, or no solutions. Show which of these possibilities is the case by successively transforming the given equation into simpler forms, until an equivalent equation of the form x = a, a = a, or a = b results (where a and b are different numbers). • 8.EE.C.7b Solve linear equations with rational number coefficients, including equations whose solutions require expanding expressions using the distributive property and collecting like terms. • 8.EE.C.8 Analyze and solve pairs of simultaneous linear equations. • 8.EE.C.8a Understand that solutions to a system of two linear equations in two variables correspond to points of intersection of their graphs, because points of intersection satisfy both equations simultaneously. • 8.EE.C.8b Solve systems of two linear equations in two variables algebraically, and estimate solutions by graphing the equations. Solve simple cases by inspection. For example, 3x + 2y = 5 and 3x + 2y = 6 have no solution because 3x + 2y cannot simultaneously be 5 and 6. • 8.EE.C.8c Solve real-world and mathematical problems leading to two linear equations in two variables. For example, given coordinates for two pairs of points, determine whether the line through the first pair of points intersects the line through the second pair. Define, evaluate, and compare functions. • 8.F.A.1 Understand that a function is a rule that assigns to each input exactly one output. The graph of a function is the set of ordered pairs consisting of an input and the corresponding output. • 8.F.A.2 Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a linear function represented by a table of values and a linear function represented by an algebraic expression, determine which function has the greater rate of change. • 8.F.A.3 Interpret the equation y = mx + b as defining a linear function, whose graph is a straight line; give examples of functions that are not linear. For example, the function A = s^2 giving the area of a square as a function of its side length is not linear because its graph contains the points (1,1), (2,4) and (3,9), which are not on a straight line. Use functions to model relationships between quantities. • 8.F.B.4 Construct a function to model a linear relationship between two quantities. Determine the rate of change and initial value of the function from a description of a relationship or from two (x, y) values, including reading these from a table or from a graph.
Interpret the rate of change and initial value of a linear function in terms of the situation it models, and in terms of its graph or a table of values. • 8.F.B.5 Describe qualitatively the functional relationship between two quantities by analyzing a graph (e.g., where the function is increasing or decreasing, linear or nonlinear). Sketch a graph that exhibits the qualitative features of a function that has been described verbally. Grade 8 Geometry Understand congruence and similarity using physical models, transparencies, or geometry software. • 8.G.A.1 Verify experimentally the properties of rotations, reflections, and translations: • 8.G.A.1a Lines are taken to lines, and line segments to line segments of the same length. • 8.G.A.1b Angles are taken to angles of the same measure. • 8.G.A.1c Parallel lines are taken to parallel lines. • 8.G.A.2 Understand that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; given two congruent figures, describe a sequence that exhibits the congruence between them. • 8.G.A.3 Describe the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates. • 8.G.A.4 Understand that a two-dimensional figure is similar to another if the second can be obtained from the first by a sequence of rotations, reflections, translations, and dilations; given two similar two-dimensional figures, describe a sequence that exhibits the similarity between them. • 8.G.A.5 Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. For example, arrange three copies of the same triangle so that the sum of the three angles appears to form a line, and give an argument in terms of transversals why this is so. Understand and apply the Pythagorean Theorem. • 8.G.B.6 Explain a proof of the Pythagorean Theorem and its converse. • 8.G.B.7 Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions. • 8.G.B.8 Apply the Pythagorean Theorem to find the distance between two points in a coordinate system. Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres. • 8.G.C.9 Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems. Standards for Mathematical Practice MP1 Make sense of problems and persevere in solving them. Mathematically proficient students start by explaining to themselves the meaning of a problem and looking for entry points to its solution. They analyze givens, constraints, relationships, and goals. They make conjectures about the form and meaning of the solution and plan a solution pathway rather than simply jumping into a solution attempt. They consider analogous problems, and try special cases and simpler forms of the original problem in order to gain insight into its solution. They monitor and evaluate their progress and change course if necessary. Older students might, depending on the context of the problem, transform algebraic expressions or change the viewing window on their graphing calculator to get the information they need. 
Mathematically proficient students can explain correspondences between equations, verbal descriptions, tables, and graphs or draw diagrams of important features and relationships, graph data, and search for regularity or trends. Younger students might rely on using concrete objects or pictures to help conceptualize and solve a problem. Mathematically proficient students check their answers to problems using a different method, and they continually ask themselves, “Does this make sense?” They can understand the approaches of others to solving complex problems and identify correspondences between different approaches. MP2 Reason abstractly and quantitatively. Mathematically proficient students make sense of quantities and their relationships in problem situations. They bring two complementary abilities to bear on problems involving quantitative relationships: the ability to decontextualize—to abstract a given situation and represent it symbolically and manipulate the representing symbols as if they have a life of their own, without necessarily attending to their referents—and the ability to contextualize, to pause as needed during the manipulation process in order to probe into the referents for the symbols involved. Quantitative reasoning entails habits of creating a coherent representation of the problem at hand; considering the units involved; attending to the meaning of quantities, not just how to compute them; and knowing and flexibly using different properties of operations and objects. MP3 Construct viable arguments and critique the reasoning of others. Mathematically proficient students understand and use stated assumptions, definitions, and previously established results in constructing arguments. They make conjectures and build a logical progression of statements to explore the truth of their conjectures. They are able to analyze situations by breaking them into cases, and can recognize and use counterexamples. They justify their conclusions, communicate them to others, and respond to the arguments of others. They reason inductively about data, making plausible arguments that take into account the context from which the data arose. Mathematically proficient students are also able to compare the effectiveness of two plausible arguments, distinguish correct logic or reasoning from that which is flawed, and—if there is a flaw in an argument—explain what it is. Elementary students can construct arguments using concrete referents such as objects, drawings, diagrams, and actions. Such arguments can make sense and be correct, even though they are not generalized or made formal until later grades. Later, students learn to determine domains to which an argument applies. Students at all grades can listen or read the arguments of others, decide whether they make sense, and ask useful questions to clarify or improve the arguments. MP4 Model with mathematics. Mathematically proficient students can apply the mathematics they know to solve problems arising in everyday life, society, and the workplace. In early grades, this might be as simple as writing an addition equation to describe a situation. In middle grades, a student might apply proportional reasoning to plan a school event or analyze a problem in the community. By high school, a student might use geometry to solve a design problem or use a function to describe how one quantity of interest depends on another. 
Mathematically proficient students who can apply what they know are comfortable making assumptions and approximations to simplify a complicated situation, realizing that these may need revision later. They are able to identify important quantities in a practical situation and map their relationships using such tools as diagrams, two-way tables, graphs, flowcharts and formulas. They can analyze those relationships mathematically to draw conclusions. They routinely interpret their mathematical results in the context of the situation and reflect on whether the results make sense, possibly improving the model if it has not served its purpose. MP5 Use appropriate tools strategically. Mathematically proficient students consider the available tools when solving a mathematical problem. These tools might include pencil and paper, concrete models, a ruler, a protractor, a calculator, a spreadsheet, a computer algebra system, a statistical package, or dynamic geometry software. Proficient students are sufficiently familiar with tools appropriate for their grade or course to make sound decisions about when each of these tools might be helpful, recognizing both the insight to be gained and their limitations. For example, mathematically proficient high school students analyze graphs of functions and solutions generated using a graphing calculator. They detect possible errors by strategically using estimation and other mathematical knowledge. When making mathematical models, they know that technology can enable them to visualize the results of varying assumptions, explore consequences, and compare predictions with data. Mathematically proficient students at various grade levels are able to identify relevant external mathematical resources, such as digital content located on a website, and use them to pose or solve problems. They are able to use technological tools to explore and deepen their understanding of concepts. MP6 Attend to precision. Mathematically proficient students try to communicate precisely to others. They try to use clear definitions in discussion with others and in their own reasoning. They state the meaning of the symbols they choose, including using the equal sign consistently and appropriately. They are careful about specifying units of measure, and labeling axes to clarify the correspondence with quantities in a problem. They calculate accurately and efficiently, express numerical answers with a degree of precision appropriate for the problem context. In the elementary grades, students give carefully formulated explanations to each other. By the time they reach high school they have learned to examine claims and make explicit use of definitions. MP7 Look for and make use of structure. Mathematically proficient students look closely to discern a pattern or structure. Young students, for example, might notice that three and seven more is the same amount as seven and three more, or they may sort a collection of shapes according to how many sides the shapes have. Later, students will see 7 × 8 equals the well remembered 7 × 5 + 7 × 3, in preparation for learning about the distributive property. In the expression x^2 + 9x + 14, older students can see the 14 as 2 × 7 and the 9 as 2 + 7. They recognize the significance of an existing line in a geometric figure and can use the strategy of drawing an auxiliary line for solving problems. They also can step back for an overview and shift perspective.
They can see complicated things, such as some algebraic expressions, as single objects or as being composed of several objects. For example, they can see 5 – 3(x – y)^2 as 5 minus a positive number times a square and use that to realize that its value cannot be more than 5 for any real numbers x and y. MP8 Look for and express regularity in repeated reasoning. Mathematically proficient students notice if calculations are repeated, and look both for general methods and for shortcuts. Upper elementary students might notice when dividing 25 by 11 that they are repeating the same calculations over and over again, and conclude they have a repeating decimal. By paying attention to the calculation of slope as they repeatedly check whether points are on the line through (1, 2) with slope 3, middle school students might abstract the equation (y – 2)/(x – 1) = 3. Noticing the regularity in the way terms cancel when expanding (x – 1)(x + 1), (x – 1)(x^2 + x + 1), and (x – 1)(x^3 + x^2 + x + 1) might lead them to the general formula for the sum of a geometric series. As they work to solve a problem, mathematically proficient students maintain oversight of the process, while attending to the details. They continually evaluate the reasonableness of their intermediate results. Source: Common Core State Standards Initiative (http://www.corestandards.org/Math)
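One of the probability standards listed earlier, 7.SP.C.8c, suggests using simulation to answer questions like its type A blood donor example. Here is a minimal Python sketch of such a simulation; the 40% figure and the "at least 4 donors" question come from the standard's own example, while the code, its function name, and the number of trials are only an illustration and are not part of the standards.

import random

def donors_needed(p_type_a=0.40):
    """Simulate donors one at a time until a type A donor appears."""
    count = 0
    while True:
        count += 1
        if random.random() < p_type_a:
            return count

trials = 100_000
at_least_four = sum(donors_needed() >= 4 for _ in range(trials))
print(at_least_four / trials)   # should be close to 0.6**3 = 0.216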
Astrophysics and Algorithms: A DIMACS Workshop on Massive Astronomical Data Sets

The MACHO Project
David Bennett, University of Notre Dame

The MACHO Project has been taking data since late 1992, and has now accumulated more than 70,000 dual color images of 1/2 square degree fields of the Magellanic Clouds and the Galactic bulge for a total of 5.3 Tb of raw image data. The observed fields are rather crowded with an average density of about 1 million detected stars per square degree. More than 80% of the data which has been taken to date has been reduced with our SoDOPHOT point spread function fitting photometry program. The reduced photometry database contains 50 billion individual photometric measurements and occupies 500 Gb of storage space which is split between rotating disk and a robotic tape library. The incoming data of up to 7 Gb of image data per night is reduced within a few hours of data taking so that new gravitational microlensing events can be discovered and announced in progress. The photometry database is regularly accessed both by the alert system, which requires rapid access to the lightcurves of a few stars, and by complete analysis passes, which must sequentially access several hundred Gb of reduced data. Some of the computational problems that the MACHO Project has faced (and solved) will be discussed.

The USNO PMM Program
Dave Monet, US Naval Observatory Flagstaff Station

At last count, the U.S. Naval Observatory's Precision Measuring Machine has digitized and processed 7,560,089,606,712 pixels from the Palomar, ESO, AAO, UKST, and Lick photographic sky survey plates, and 52-byte records have been computed for each of 6,648,074,159 detections. The pixel database occupies 1866 rolls of 8-mm tape (only 5,199,639,984,384 pixels were saved), and the detection database occupies 581 CD-ROMs housed in two jukeboxes. The PMM program's first catalog was USNO-A1.0 (see http://www.usno.navy.mil/pmm for details), and completion of its known tasks will take another two years. The presentation will include a brief description of the PMM, some of the lessons learned during the first 3.5 years of operation, and a discussion of the problems anticipated in going from G-rated products such as USNO-A to X-rated products such as public access to the pixel and detection databases.

DSS-II and GSC-II: STScI All-Sky Image and Catalog Databases
Barry M. Lasker, Gretchen R. Greene, Mario J. Lattanzi, Brian J. McLean, and Antonio Volpicelli, ST ScI and OATo

A program of digitizing photographic sky survey plates (DSS-II), now quite close to completion, is approaching its final size, a 5 Tbyte collection of 1.1 Gbyte plate scans that cover the entire sky in 42 square degree fields. A set of image processing and object recognition tools applied to these data then results in a list of 4E9 (estimated) objects constituting the second Guide Star Catalog (GSC-II), which consists of positions, proper motions, magnitudes, and colors for each object. In order to preserve generality in the exploitation of these data, we maintain the connection between the images (plate scans) and the GSC-II catalog objects by associating the plate-calibration data (astrometry, photometry, classification) in FITS-like header structures pertinent to each plate.
Internally, all the GSC-II data, i.e., both the raw plate measures and the calibrated astronomical results, are stored in a database called COMPASS (Catalog of Objects and Measured Parameters from All-Sky Surveys). COMPASS, an object-oriented system built on the Objectivity (tm) DBMS, has an expected final size of 4 Tbytes, is structured for identifying systematic calibration effects so as to optimize the calibrations, and is organized on the sky with the hierarchical triangulated mesh developed by the SDSS Archive team. COMPASS is also used to support consistent object naming between plates, as well as cross-matching with other optical surveys and with data from other wavebands. A much smaller "export" catalogue, in ESO SkyCat format (about 100 Gbyte), will also be produced.

The Two Micron All Sky Survey
Carol Lonsdale, IPAC, JPL/Caltech

The 2 Micron All Sky Survey (2MASS) project, a collaboration between the University of Massachusetts (Dr. Mike Skrutskie, PI) and the Infrared Processing and Analysis Center, JPL/Caltech, funded primarily by NASA and the NSF, will scan the entire sky utilizing two new, highly automated 1.3m telescopes at Mt. Hopkins, AZ and at CTIO, Chile. Each telescope simultaneously scans the sky at J, H and Ks with a three channel camera using 256x256 arrays of HgCdTe detectors to detect point sources brighter than about 1 mJy (to SNR=10), with a pixel size of 2.0 arcseconds. The data rate is $\sim 19$ Gbyte per night, with a total processed data volume of 13 Tbytes of images and 0.5 Tbyte of tabular data. The 2MASS data is archived nightly into the Infrared Science Information System at IPAC, which is based on an Informix database engine, judged at the time of purchase to have the best commercially available indexing and parallelization flexibility, and a 5 Tbyte-capacity RAID multi-threaded disk system with multi-server shared disk architecture. I will discuss the challenges of processing and archiving the 2MASS data, and of supporting intelligent query access to them by the astronomical community across the net, including possibilities for cross-correlation with other remote data sets.

The FIRST Radio Survey
Richard L. White (STScI), Robert H. Becker (UC-Davis & LLNL/IGPP), David J. Helfand (Columbia)

The FIRST (Faint Images of the Radio Sky at Twenty-cm) survey began in 1993 and has to date covered 4800 square degrees of the north and south Galactic caps. The NRAO Very Large Array is used to create 1.4 GHz images with a resolution of 5.4 arcsec and a 5-sigma sensitivity of 1 mJy for point sources. Both the sensitivity and spatial resolution are major improvements over previous radio surveys. The FIRST survey has some unusual characteristics compared with most other surveys discussed at this workshop. Our data volume is not so overwhelming (the total image data currently stands at 0.6 Tbytes), but the data processing involved in constructing the final images is computationally intensive. It requires about 17 hours of CPU time on a Sparc-20 processor to process a square degree of sky (only 20 minutes of VLA observing); the production of the current image database consumed 9 years of Sparc-20 processing time! The data reduction for the FIRST survey has been carried out on a shoestring. The imaging pipeline was developed by 2 to 3 people and has been operated by a single person (RHB) for practically the entire project. It consequently must be highly automated and robust, which is non-trivial for radio imaging.
Finally, the FIRST survey is being carried out using a national telescope facility. This makes some things easier (we did not have to build a telescope) and some harder (we must fight continually to maintain our observing time allocation). Both images and catalogs from the FIRST survey are released essentially immediately after their construction. They are available on the web at http://sundog.stsci.edu

Barry F. Madore
NASA/IPAC Extragalactic Database, Infrared Processing and Analysis Center, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California

NED has been operating in the public domain since 1990. Originally composed of a merger of a few well known catalogs of galaxies containing around 30,000 entries each, the object database has now grown to over 750,000, and will soon exceed 3,000,000 extragalactic objects. The problems and challenges unique to a heterogeneous scientific database will be addressed. Design of a successful user interface will be discussed. And the existing shortcomings of NED will be reviewed. The question of doing original and meaningful research with a literature-based database in extragalactic astronomy will be critically reviewed, and plans for upgrading NED in the near future will be described.

Automated Galaxy Classification in Large Sky Surveys
S. C. Odewahn

Current efforts to perform automatic galaxy classification using artificial neural network image classifiers are reviewed. For both DPOSS Schmidt plate and WFPC2 CCD imagery, a variety of two-dimensional photometric parameter spaces produce a segregation by Hubble type. Through the use of hidden node layers, an artificial neural network is capable of mapping complicated, highly nonlinear data spaces. This powerful technique is used to map a multivariate photometric parameter space to the revised Hubble system of galaxy classification. I discuss a new morphological classification approach using Fourier image models to identify barred and ringed spiral systems. Multi-color photometric and morphological type catalogs derived from large image data sets provided by new ground and space-based surveys will be used to compute wavelength-dependent galaxy number counts (see HST example below in Panel B) over a large range in apparent magnitude and provide an observational basis for studies of galaxy formation and evolution. Also see http://astro.caltech.edu/~sco/sco1/talks/Talks.html

The Sloan Digital Sky Survey and its Science Database
Alex Szalay, Johns Hopkins University

Astronomy is about to undergo a major paradigm shift, with data sets becoming larger, and more homogeneous, for the first time designed in the top-down fashion. In a few years it may be much easier to ``dial-up'' a part of the sky when we need a rapid observation than to wait for several months to access a (sometimes quite small) telescope. With several projects in multiple wavelengths under way, like the SDSS, 2MASS, GSC-2, POSS2, ROSAT, FIRST and DENIS projects, each surveying a large fraction of the sky, the concept of having a ``digital sky,'' with multiple, TByte-size databases interoperating in a seamless fashion, is no longer an outlandish idea. More and more catalogs will be added and linked to the existing ones, query engines will become more sophisticated, and astronomers will have to be just as familiar with mining data as with observing on telescopes.
The Sloan Digital Sky Survey is a project to digitally map about $1/2$ of the Northern sky in five filter bands from UV to the near IR, and is expected to detect over 200 million objects in this area. Simultaneously, redshifts will be measured for the brightest 1 million galaxies. The SDSS will revolutionize the field of astronomy, increasing the amount of information available to researchers by several orders of magnitude. The resultant archive that will be used for scientific research will be large (exceeding several Terabytes) and complex: textual information, derived parameters, multi-band images, and spectra. The catalog will allow astronomers to study the evolution of the universe in greater detail and is intended to serve as the standard reference for the next several decades. As a result, we felt the need to provide an archival system that would simplify the process of ``data mining'' and shield researchers from any underlying complex architecture. In our efforts, we have invested a considerable amount of time and energy in understanding how large, complex data sets can be explored.

Mathematical Methods for Mining in Massive Data Sets
Helene E. Kulsrud, Center for Communications Research - Princeton / Institute for Defense Analyses

With the advent of higher bandwidth and faster computers, distributed data sets in the petabyte range are being collected. The problem of obtaining information quickly from such data bases requires new and improved mathematical methods. Parallel computation and scaling issues are important areas of research. Techniques such as decision trees, vector-space methods, Bayesian and neural nets have been utilized. A short description of some successful methods and the problems to which they have been applied will be presented.

Trends in High-End Computing and Storage Technologies: Implications for Astronomical Data Analysis
Tom Prince

In certain areas of astronomical research, advances in computing and information technologies will determine the shape and scope of future research activities. I will review trends and projections for computing, storage, and networking technologies, and explore some of the possible implications for astronomical research. I will discuss several examples of technology-enabled data analysis projects including the Digital Sky project and the search for gravitational waves by LIGO.

Inverse Problems in Helioseismology
Sarbani Basu, Institute for Advanced Study, Princeton, NJ

Helioseismology is the study of the Sun using data obtained by monitoring solar oscillations. The data consist of frequencies of normal modes which are most commonly described by spherical harmonics and have three `quantum' numbers associated with them -- the radial order $n$, the degree $\ell$ and the azimuthal order $m$. In the absence of asphericities, all modes with the same $n$ and $\ell$ have the same frequency and the frequency is determined by the spherically symmetric structure. Asymmetry is introduced mainly by rotation and causes the $(n,\ell)$ multiplet to ``split'' into $2\ell +1$ components. To date, the frequencies of about $10^6$ modes have been measured. These therefore provide $10^6$ observational constraints in addition to the usual constraints of mass, radius and luminosity. However, no solar model constructed so far has been able to reproduce the observed frequencies to within errors. Hence, the interior of the Sun is studied by inverting the observed frequencies. There are essentially two types of inversion problems in helioseismology.
The first is inverting for rotation, which is a linear inversion problem, and the second is inversion to obtain solar structure, which is not a linear problem and hence needs to be linearized before it can be solved. In this talk I shall describe some of the common methods used in helioseismic inversions and talk about some of the techniques used to reduce the problem to a manageable form -- both in terms of memory and time required.

"Fast" Statistical Methods for Interpolation and Model Fitting in One-Dimensional Data
Bill Press, Harvard University

There exist several "fast" (in the sense of linear running time) methods for applying the full machinery of linear prediction and global linear fitting to large one-dimensional data sets such as time series or spectra. These methods make practical calculations which would otherwise have been rejected for their $N^3$ running times. This talk will review the status of these methods and give applications. The software for applying these methods is available, free, on the web.

Science With Digital POSS-II (DPOSS)
S. G. Djorgovski

The ongoing processing of the digitized POSS-II (DPOSS) will result in a catalog containing over 50 million galaxies and over 2 billion stellar objects, complete down to the equivalent limiting magnitude of B ~ 22 mag, over the entire northern sky. The creation, maintenance, and effective scientific exploration of this huge dataset has posed non-trivial technical challenges. A great variety of scientific projects will be possible with this vast new data base, including studies of the large-scale structure in the universe and of the Galactic structure, automatic optical identifications of sources from radio through x-ray bands, generation of objectively defined catalogs of clusters and groups of galaxies, generation of statistically complete catalogs of galaxies to be used in redshift surveys, searches for high-redshift quasars and other active objects, searches for variable or extreme-color objects, etc.

Efficient Width Computation of High-Dimensional Point Sets
Andreas Brieden, Technische Universitaet Muenchen

In analyzing high-dimensional point sets several geometric quantities may play an important role. E.g., assume that a point set originally located in an $(n-1)$-dimensional hyperspace can only be measured, by the influence of noise, to be in an $n$-dimensional space. Then the knowledge of the (Euclidean) width of the convex hull of this point set and also of a width-generating hyperplane can be used to project the data back into a proper hyperspace. Iterating this process it is also possible to project point sets to lower-dimensional subspaces. In this talk, efficient approximation algorithms for the width-computation are presented that turn out to be asymptotically optimal (with respect to a standard computing model in computational complexity). The presented approach can be extended to other quantities like diameter, inradius, circumradius and the norm-maximum in $l_p$-spaces. Joint work with Peter Gritzmann, Technische Universitaet Muenchen, and Victor Klee, University of Washington, Seattle.

Shapefinders: a New Shape Diagnostic for Large-Scale Structure
Sergei F. Shandarin, Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045

We construct a set of shape-finders which determine shapes of compact surfaces (iso-density surfaces in galaxy surveys or N-body simulations) without fitting them to ellipsoidal configurations as done earlier.
The new indicators arise from simple, geometrical considerations and are derived from fundamental properties of a surface such as its volume, surface area, integrated mean curvature and connectivity characterized by the Genus. These `Shapefinders' could be used to diagnose the presence of filaments, pancakes and ribbons in large scale structure. Their lower-dimensional generalization may be useful for the study of two-dimensional distributions such as temperature maps of the Cosmic Microwave Background.

Challenges in Analysing Future CMB Space Missions
Francois Bouchet, Institut d'Astrophysique, Paris

The planned CMB missions (MAP/NASA/circa 2000 and PLANCK/ESA/circa 2005) will produce full sky maps of the microwave sky in different frequencies with resolutions better than half a degree. The optimal extraction of information (in particular cosmological) from the quite large database of "timelines" poses a variety of problems which I will survey. I will also describe some of the partial answers obtained so far in the context of the PLANCK scientific preparation.

CMB and LSS Power Spectrum Analysis
Max Tegmark, Institute for Advanced Studies

I describe numerical challenges involved in analyzing cosmic microwave background (CMB) and large-scale structure (LSS) data sets. These include mapmaking (regularized linear inversions), power spectrum estimation, Karhunen-Loeve data compression and computation of the Fisher information matrix for cosmological parameters.

An Efficient and Stable Fast Spherical Transform Algorithm
Dan Rockmore, Dartmouth College

In this talk we explain and present an implementation of a fast spherical harmonic expansion algorithm. Asymptotically, and in exact arithmetic, we compute exactly a full spherical transform of a function with harmonics of at most order $N$ in $O(N^2 (\log N)^2)$ operations vs. $O(N^3)$ required by direct computation. We require a similar number of operations to perform the inverse transform which goes from Fourier coefficients to sample values. The key component of the fast spherical transform algorithm is the fast Legendre transform which, assuming a precomputed data structure of size $O(N \log N)$, can be performed in $O(N (\log N)^2)$ operations. This asymptotic result is achieved by novel application of the three-term recurrence relation which Legendre functions satisfy. These are general techniques applicable to any set of orthogonal polynomials. Experimental results from our implementation on an HP Exemplar X-Class computer, model SPP2000, show a significant speed-up at large problem sizes, with little degradation in numerical stability. There is also evidence which suggests that similar performance should be possible on an SGI Origin. This is joint work with D. M. Healy (Dartmouth College), P. Kostelec (Dartmouth College) and S. S. B. Moore (GTE/BBN).

A Fast Method to Bound the CMB Power Spectrum Likelihood Function
Julian Borrill, Center for Particle Astrophysics, Berkeley, CA

As the Cosmic Microwave Background (CMB) radiation is observed to higher and higher angular resolution, the size of the resulting datasets becomes a serious constraint on their analysis. In particular, current algorithms to determine the location of, and curvature at, the peak of the power spectrum likelihood function from a general $N_{p}$-pixel CMB sky map scale as $O(N_{p}^{3})$.
Moreover, the current best algorithm --- the quadratic estimator --- is a Newton-Raphson iterative scheme and so requires a `sufficiently good' starting point to guarantee convergence to the true maximum. Here we present an algorithm to calculate bounds on the likelihood function at any point in parameter space using Gaussian quadrature and show that, judiciously applied, it scales as only $O(N_{p}^{7/3})$.

Approaches to Gamma-Ray Burst Classification
Jon Hakkila (Mankato State U.), David J. Haglin (Mankato State U.), Richard J. Roiger (Mankato State U.), Robert S. Mallozzi (U. Alabama Huntsville), Geoffrey N. Pendleton (U. Alabama Huntsville), and Charles A. Meegan (NASA/MSGC)

An understanding of gamma-ray burst (grb) physics is dependent upon interpreting the large body of grb spectral and temporal data. Although many grb spectral and temporal attributes have been identified by various researchers, considerable disagreement exists as to the physical meaning and relative importance of each. We present preliminary but promising attempts to classify grbs using data mining techniques and artificial intelligence classification algorithms.

Multiscale Methods in Astronomical Image Processing, Cluster Analysis, and Information Retrieval
Fionn Murtagh, University of Ulster

We will survey multiresolution methods - discrete wavelet transforms and other multiscale transforms - in astronomical image processing and data analysis. Objectives include: noise filtering, deconvolution, visualization, image registration, object detection and image compression. A range of examples will be discussed. The extension of this work to cater for detection of point pattern clusters will be described. Finally, some very recent applications of this approach to large hypertext dependence arrays will be presented.

[1] J-L Starck, F Murtagh and A Bijaoui, Image and Data Analysis: The Multiscale Approach, Cambridge University Press, to appear about April 1998.
[2] F Murtagh, "A palette of multiresolution applications", http://hawk.infm.ulst.ac.uk:1998/multires

A Multiscale Vision Model and Applications to Astronomical Image and Data Analyses
A. Bijaoui, E. Slezak, and B. Vandame, Observatoire de la Cote d'Azur, B.P. 229, 06304 Nice Cedex 4, France

Much research has been carried out on the automated identification of astrophysical sources, and their relevant measurements. Some vision models have been developed for this task, their use depending on the image content. We have developed a multiscale vision model (MVM) (BR95) well suited for analyzing complex structures such as interstellar clouds, galaxies, or clusters of galaxies. Our model is based on a redundant wavelet transform. For each scale we detect significant wavelet coefficients by application of a decision rule based on their probability density functions (PDF) under the hypothesis of a uniform distribution. In the case of Poisson noise, this PDF can be determined from the autoconvolution of the wavelet function histogram (SLB93). We may also apply Anscombe's transform, scale by scale, in order to take into account the integrated number of events at each scale (FSB98). Our aim is to compute an image of all detected structural features. MVM allows us to build oriented trees from neighbouring significant wavelet coefficients. Each tree is also divided into subtrees taking into account the maxima along the scale axis. This allows us to identify objects in scale space, and then to restore their images by classical inverse methods.
This model works only if the sampling is correct at each scale. This is not generally the case for orthogonal wavelets, so we apply the so-called a trous algorithm (BSM94) or a specific pyramidal one (RBV98). This allows us to extract superimposed objects of different sizes, and it gives for each of them a separate image, from which we can obtain position, flux and pattern parameters. We have applied these methods to different kinds of images: photographic plates, CCD frames or X-ray images. We have only to change the statistical rule for extracting significant coefficients to adapt the model from one image class to another. We have also applied this model to extract hierarchically distributed clusters or to identify regions devoid of objects from galaxy counts.

[BR95] A. Bijaoui and F. Rue. A multiscale vision model adapted to the astronomical images. Signal Processing, 46:345-362, 1995.
[BSM94] A. Bijaoui, J.L. Starck, and F. Murtagh. Restauration des images multi-echelles par l'algorithme a trous. Traitement du Signal, 11:229-243, 1994.
[FSB98] D. Fadda, E. Slezak, A. Bijaoui. Density estimation with non-parametric methods. Astron. and Astrophys. Suppl. Ser. 127, pp. 335-352, 1998.
[RBV98] F. Rue, A. Bijaoui, B. Vandame. A Pyramidal Vision Model for astronomical images. I.E.E.E. Image Processing, submitted 1997.
[SLB93] E. Slezak, V. de Lapparent, A. Bijaoui. Objective Detection of voids and High density structures in the first CfA redshift survey slice. Ap. J. 409, pp. 517-529, 1993.

Analysing Very Large Data Sets From Cosmological Simulations
Renyue Cen, Princeton University Observatory

Current large scale cosmological simulations generate data of order 100 GB per simulation. Post-simulation analyses of such large data sets pose a severe challenge to simulators. We will present some methods that we use to circumvent the problem of limited RAM size (assuming CPU time permits).
Multiplying Decimals

When multiplying a decimal number by another decimal number, it again helps to be reminded what they'd look like in fraction form.

Sample Problem

0.8 x 0.4 = (8/10) x (4/10) = 32/100 = 0.32

As with addition and subtraction, converting decimals to fractions and back again is pretty inefficient. Thankfully, just as with addition and subtraction, we can get around that. Just plow right through those "detour" signs.

In the example 0.8 x 0.4, we multiplied two decimals with one decimal place each. When we wrote the numbers as fractions, we were multiplying two fractions that each had 10 in the denominator. The product of those fractions gave us a denominator of 100, so the corresponding decimal had two decimal places. Once again, we're just counting zeros. Better than counting crows.

Suppose a, b and c are three decimal numbers. How do you figure out the number of decimal places in the product of a x b x c? (Add the number of decimal places in a to the number of decimal places in b to the number of decimal places in c.) Yes, unfortunately, you will need to know how to multiply three numbers together. Curse those three-dimensional shapes.

Multiplying Decimals Practice:

Given 0.4 and 0.005, write each number as a fraction. Find the denominator of the product of these two fractions.
How many decimal places will there be in the final product of 0.4 × 0.005?
How many decimal places will there be in the product of 0.005 × 0.1237?
How many decimal places will there be in the product of 0.3 × 0.87?
Solve 0.245 x 0.02 x 0.9 x 0
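If you want to check the counting rule on the practice numbers above, here is a small sketch using Python's fractions and decimal modules. The specific factors are taken from the practice list; the code, and the idea of checking the rule this way, are only an illustration and not part of the original lesson.

from decimal import Decimal
from fractions import Fraction

def decimal_places(s):
    """Number of digits after the decimal point in a string like '0.005'."""
    return len(s.split(".")[1]) if "." in s else 0

factors = ["0.4", "0.005"]          # try ["0.245", "0.02", "0.9"] as well
product = Fraction(1)
places = 0
for f in factors:
    product *= Fraction(Decimal(f))  # multiply the fraction forms
    places += decimal_places(f)      # add up the decimal places

print(product)   # 1/500 -- the denominator divides 10**places
print(places)    # 1 + 3 = 4 decimal places
print(Decimal("0.4") * Decimal("0.005"))   # 0.0020 -- four decimal places, as predicted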
Need help understanding Conway's Game of Life

I'm trying to write code for Conway's Game of Life to determine the immediate next pattern for a given pattern of cells, but I'm not sure whether I really understand the steps. So for example consider the below toad pattern. The cells marked x are alive and those marked - are dead.

-xxx
xxx-

The above should transpose into the following:

--x-
x--x
x--x
-x--

The rules as we know are:
1. A live cell with less than 2 or more than 3 neighbours dies
2. A live cell with exactly 2 or 3 neighbours survives
3. A dead cell with exactly 3 neighbours comes to life.
So, the first cell in the input c[0,0] is - and it has 3 live neighbours (one horizontally, vertically and diagonally each), so it should be alive in the output, but it's not. Can someone please explain?

What do you mean "it should be alive in the output, but it's not"? Do you have a bug in your program or you don't understand how it can work at all? – Peter Lawrey Jan 17 '12 at 13:46
Perhaps a bug in your code? Why does the first state in your post have 2 rows while the second state has 4 rows? You haven't provided any code, so how is anyone to know what's wrong with your code? :) – Nerdtron Jan 17 '12 at 13:47
Thanks for the reply Peter and Nerdtron. I haven't started coding. I'm just trying to understand how the above transposition takes place. The input has two rows but the output 4, and I'm not sure how that can happen. – Jim Jan 17 '12 at 13:55

3 Answers

The middle two rows in your output are the ones that correspond to the two rows in your input. The upper left cell in the input corresponds to the second row, extreme left, in the output, and as you can see, it's alive.

Thanks for the explanation Ernest. I understand your point. But if the middle two rows of the output correspond to the two rows of my input, then how are the uppermost and lowermost rows being generated? – Jim Jan 17 '12 at 13:53
Specifically, why does the output have 4 rows when the input has only two? – Jim Jan 17 '12 at 13:56
It is implied that the eight cells in the middle are embedded in an infinite field of dead cells; you always have to do the computation on such an implicit infinite field. In practice, since the "speed of light" in Life is just one cell per generation, it's a reasonable approximation to just keep a border of one extra (not displayed) cell all around your field. You won't get the exactly correct result this way, but it's better than nothing. The larger field you use, and the more non-displayed rows you include, the more accurate your results. – Ernest Friedman-Hill Jan 17 '12 at 13:59
OK, I got it. It seems if a row has n cells the output has to be generated for an n x n grid. This makes sense now. Thank you :) – Jim Jan 17 '12 at 14:00
No, at least (n+1) x (n+1) to get 100% accurate results in the next generation. The one after that may be incorrect unless your field is (n+2) x (n+2), though -- and so on. – Ernest Friedman-Hill Jan 17 '12 at 14:02

It is alive in the output. It's right here:

-x--

The x in the first row is above the first row in the first output. The rules of Life assume an unbounded plane. If you want to call the top row of the first output 0, you can, but then the top row of the second output is -1.

Well it is. Your 2-line long input is the middle part of your 4-line output. I think when you look at it now you'll understand everything. Have you looked at least at wikipedia?
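For what it's worth, here is a minimal sketch of one generation step, written in Python since the question is language-agnostic. It is only an illustration of the rules discussed above (the set-of-live-cells representation and the function name are my own choices, not code from the thread); because the board is treated as unbounded, the 2-row toad really does produce a 4-row next phase.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (row, col) live cells."""
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth with exactly 3 neighbours; survival with 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Toad, phase A: '-xxx' / 'xxx-' as in the question
toad = {(0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2)}
print(sorted(step(toad)))   # cells span rows -1..2, i.e. four rows
```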
Bacteria Growth Rate

1. The problem statement, all variables and given/known data
A beaker contained 2000 bacteria. One hour later the beaker contained 2500 bacteria. What is the doubling time of the bacteria?

2. Relevant equations
rate = (distance)/(time)
Time to double = .693/((ln(1+r))^t)

3. The attempt at a solution
rate = 2500/2000
My biggest problem is trying to find the rate. I used this at first, but think it is giving me the wrong answer. I know how to finish the problem, I just need help finding the rate. Thanks!
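For reference, one common way to turn these numbers into a doubling time, assuming simple exponential growth (this is a generic sanity check, not an official solution from the thread): the hourly growth factor is 2500/2000 = 1.25, so the doubling time is ln 2 / ln 1.25 ≈ 3.1 hours.

```python
import math

growth_factor = 2500 / 2000                        # 1.25 per hour
doubling_time = math.log(2) / math.log(growth_factor)
print(round(doubling_time, 2))                     # about 3.11 hours
```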
La Marque Precalculus Tutor

...I really enjoy it and I always receive great feedback from my clients. I consider my client's grade as if it were my own grade, and I will do whatever it takes to make sure you get it, and at the same time make sure our sessions are easy and enjoyable. I can tutor almost any subject, but my spe...
38 Subjects: including precalculus, English, calculus, reading

...I am a graduate student at the University of Houston and a mathematics tutor at San Jacinto College. I have 3 to 4 years of experience in mathematics. I have many interests in mathematics.
16 Subjects: including precalculus, calculus, ACT Math, logic

Hello! My name is Ryan and I am a math teacher at Clear Brook High School. I have taught math for 5 years, and have tutored many high school and college level math students.
20 Subjects: including precalculus, Spanish, geometry, algebra 1

...I also have experience as a tutor with a large private learning center. I've helped countless students of all ages with math of all levels. I've also taught SAT and ACT prep.
34 Subjects: including precalculus, reading, English, chemistry

...S. Air Force Academy for 7 years. In addition I taught at the following universities: San Antonio College, University of Maryland, University of Colorado, Auburn University at Montgomery, AL.
11 Subjects: including precalculus, calculus, geometry, statistics
Every continuous function is homotopic to a locally Lipschitz one

I would like to know for which category/class/set of metric spaces the following holds: for any two metric spaces $X$, $Y$, for any continuous function $f:X\to Y$ there exists a locally Lipschitz continuous function $g:X\to Y$ which is homotopic to $f$.

EDIT: One could also ask for a class of metrizable topological spaces such that each one of them can be given a metric so that the above property holds. Actually, I am more interested in the underlying topological space than in the actual metric space. In general, the metric spaces I am considering are complete and weakly separable (there exists a sequence $(\phi_h)$ of $1$-Lipschitz functions such that for any two points $x, y$ we have $d(x,y)=\sup_h|\phi_h(x)-\phi_h(y)|$). I don't know if this is a known fact among experts or not; in that case, I apologize for the standard question and would ask only for a reference.

ADDENDUM: Although I also have an interest in the general question as it is posed above, I could try to highlight some classes of metrizable spaces I have particular interest in knowing whether they fulfill the request or not: manifolds, singular spaces (which singularities are allowed?), spaces which are manifolds outside a "small" (in some sense) set, compact manifolds of infinite dimension or manifolds modeled on some "nice" linear space (Banach, Hilbert, Fréchet, ...).

If $Y$ is a convex subset of a topological vector space, then any two maps $f,g$ from $X$ to $Y$ are homotopic, because the map $(x,t)\mapsto tf(x)+(1-t)g(x)$ is continuous; now a constant map is locally Lipschitz continuous. – R Salimi Apr 6 '13 at 20:04
Samuele: It should work if the domain is a metric simplicial complex (or an Alexandrov space) and the range is CAT(k) with $k<\infty$. In general, it would be good if you were more specific about the classes of metric spaces you are interested in. – Misha Apr 7 '13 at 4:50
Well, I assume my metric space to be complete and with a weak property of separability (for any two points $x, y$ there exists a sequence of $1$-Lipschitz maps $(\phi_h)$ such that $d(x,y)=\sup_h |\phi_h(x)-\phi_h(y)|$). But obviously I don't expect that every one of these spaces has a metric such that my request is satisfied. I think I could ask the following: is it true for manifolds? Does it remain true if we allow singularities? Which ones? Is it true for infinite-dimensional manifolds (maybe compact)? For Banachian or Hilbertian compact manifolds, at least? – Samuele Apr 7 '13 at 6:45
Samuele: Your last condition is satisfied for all metric spaces, since you can use the distance function to x as your 1-Lipschitz function. Riemannian manifolds and manifolds with singularities belong to the class from my comment, provided the dimension of the domain is finite; the dimension of the range could be infinite. – Misha Apr 7 '13 at 13:31
Samuele: If you have a compact manifold modeled on a Banach space, then the Banach space has to be finite-dimensional. – Misha Apr 7 '13 at 13:48

2 Answers

A modest start. Consider two finite geometric simplicial complexes with reasonable metrics, e.g. inherited from the ambient Euclidean (or Banach) space (where simplices are affine). Then every continuous function $f$ between them is uniformly approximated by the simplicial maps of iterated barycentric subdivisions of the first complex into the second complex. All these simplicial maps are Lipschitz.
When the approximation is close enough to $f$, then it is homotopic to $f$. This gives a positive answer to your question for finite geometric simplicial complexes.

REMARK 0. For the sake of obtaining a Lipschitz map homotopic to a given continuous map one does not need to subdivide the second complex.

On the other hand, it is not difficult to provide two metric functions (distance functions) for the unit circle $S^1$ (thus let's talk about two metric spaces anyway) such that the identity map from one of them to the other is not homotopic to any locally Lipschitz function. Indeed, there will not exist any locally Lipschitz function at all (not even on the inverse image of any non-empty open set) from the first space onto the second one (under the fixed but properly selected metric functions; the first one can be the standard metric).

REMARK 1. Instead of $S^1$ we could consider a space consisting of a convergent sequence and its limit, endowed with two distance functions such that the identity is not Lipschitz (at the limit point). The only map homotopic to the identity is the identity, hence another instance of the negative answer. But $S^1$ is nicer :-)

First of all, thank you. So, finite (geometric) simplicial complexes work. Why do you need them to be finite? About the counterexample, that's interesting! But I was kind of expecting something like that: that's why I asked for a class of metric spaces, which come together with their distances. Another way to put the question could be to ask for a class of topological spaces which can be endowed with a distance (inducing their topology, hence metrizable spaces) so that the property holds. Could it be the case that CW-complexes do work? – Samuele Apr 7 '13 at 2:01

Such approximation is possible under some mild assumptions about domain and range. For the domain you want the structure of a finite-dimensional metric simplicial complex of locally bounded geometry. For example, a Riemannian manifold or Alexandrov space would do. For the target you should impose some conditions implying local linear contractibility; for instance, a space which is locally CAT(k), where $k<\infty$, would suffice. The proof is based on barycentric maps of simplices, which you can find in the paper of Bruce Kleiner, "The local structure of length spaces of curvature bounded above", Math. Z. 1999.

The construction of Lipschitz approximation is the same as cellular approximation in algebraic topology. First, approximate your map on the set of vertices. Then extend to simplices by induction on skeleta, using barycentric simplices as in Kleiner's paper. Some of this might even work if the domain is infinite dimensional, but you would need to control the Lipschitz constant for the barycentric maps.
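A concrete instance of the $S^1$ counterexample mentioned in the first answer, included purely as an illustration (this is one standard "snowflake" construction and not necessarily the one the answerer intended): let $d$ be the arc-length metric on $S^1$ and set $d'(x,y)=d(x,y)^{1/2}$, which is again a metric inducing the same topology. If $g:(S^1,d)\to(S^1,d')$ is locally Lipschitz, then on a small arc
$$d(g(x),g(y))\le L^2\,d(x,y)^2,$$
and chaining this estimate over $n$ equal subdivisions of an arc of length $\ell$ gives $d(g(a),g(b))\le L^2\ell^2/n\to 0$. Hence $g$ is locally constant, thus constant, and a constant map has degree $0$; in particular the identity $(S^1,d)\to(S^1,d')$ is continuous but not homotopic to any locally Lipschitz map.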
Math Stuff -- Kevin Walker I can be reached at I work at Microsoft Station Q. New TQFT notes: TQFTs [version 1h, 11 May 2006]. This is an early, incomplete draft. There are notational (and other) inconsistencies, and some parts have not been proof-read. Still, it's probably better than nothing. 1991 TQFT notes: "On Witten's 3-manifold Invariants". Other papers etc.: I don't necessarily recommend it, but you can also try to decipher my slides from talks.
Definition of Proportion
● A proportion is an equation written in the form a/b = c/d, stating that two ratios are equal.
● In other words, two sets of numbers are proportional if one set is a constant times the other.

Examples of Proportion

Solved Example on Proportion
A cardboard model of a Honda bike is part of an outdoor display. Its height is 4 ft. The actual Honda bike is 5 ft long and 2 ft high. Find the length of the model, if its dimensions are proportional to the real bike.
A. 11 ft B. 12 ft C. 9 ft D. 10 ft
Correct Answer: D
Step 1: Let n be the length of the Honda bike cardboard model.
Step 2: n/5 = 4/2 [Write a proportion using the corresponding dimensions of the model and the real bike.]
Step 3: n × 2 = 4 × 5 [Write the cross products.]
Step 4: n = 10 [Simplify the expression.]
Step 5: The length of the Honda bike cardboard model is 10 ft.

Related Terms for Proportion
● Ratio ● Equation ● Proportion
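A quick numerical check of the cross-product step in the solved example (this snippet is illustrative only and not part of the original page):

```python
# n / 5 = 4 / 2  ->  cross-multiply: n * 2 = 4 * 5
n = 4 * 5 / 2
print(n)   # 10.0, so the model is 10 ft long (choice D)
```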
British Journal of Mathematical and Statistical Psychology Ryu, E. (2014), Factorial invariance in multilevel confirmatory factor analysis. British Journal of Mathematical and Statistical Psychology, 67: 172–194. doi: 10.1111/bmsp.12014 An earlier version of this paper was presented at the 8th International Amsterdam Multilevel Conference, Amsterdam Netherlands, March 2011. The author would like to thank Stephen G. West for his comments on the earlier version. This paper presents a procedure to test factorial invariance in multilevel confirmatory factor analysis. When the group membership is at level 2, multilevel factorial invariance can be tested by a simple extension of the standard procedure. However level-1 group membership raises problems which cannot be appropriately handled by the standard procedure, because the dependency between members of different level-1 groups is not appropriately taken into account. The procedure presented in this article provides a solution to this problem. This paper also shows Muthén's maximum likelihood (MUML) estimation for testing multilevel factorial invariance across level-1 groups as a viable alternative to maximum likelihood estimation. Testing multilevel factorial invariance across level-2 groups and testing multilevel factorial invariance across level-1 groups are illustrated using empirical examples. SAS macro and Mplus syntax are provided. With the development of multilevel structural equation modelling (Goldstein & McDonald, 1988; Lee, 1990; Longford & Muthén, 1992; Muthén, 1990, 1994; Muthén & Satorra, 1995), multilevel confirmatory factor analysis is increasingly used in behavioural and social sciences (e.g., Cheung, Leung, & Au, 2006; Reise, Ventura, Nuechterlein, & Kim, 2005; Zimprich, Perren, & Hornung, 2005). Multilevel confirmatory factor analysis provides an approach to estimating and evaluating measurement models with multilevel data. Like other measurement models, issues of measurement invariance potentially arise in multilevel measurement models. Measurement invariance concerns whether the relationship between an observed measure and underlying latent construct is the same across different groups (Mellenbergh, 1989; Meredith & Millsap, 1992; Millsap, 1997). When the same measure is collected from qualitatively distinct groups (e.g., males and females; public and private school students) or at different time points from the same individuals in repeated measures designs (e.g., first grade and third grade), it is critical to establish measurement invariance in order for comparisons of the constructs to be meaningful. The admissible comparison that may be made across groups depends critically on the level of measurement invariance that can be achieved (Widaman & Reise, 1997). Within the structural equation modelling tradition, measurement invariance is established by testing a hierarchical series of models that impose increasingly strict constraints on the hypothesized confirmatory factor analysis (CFA) model. Factorial invariance in CFA is typically examined at four different levels: configural, weak, strong, and strict invariance (described below). For single-level CFA models, a standard procedure for testing factorial invariance has become well established (e.g., Cheung & Rensvold, 1999; Meredith, 1993; Reise, Widaman, & Pugh, 1993). 
Numerous applications testing factorial invariance can be found in a variety of areas of psychological (e.g., Atienza, Balaguer, & García-Merita, 2003; Dauphinee, Schau, & Stevens, 1997; Thill et al., 2003), cross-cultural (e.g., Ang et al., 2009; Steenkamp & Baumgartner, 1998), organizational (e.g., Schaufeli & Bakker, 2004; Torkzadeh, Koufteros, & Doll, 2005), health (e.g., Ang, Shen, & Monahan, 2008; Gregorich, 2006; Malcarne, Fernandez, & Flores, 2005), and educational research (e.g., Edwards & Oakland, 2006; Green-Demers, Legault, Pelletier, & Pelletier, 2008). Factorial invariance in multilevel CFA requires examining invariance of parameters in the level-1 model and that in the level-2 model. Multilevel data have a hierarchical structure such that individual observations are nested within clusters. In multilevel modelling, level 1 indicates the lowest level in the nested structure, level 2 indicates the next level within which the level-1 observations are nested, and so on. Multilevel factorial invariance introduces additional complexities that are beyond a simple extension of the well-established procedures for testing factorial invariance in single-level CFA. First, the group membership may exist at level 1 or at level 2. To illustrate, consider a two-level model in which the data are collected from students nested within schools. The researcher may be interested in testing factorial invariance at the school level (level 2), for example, between public and private or between religious and non-religious schools. Alternatively, the researcher may be interested in testing factorial invariance at the student level (level 1), for example, between boys and girls or between first- and third-grade students. Second, when group membership is at the lower level (level 1, here students), the level-1 group membership intersects the clustered structure of multilevel data. In this case, a methodological challenge arises: how can factorial invariance be tested across level-1 groups without losing the capability of multilevel modelling to appropriately adjust for the potential dependency arising from the clustering in multilevel data? Despite the development of multilevel structural equation modelling, testing factorial invariance in multilevel CFA across multiple groups has not been fully established in the literature. Only a few studies have considered factorial invariance in multilevel CFA across level-2 groups (Davidov, Dülmer, Schlülter, Schmidt, & Meuleman, 2012; Kim, Kwok, & Yoon, 2012; Muthén, Khoo, & Gustafsson, 1997). Multilevel factorial invariance across level-1 groups has rarely been addressed, except in one recent study by Jak, Oort, and Dolan (2013a). Jak et al. proposed a five-step procedure which tests both invariance across level-1 groups and invariance across level-2 groups. For testing invariance across level-1 groups, their five-step procedure takes the approach of using the level-1 group membership as a covariate (referred to as a restricted factor analysis model in their paper), instead of taking a multiple group analysis approach. The goal of this paper is to present and illustrate a procedure for testing factorial invariance in multilevel CFA model. The presented procedure takes a multiple group analysis approach. Therefore it is within the same framework as the well-known standard procedure for testing factorial invariance in single-level CFA. 
Also, as will be shown later, testing invariance in factor loadings is straightforward in the procedure presented in this paper, whereas it is less straightforward in the restricted factor analysis model, particularly when the invariance is to be tested for a large number of factor loadings simultaneously (see Jak et al., 2013a, for more details about the restricted factor analysis model). The procedure for testing multilevel factorial invariance across level-2 groups is parallel to the procedure for factorial invariance in single-level CFA. Testing multilevel factorial invariance across level-1 groups, however, raises methodological challenges. This paper identifies two challenges and presents solutions to them. For both cases, testing multilevel factorial invariance is illustrated using an empirical data set. Given limitations of space, the presentation is limited to the contexts that are most widely used in the literature: two-level CFA models and factorial invariance between two groups. Let y be a vector of observed measures and η be a vector of underlying latent factors. The measurement model in CFA specifies a set of linear relations between observed measures and underlying latent where τ is a vector of measurement intercepts, Λ is factor loading matrix, and ɛ is a vector of residuals. It is assumed that where the subscript k indicates group membership. Assuming that the latent factors are uncorrelated with the residuals, the mean and covariance structure of yk are reproduced respectively by Factorial invariance is typically tested at the following four levels (Widaman & Reise, 1997): Figure 1 illustrates linear relations between an observed variable and a latent factor at four levels of invariance between two groups. Solid lines represent the linear relation between an observed variable and a latent factor in group 1; dashed lines represent the linear relation in group 2. With configural invariance, no parameter associated with latent factors is comparable across groups. When weak invariance holds, the relations between observed and latent variables in two groups are depicted by parallel but non-overlapping lines, as in Figure 1(b). In this case, the covariance matrix of latent factors Ψk is comparable across groups (directly if the same identification method is used; indirectly if different identification methods are used across groups), because the variance and covariance are based on the deviation from the mean. When strong invariance holds, the two lines exactly overlap each other, as in Figure 1(c). Under strong invariance, (3) and (4) are simplified to In this case, the means (αk) and covariance matrix (Ψk) of latent factors are comparable across groups. In (5), any difference in the means of observed scores y between groups is due to the difference in latent factor means α and therefore the means of observed variables are comparable across groups. But the variances and covariances of observed scores y are due not only to the difference in Ψ but also to the difference in Θk. In Figure 1(c), the two lines exactly overlap each other but group 1 (indicated by crosses) shows larger residual variance than group 2 (indicated by circles). Finally, when strict invariance holds (Figure 1(d)), the two lines exactly overlap each other and also the residual variance is the same in both groups. In this case, (6) is simplified to With strict invariance, the means (αk) and covariance matrix (Ψk) of latent factors are comparable across groups. 
As shown in (5) and (7), the group difference in both means and variances of observed scores is due to the group difference in means and variances of latent factors. In other words, all group differences on the observed scores are attributable to group difference on the latent factors. The means and variances of observed variables are comparable too. Throughout this paper, I use individual as level-1 unit and cluster as level-2 unit in multilevel data. Group indicates the group membership across which the invariance is tested. Suppose that the data are from N individuals clustered within J clusters. The clusters are a simple random sample from a population of clusters, and the individuals are a simple random sample with each cluster. The number of individuals in the jth cluster is nj, wherenj = n for all j. Let yij denote a data vector for individual i in cluster j. In multilevel CFA, the data vector yij is decomposed into two latent random components reflecting two sources of random variation in multilevel data: between-cluster random components (yBj) and within-cluster, between-individual random components (yWij): where yBj and yWij are latent (i.e., not directly observed) components. All level-1 variables are subject to implicit, model-based decomposition in multilevel CFA. The two-level CFA model specifies linear relations between yBj and underlying latent factors ηBj at level 2, and linear relations between yWij and underlying latent factors ηWij at level 1: In (9), the level-1 parameters (ΛW, ΨW, ΘW) do not have subscript j because the within-cluster covariance structure is assumed to be homogeneous across clusters (assumption (b)).1 With these assumptions, the mean and covariance structure of yij are reproduced by (10) and (11), respectively: In two-level CFA, the mean structure is captured at level 2 and the level-1 model has no mean structure, as shown in (9) and (10). I now consider establishing factorial invariance in multilevel CFA, first when the group membership is at level 2 and second when the group membership is at level 1. The procedure for testing multilevel factorial invariance across level-2 groups is parallel to the standard procedure for testing single-level factorial invariance. Because the group membership is at level 2, the data can be separated by group membership without altering the clustering in multilevel data. With level-2 group membership (k = 1, …, K), the two-level CFA model can be written as Assuming multivariate normality, the maximum likelihood (ML) solution can be obtained by the following fitting function: All four levels of invariance are testable in the level-2 model: configural, weak (not because the level-1 model has no mean structure. A model testing invariance in Suppose the standard procedure is adopted to test multilevel factorial invariance across level-1 groups (e.g., boys vs. girls in typical mixed gender schools that have both male and female students). The standard procedure first separates the data into level-1 groups. Then multilevel CFA models are specified in each group and equality of parameters is tested across the groups. In this case, the individuals are separated into groups within each cluster because the level-1 group membership intersects the level-2 cluster. Therefore the decomposition of level-1 variables occurs separately in each level-1 group: where the subscript |k indicates that the decomposition occurs separately within each level-1 group. 
The between-cluster component yBj|k in (14) is not constant for all individuals within the same cluster (e.g., the between components for boys are different from the between components for girls within the same school). The decomposition shown in (14) fails to capture the dependency between members of different level-1 groups who belong to the same cluster. In order to maintain the capability of multilevel modelling to take dependency due to the clustered structure into account, the decomposition should follow (8), not (14). When the group membership is at level 1, it is critical that the decomposition of the level-1 variables is not conditional on the level-1 group membership. Therefore the decomposition must occur before the data are separated into level-1 groups: where the subscript (k) within parentheses indicates that the data are separated after the level-1 variables are decomposed. The between-cluster component is constant for all individuals regardless of level-1 group membership as long as they belong to the same cluster (i.e., k). The level-2 model is equivalent regardless of level-1 group membership. Another methodological challenge in testing multilevel factorial invariance across level-1 groups is that an appropriate effective sample size should be used for the level-2 model. In (15) the effective sample size is J for all J. In the simplest case, suppose that the school sizes are equal across all J schools and there are equal numbers of boys and girls in each school. The effective sample size would be 0.5J for J for J. For a general case, the effective level-2 sample size can be obtained using weights that are based on the relative sizes of level-1 groups in each cluster. The weight in level-1 group k is obtained by where k in cluster j and nj is the total number of individuals in cluster j. The effective level-2 sample size for each level-1 group is obtained by J. Assuming multivariate normality, the ML fitting function for two-level CFA for multiple level-1 groups can be written as2 where θ is a vector of parameters, j in group k, and k). To illustrate testing multilevel factorial invariance across level-1 groups, I use Muthén's maximum likelihood (MUML: Muthén, 1989, 1990) estimation via a manual set-up for two reasons. First, MUML serves as a better vehicle for a didactic presentation, showing the decomposition of yij more explicitly. Second, there is currently no software package available to obtain ML solutions using the ML fitting function shown in (18). When the cluster sizes are equal for all clusters, the MUML estimates are equivalent to ML estimates. When the cluster sizes are not equal (i.e., unbalanced cluster sizes), MUML provides an approximated solution. The performance of MUML approximation theoretically depends on the level-2 sample size (i.e., number of clusters) and the variability of cluster sizes (Yuan & Hayashi, 2005). It has been empirically shown that MUML provides a good approximation when the number of clusters is 100 or larger (Hox, 1993; Hox & Maas, 2001). MUML via multiple group analysis requires means of observed variables (c, described below), between-cluster covariance matrix (SB), and pooled within-cluster (SPW) covariance matrix as input data: SB is an unbiased estimator of the weighted composite of the population within and between covariance matrices (ΣW + cΣB), where c is a scaling parameter, defined byc = n for balanced cases. SPW is an unbiased estimator of population within covariance matrix (ΣW). 
The ‘trick’ in MUML via manual set-up is to use a multiple group analysis of single-level CFA models with two ‘groups’ (Mgroup hereafter).3 In the first Mgroup, [SB]. In the second Mgroup, SPW. The mean structure in the second Mgroup is zero. The effective sample sizes are J and N - J for Mgroup 1 and Mgroup 2, respectively. The within-cluster covariance structure Mgroups. The input means are weighted by Mgroups as the number of groups. The following input statistics are required in order to use MUML estimation for testing multilevel factorial invariance across level-1 groups: where k, k, j in group k, and k. A SAS macro to compute the weights for level-2 effective sample size, scaling parameters, and the input statistics can be found in the supporting information, available with the online version of this paper. For MUML estimation via a manual set-up, 2k Mgroups are required. For example, in order to test multilevel factorial invariance between two level-1 groups, a multiple group analysis with four Mgroups needs to be specified. For group k = 1, Mgroup 1; Mgroup 2. For group k = 2, Mgroup 3; Mgroup 4. The effective sample sizes for Mgroups 1–4 are Mgroups within each level-1 group membership (e.g., Mgroup 1 and Mgroup 2 for group k). The second is for Mgroups 1 and 3). The third is a set of constraints necessary for model identification and test of factorial invariance in the hierarchy. The necessary constraints are shown in the example below. In this section, I illustrate testing multilevel factorial invariance using PISA (Programme for International Student Assessment) 2003 data (Lee, 2009; OECD, 2004, 2005). A two-level CFA model for five mathematics self-efficacy items (see Table 1)4 is shown in Figure 2. I first illustrate testing multilevel factorial invariance between two countries, New Zealand and Turkey (level-2 group membership), and then testing multilevel factorial invariance between male and female students in Turkey (level-1 group membership). Table 1. PISA 2003 mathematics self-efficacy items ICC in each country Items ICC NZL TUR 1. ^ The self-efficacy items asked “How confident do you feel about having to do the following mathematics tasks?” on a 1–4 Likert scale of 1 = very confident, 2 = confident, 3 = not very confident, and 4 = not at all confident. NZL = New Zealand; TUR = Turkey. ICC = intraclass correlation. Q31c Calculating how many square metres of tiles you need to cover a floor .080 .035 .096 Q31b Calculating how much cheaper a TV would be after a 30% discount .088 .043 .109 Q31a Using a train timetable to work out how long it would take to get from one place to another .164 .056 .115 Q31d Understanding graphs presented in newspapers .132 .039 .092 Q31 h Calculating the petrol consumption rate of a car .063 .037 .083 The self-efficacy items asked “How confident do you feel about having to do the following mathematics tasks?” on a 1–4 Likert scale of 1 = very confident, 2 = confident, 3 = not very confident, and 4 = not at all confident. NZL = New Zealand; TUR = Turkey. ICC = intraclass correlation. In New Zealand (NZL), 4,189 students were nested within 173 schools. School size ranged from 1 to 54 (mean = 24.21, standard deviation = 10.031). In Turkey (TUR), 4,030 students were nested within 159 schools. School size ranged from 2 to 35 (mean = 25.35, standard deviation = 7.865). ML estimation in Mplus 6.12 (Muthén & Muthén, 1998) was used. 
The goodness of fit was examined for the overall model (i.e., the fit of the school-level and student-level models was examined simultaneously) and also for each level separately using the level-specific approach described in Ryu and West (2009). In multilevel CFA, the model fit statistics for the overall model may be dominated by the level-1 model because the sample size is typically larger at level 1, and therefore the lack of fit in the level-2 model may not be detected by the fit statistics for the overall model (see Hox, 2010; Ryu & West, 2009). The level-specific approach by Ryu and West utilizes partially saturated models to evaluate the model fit at each level separately (e.g., a partially saturated model in which the level-1 model is saturated and the level-2 model is specified as hypothesized is used to evaluate the fit of the level-2 model). In both countries, the model fitted well at both levels (see Table 2). In NZL, the proportion of variance accounted for by the common factor (i.e., communality) ranged from .348 to .589 at student level, and from .531 to .934 at school level. In TUR, the communality ranged from .261 to .465 at student level, and from .785 to .989 at school level. Table 2. Model fit statistics of the two-level CFA model for mathematics self-efficacy New Zealand Turkey 1. ^ CFI = comparative fit index; RMSEA = root mean square error of approximation; SRMR = standardized root mean square residual. The fit statistics for school-level and student-level models were obtained by the level-specific approach using partially saturated models (see Ryu & West, 2009). Overall model χ^2 = 69.467, df = 10, p < .001 χ^2 = 59.223, df = 10, p < .001 CFI = .989 CFI = .990 RMSEA = .038 RMSEA = .035 SRMR[B] = .034, SRMR[W] = .018 SRMR[B] = .022, SRMR[W] = .017 School-level model df = 5, p = .453 df = 5, p = .055 CFI[PS_B] = 1.000 CFI[PS_B] = .986 RMSEA[PS_B] = .000 RMSEA[PS_B] = .086 Student-level model df = 5, p < .001 df = 5, p < .001 CFI[PS_W] = .989 CFI[PS_W] = .990 RMSEA[PS_W] = .037 RMSEA[PS_W] = .032 CFI = comparative fit index; RMSEA = root mean square error of approximation; SRMR = standardized root mean square residual. The fit statistics for school-level and student-level models were obtained by the level-specific approach using partially saturated models (see Ryu & West, 2009). For the configural invariance model, the following constraints were imposed for model identification.5 In the school-level model, Q31c was chosen as the reference variable.6 For Q31c, the factor loading was fixed to 1 ( Factorial invariance at school level was tested by a series of models with increasingly added constraints in the school-level model. The model fit statistics and likelihood ratio test statistics (LR or Δχ2: Bentler & Bonett, 1980) are summarized in Table 3a.7 Once again, the model fit was assessed in two ways: the overall model (i.e., school-level and student-level model simultaneously) and the school-level model (denoted by subscript PS_B in Table 3a) using the level-specific approach by Ryu and West (2009). In Table 3a, the strong invariance did not hold at the school level. The model was modified by removing the equality constraints on measurement intercepts for Q31a, Q31d, and Q31 h (StrongPartial). The strict invariance model with partial strong invariance (non-invariant intercepts for Q31a, Q31d, and Q31 h) yielded acceptable model fit statistics and this model was selected as an appropriate school-level model. Table 3a. 
Model fit statistics and likelihood ratio (LR) tests to test factorial invariance at school level between New Zealand and Turkey Model χ ^2 LR test (Δχ^2) CFI RMSEA SRMR[W] SRMR[B] CFI[PS_B] RMSEA[PS_B] 1. ^ Degrees of freedom are shown in parentheses. CFI[PS_B] and RMSEA[PS_B] are fit indices for the school-level model obtained by the level-specific approach by Ryu and West (2009). For Strong, StrongPartial, and Strict models, CFI and CFI[PS_B] were obtained using more restricted null models that are nested within the hypothesized model (Widaman & Thompson, 2003). No equality constraint on the school-level intercepts of Q31a, Q31d, and Q31 h. For StrongPartial, the LR test compares the StrongPartial to the weak invariance model. Configural 128.706 (20), p < .001 .990 .036 .018 .029 .989 .041 Weak 133.916 (24), p < .001 5.210 (4), p = .266 .990 .033 .018 .037 .986 .039 Strong 354.927 (28), p < .001 221.011 (4), p < .001 .973 .053 .019 .248 .772 .195 StrongPartial^a 136.581 (25), p < .001 2.665 (1), p = .103 .990 .033 .018 .037 .986 .041 Strict 151.965 (30), p < .001 15.384 (5) p = .009 .989 .031 .018 .064 .968 .054 Degrees of freedom are shown in parentheses. CFIPS_B and RMSEAPS_B are fit indices for the school-level model obtained by the level-specific approach by Ryu and West (2009). For Strong, StrongPartial, and Strict models, CFI and CFIPS_B were obtained using more restricted null models that are nested within the hypothesized model (Widaman & Thompson, 2003). No equality constraint on the school-level intercepts of Q31a, Q31d, and Q31 h. For StrongPartial, the LR test compares the StrongPartial to the weak invariance model. Factorial invariance at student level was tested by increasingly imposing constraints in the student-level model. The model fit statistics and LR test statistics are summarized in Table 3b. For student-level factorial invariance, the model fit was assessed for the overall model and for the student-level model (denoted by subscript PS_W in Table 3b). Note that the chi-square test and the LR test were significant at p < .05 for all models and all model comparisons at the student level, because the sample size was large (8,219 students). As mentioned earlier, strong invariance is not considered at the student level as there is no mean structure in the level-1 model. The fit indices were not satisfactory for strict invariance model. The model was modified by removing the equality constraint on the student-level residual variance of Q31d (StrictPartial). The partial strict invariance model (non-invariant residual variance for Q31d) was selected as an appropriate student-level Table 3b. Model fit statistics and likelihood ratio (LR) tests to test factorial invariance at student level between New Zealand and Turkey Model χ ^2 LR test (Δχ^2) CFI RMSEA SRMR[W] SRMR[B] CFI[PS_W] RMSEA[PS_W] 1. ^ Degrees of freedom are shown in parentheses. CFI[PS_W] and RMSEA[PS_W] are fit indices for the student-level model obtained by the level-specific approach by Ryu and West (2009). For Strict and StrictPartial models, CFI and CFI[PS_W] were obtained using more restricted null models that are nested within the hypothesized model (Widaman & Thompson, 2003). No equality constraint on the student-level residual variance of intercept of Q31d. For StrictPartial, the LR test compares StrictPartial to the weak invariance model. 
Configural 128.706 (20), p < .001 .990 .036 .018 .029 .990 .035 Weak 191.966 (24), p < .001 63.260 (4), p < .001 .984 .041 .023 .031 .983 .037 Strict 677.705 (29), p < .001 485.739 (5), p < .001 .941 .074 .041 .050 .935 .054 StrictPartial^a 328.468 (28), p < .001 136.502 (4), p < .001 .972 .051 .030 .039 .970 .044 Degrees of freedom are shown in parentheses. CFIPS_W and RMSEAPS_W are fit indices for the student-level model obtained by the level-specific approach by Ryu and West (2009). For Strict and StrictPartial models, CFI and CFIPS_W were obtained using more restricted null models that are nested within the hypothesized model (Widaman & Thompson, 2003). No equality constraint on the student-level residual variance of intercept of Q31d. For StrictPartial, the LR test compares StrictPartial to the weak invariance model. Finally, the selected school-level and student-level models were combined. For the final model, χ2 = 354.675 (df = 39, p < .001), CFI = .974, RMSEA = .045, SRMRW = .030, SRMRB = .067. The estimates are shown in Figure 3. At school level, the non-invariant intercepts are interpreted as follows: the school-level components of Q31a and Q31d would be higher for schools in Turkey than those for schools with the same level of mathematics self-efficacy in New Zealand. In other words, given the same level of math self-efficacy, the observed scores of Q31a and Q31d would be higher for students who belong to schools in Turkey than for students who belong to schools in New Zealand. For Q31 h, the direction is the opposite: given the same level of mathematics self-efficacy, the observed scores of Q31 h would be lower for students who belong to schools in Turkey than for students who belong to schools in New Zealand. Therefore the means of Q31a, Q31d, and Q31 h are not comparable across the two countries. If strong invariance had held for all five items at school level, the difference in the common factor (SELFEFFB) mean (0.193, as shown in Figure 3) would have been interpreted as the difference in school-level mathematics self-efficacy between NZL and TUR. At student level, strict invariance did not hold for Q31d. The non-invariant residual variance is interpreted as follows: the difference in the student-level component of Q31d is larger for students in Turkey compared to the difference in the student-level component of Q31d associated with the same difference of mathematics self-efficacy for students in New Zealand. In other words, the same difference in the student-level component of Q31d does not reflect the same difference in the common factor SELFEFFW between the two countries. Therefore the variance and covariance associated with Q31d cannot be compared between the two countries (i.e., the estimated variances of SELFEFFW, 0.311 for NZL and 0.375 for TUR in Figure 3, are not comparable). The data used for this example were from 4,030 students – 1,748 females and 2,282 males – nested within 159 schools in Turkey. School size ranged from 2 to 35 (mean = 25.35, standard deviation = 7.865). The proportion of female students ranged from 0 to 1, the mean proportion was .4215, and the standard deviation was .2263. Before testing multilevel factorial invariance using MUML estimation, the estimates and standard errors obtained by ML were compared to those obtained by MUML. As shown in Table 4, the estimates and standard errors obtained by MUML were comparable to those obtained by ML, even though the school size was not balanced. Table 4. 
ML and MUML estimates and standard errors for the two-level factor model ML MUML ML MUML 1. ^ Standard errors are shown in parentheses. Student level School level Fixed to 1 Fixed to 1 Fixed to 1 Fixed to 1 0.970 (0.030) 0.970 (0.030) 1.052 (0.063) 1.051 (0.062) 0.955 (0.030) 0.958 (0.030) 1.065 (0.064) 1.052 (0.062) 0.760 (0.029) 0.760 (0.029) 0.950 (0.071) 0.950 (0.069) 0.944 (0.031) 0.946 (0.032) 0.874 (0.072) 0.875 (0.071) 2.162 (0.026) 2.137 (0.026) 2.039 (0.026) 2.013 (0.026) 2.284 (0.026) 2.257 (0.025) 2.158 (0.025) 2.132 (0.025) 2.401 (0.025) 2.382 (0.025) 0.394 (0.012) 0.395 (0.012) 0.004 (0.003) 0.004 (0.003) 0.354 (0.011) 0.354 (0.011) 0.001 (0.002) 0.001 (0.002) 0.334 (0.010) 0.333 (0.010) 0.001 (0.002) 0.002 (0.003) 0.522 (0.013) 0.521 (0.013) 0.007 (0.004) 0.008 (0.004) 0.486 (0.013) 0.486 (0.013) 0.015 (0.004) 0.015 (0.004) ψ [SELFEFFW] 0.319 (0.016) 0.318 (0.016) ψ [SELFEFFB] 0.071 (0.012) 0.075 (0.012) α [SELFEFFB] Fixed to 0 Fixed to 0 The weights for effective level-2 sample size were .4215 and .5785 for females and males, respectively. The effective level-2 sample sizes were Mgroups are required to test factorial invariance between females (Mgroups 1 and 2) and males (Mgroups 3 and 4; see Figure 4). The effective sample sizes for Mgroups 1–4 were 67.0185, 1680.9815, 91.9815, and 2190.0185, respectively. Note that the sum of school-level effective sample sizes is equal to the number of schools (67.0185 + 91.9815 = 159). For model specification, three sets of constraints were imposed. First, the student-level model was set equal between two Mgroups within each level-1 group membership (e.g., between Mgroups 1 and 2 for females, between Mgroups 3 and 4 for males). Second, the school-level model was set equal between females and males (e.g., [Mgroups 1 and 3). The third is a set of constraints necessary for model identification and factorial invariance testing in the hierarchy. The constraints for configural invariance model are summarized in Table 5. Mplus syntax for configural invariance model can be found in the online supporting information. Table 5. Constraints for testing multilevel factorial invariance between level-1 groups (females and males) Females Males ^Mgroup 1 ^Mgroup 2 ^Mgroup 3 ^Mgroup 4 1. ^ group indicates groups in multiple group analysis for MUML estimation. An equivalent school-level model can be specified with School level School-level model constrained to be equal between ^Mgroup 1 and ^Mgroup 3 Student level Student-level model for females constrained to be equal Student-level model for males constrained to be equal group indicates groups in multiple group analysis for MUML estimation. An equivalent school-level model can be specified with Once again the school-level model is equivalent between females and males, and the factorial invariance between females and males at the school level is not of concern. Configural, weak, strong, and strict invariance were tested at student level. The model fit statistics and LR test statistics are summarized in Table 6. The configural invariance model fitted well. The weak invariance model also fitted well: the LR test was not significant even with the large sample size (Δχ2 = 4.580 for df = 4, p = .333). The strong invariance model yielded a significant LR test, but the fit statistics indicated good model fit (CFI = .974, RMSEA = .057, and SRMR = .056). 
Finally, for the strict invariance model, the LR test was significant (Δχ2 = 20.193 for df = 5, p = .001), but the fit statistics indicated acceptable fit (CFI = .971, RMSEA = .057, and SRMR = .058). Considering the sensitivity of the LR test statistic to large sample size and the other fit indices indicating good model fit, the strict invariance model was selected as an appropriate student-level model. Table 6. Model fit statistics and likelihood ratio (LR) tests to test factorial invariance at student level between females and males χ ^2 LR test (Δχ^2) CFI RMSEA SRMR 1. ^ Degrees of freedom are shown in parentheses. For the strong and strict models, CFI was obtained using more restricted null models that are nested within the hypothesized model (Widaman & Thompson, 2003). Configural 132.449 (35), p < .001 .981 .053 .051 Weak 137.029 (39), p < .001 4.580 (4), p = .333 .981 .050 .052 Strong 185.306 (43), p < .001 48.277 (4), p < .001 .974 .057 .056 Strict 205.499 (48), p < .001 20.193 (5), p = .001 .971 .057 .058 Degrees of freedom are shown in parentheses. For the strong and strict models, CFI was obtained using more restricted null models that are nested within the hypothesized model (Widaman & Thompson, The estimated strict invariance model is shown in Figure 5. Both the means and (co)variances of the common factor (SELFEFFW) are comparable between females and males. The estimated mean of SELFEFFW was lower for males than females by –0.234 (p < .001). The estimated variance of SELFEFFW was 0.292 and 0.331 for females and males, respectively. The equality of the variance between the two groups can be tested by adding an equality constraint on the variance of SELFEFFW. The fit statistics for this constrained model were χ2 = 209.822 (df = 49, p < .001), CFI = .970, RMSEA = .057, and SRMR = .062. The LR test statistic (compared to the strict invariance model) was Δχ2 = 4.463 for df = 1, p = .035. Considering the sensitivity of the LR test to large sample size, it was concluded that the variance of SELFEFFW was not different between females and males. This paper has presented and illustrated a procedure to test factorial invariance in multilevel CFA. To emphasize the key idea again, a simple extension of the standard procedure is not appropriate for testing multilevel factorial invariance across level-1 groups, because the dependency between members of different level-1 groups is ignored. In the procedure presented, the decomposition of level-1 variables is not altered by level-1 group membership, and therefore the dependency is appropriately taken into account among all individuals within the same cluster regardless of their level-1 group membership. In testing factorial invariance in multilevel CFA, it is critical to distinguish at which level the group membership is. When the group membership is at level 2, multilevel factorial invariance can be tested using a simple extension of the standard procedure. Weak, strong, and strict invariance can be tested at level 2, and weak and strict invariance can be tested at level 1. Strong invariance is not of concern at level 1 because the mean structure is zero in the level-1 model. When the group membership is at level 1, multilevel factorial invariance can be tested using the procedure presented in this paper. The level-2 model is equivalent regardless of level-1 group membership. Weak, strong, and strict invariance can be tested in the level-1 model. 
Factorial invariance is a question of whether the relationship between an observed measure and underlying latent factor is the same across different groups. In single-level confirmatory factor model with no cross-loading, one latent factor underlies each observed variable. The parameters (e.g., factor loadings, measurement intercepts) describe the relationships between observed measures and latent factors. In a multilevel confirmatory factor model with no cross-loading, typically two latent factors underlie each level-1 observed variable: one for the within-cluster random component and the other for the between-cluster random component of the level-1 observed variable. For example, in the two-level factor model shown in Figure 2, the observed variable Q31c is hypothesized to load onto the within-cluster factor SELFEFFW and onto the between-cluster factor SELFEFFB. For typical level-1 observed variables in multilevel data which have non-zero within-cluster and between-cluster variances, the relationship between the observed variables and underlying common factors consists of two parts. At level 1, the parameters describe the relationships between the within-cluster latent random components of the observed variables (e.g., Q31cW) and common factor (e.g., SELFEFFW). At level 2, the parameters describe the relationships between the between-cluster latent random components of the observed variables (e.g., Q31cB) and common factor (e.g., SELFEFFB). When the group membership is at level 2, there are four possible outcomes: (a) an observed measure is measurement invariant (at the desired level of hierarchy) with both level-1 and level-2 common factors (e.g., Q31c is measurement invariant with both SELFEFFW and SELFEFFB); (b) a measure is measurement invariant with the level-1 common factor but not with the level-2 factor (e.g., Q31c is measurement invariant with SELFEFFW but not with SELFEFFB); (c) a measure is measurement invariant with the level-2 common factor but not with the level-1 factor (e.g., Q31c is measurement invariant with SELFEFFB but not with SELFEFFW); and (d) a measure is not measurement invariant with any of the underlying common factors. When the group membership is at level 1, the level-2 model is equivalent regardless of level-1 group membership. Therefore in this case there are two possible outcomes: a measure is measurement invariant with the level-1 common factor or not (e.g., Q31c is measurement invariant with SELFEFFW or not). Since multilevel CFA assumes that the within-cluster and between-cluster models are uncorrelated, the required level of invariance depends on which latent factor selection or inference is made upon. Measurement invariance must be established at the corresponding level at which an inference is made. For example, in Figure 3, an inference made upon SELFEFFB at school level would be invalidated by the non-invariance in measurement intercepts of Q31aB, Q31 dB, and Q31hB. But the non-invariance in measurement intercepts for Q31aB, Q31 dB, and Q31 hB would not be critical for the inference regarding SELFEFFW at student level. 
In conclusion, the procedure for testing multilevel factorial invariance across level-1 groups can be summarized as follows: the decomposition of level-1 variables should follow the clustered structure of multilevel data before the data are separated into level-1 groups; the level-2 model is equivalent regardless of level-1 group membership; the effective sample size should be used for the level-2 model; the level-1 model needs mean structure to represent the relative difference in means between level-1 groups; and weak, strong, and strict invariance can be tested in the level-1 model. The procedure presented can easily be implemented using standard SEM software packages. I hope this paper will motivate researchers and provide them with a viable tool to consider the issue of factorial invariance in multilevel CFA models. Jak, Oort, and Dolan (2013b) present a test for the homogeneity of within-cluster measurement model across clusters in multilevel data (referred to as ‘cluster bias’ in their paper). Both ML and MUML estimation are based on the assumption of multivariate normality. Alternative test statistics and model evaluation procedures that do not rely on normal theory are available for multilevel structural equation models (e.g., Yuan & Bentler, 2002, 2003). To avoid confusion, I use ‘Mgroup’ to indicate ‘groups’ in multiple group analysis for MUML estimation, ‘group’ to indicate the group membership between which the invariance is tested, and ‘cluster’ to indicate the level-2 unit in multilevel data. Although the items were measured using 4-point scales, the distribution of the measures was not severely deviated from the normal. The skewness ranged from 0.003 to 0.609, and the excess kurtosis ranged from –0.831 to 0.316. The illustrated examples used MUML estimation to obtain an approximated ML solution assuming multivariate normality. When the normality assumption is violated, an alternative estimation method (e.g., maximum likelihood estimation with robust standard error [MLR]) is recommended. There are alternative ways to identify the model. The interpretation of the results is easiest if the reference variable method is used. Alternative identifications do not affect the likelihood ratio The reference variable method relies on the assumption that the factor loading for the reference variable is truly invariant across groups. In this example, the weak invariance model in which all factor loadings were constrained to be equal across groups was acceptable at both school and student levels. If weak invariance does not hold for any of the items, the test of invariance may yield different results depending on the choice of reference variable. In this case it is crucial that the loading for the reference variable is truly invariant across groups (see Cheung & Lau, 2012; Johnson & Meade, 2007; Johnson, Meade, DeVernet, 2009; Rensvold & Cheung, 1998). CFI was obtained using a more restricted null model that is nested within the hypothesized model (Widaman & Thompson, 2003). 
Dataset 1: SAS macro to compute the input statistics for MUML estimation (bmsp12014-sup-0001.pdf, application/PDF, 35K)
Dataset 2: Mplus syntax for testing factorial invariance in multilevel confirmatory factor analysis
Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.
{"url":"http://onlinelibrary.wiley.com/doi/10.1111/bmsp.12014/full","timestamp":"2014-04-21T10:19:02Z","content_type":null,"content_length":"222808","record_id":"<urn:uuid:6b443752-2fb1-4a98-b343-d80aba6f7c58>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Finitely-generated algebra over Z

Let A be an artin ring which is also a finitely generated algebra over Z. Show that $|A|<\infty$. If A would have been a field then I know how to prove it. I know that A is a product of local rings, so I could restrict the question to local artin rings that are finitely generated algebras over Z. But how does this help? Thanks, Yatir

Your algebra is a finite-length module over a polynomial ring of the form $\mathbb Z[x_1,\dots,x_n]$, and the simple modules of such a ring are of finite cardinal. – Mariano Suárez-Alvarez♦ Jul 5 '10 at 9:15

@yatir: It seems the hardest case is when $A$ is a field... – fherzig Jul 5 '10 at 9:39

@Mariano: "the simple modules of $\mathbb Z[x_1,\dots,x_n]$ are of finite cardinal" is (pretty much) equivalent to the question asked. (The case when $A$ is a field.) I'm not saying this isn't a standard fact... – fherzig Jul 5 '10 at 9:50

2 Answers

Take $A$ local (you already reduced to it), with $m$ the max. ideal. I claim that $A/m$ is a finite field. Suppose first that it has char. 0. Then we get injections $\mathbb Z \to \mathbb Q \to A/m$. By Zariski's lemma, $\mathbb Q \to A/m$ is finite, since it is of finite type. Now (unfortunately I don't have it on me), Atiyah-Macdonald have a beautiful lemma which says that if $A \subset B \subset C$ are (comm.) rings, $A$ noetherian, $A \subset C$ of finite type, $B \subset C$ finite, then $A \subset B$ is of finite type. In our case, $\mathbb Z \to \mathbb Q$ is of finite type, contradiction. Thus $\mathbb Z/p \to A/m$ is of finite type, hence finite for some prime number $p$. So $A/m$ is a finite field. Also $m^n = 0$ for some $n$ since $A$ is artin local. Finally, $m^i/m^{i+1}$ is a f.d. $A/m$-vector space (since $A$ is noetherian), so it is finite as well. And $|A| = \prod_i |m^i/m^{i+1}| < \infty$.

I remember now that Emerton gave a different proof that $A/m$ is a finite field in his notes on Jacobson rings posted here. – fherzig Jul 5 '10 at 9:30

Just one question, how did you derive the last equation? – yatir Jul 5 '10 at 9:47

Use the short exact sequences $0 \to m^i/m^{i+1} \to A/m^{i+1} \to A/m^i \to 0$ and induct. – fherzig Jul 5 '10 at 9:54

Shortcut: since {(0)} is not constructible in Spec Z, Chevalley's theorem also rules out char. 0. – user2035 Jul 6 '10 at 7:56

That's nice, thanks. – fherzig Jul 6 '10 at 9:14

As you already mentioned, it is enough to show that every local artinian ring $A$, which is of finite type over $Z$, is finite. Let $m$ be the maximal ideal of $A$. By a standard filtration argument, we may assume $m^2=0$. Now $A/m$ is a finite field (since it is of finite type over some $Z/p$, apply Noether Normalization). Also, $m$ is an artinian, thus finite-dimensional, $A/m$-vector space, and thus finite. Hence, also $A$ is finite.

How did you see that $A/m$ has positive characteristic? – fherzig Jul 5 '10 at 9:32

There are several ways to prove it. You mentioned one. Of course, our proofs are identical. – Martin Brandenburg Jul 5 '10 at 10:23
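To spell out the counting step asked about in the comments (this gloss is an addition, not part of the original thread): each short exact sequence
$$0 \to m^i/m^{i+1} \to A/m^{i+1} \to A/m^i \to 0$$
gives $|A/m^{i+1}| = |m^i/m^{i+1}|\cdot|A/m^i|$, so by induction, using $m^n = 0$,
$$|A| = |A/m^n| = \prod_{i=0}^{n-1} |m^i/m^{i+1}|,$$
and each factor is finite because $m^i/m^{i+1}$ is a finite-dimensional vector space over the finite field $A/m$.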
{"url":"http://mathoverflow.net/questions/30599/finitely-generated-algebra-over-z/","timestamp":"2014-04-18T11:18:16Z","content_type":null,"content_length":"66285","record_id":"<urn:uuid:ae60721e-92b5-4e2f-aab9-5b0a65d4b281>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Items where Author is "Porter, Professor David" Number of items: 13. Bennetts, L. G., Biggs, N. R.T. and Porter, D. (2009) Wave scattering by an axisymmetric ice floe of varying thickness. IMA Journal of Applied Mathematics, 74 (2). pp. 273-295. ISSN 0272-4960 doi: Bennetts, L.G., Biggs, N. and Porter, D. (2009) The interaction of flexural-gravity waves with periodic geometries. Wave Motion, 46 (1). pp. 57-73. ISSN 0165-2125 doi: 10.1016/j.wavemoti.2008.08.002 Porter, R. and Porter, D. (2006) Approximations to the scattering of water waves by steep topography. Journal Of Fluid Mechanics, 562. pp. 279-302. ISSN 0022-1120 Chamberlain, P. G. and Porter, D. (2006) Multi-mode approximations to wave scattering by an uneven bed. Journal Of Fluid Mechanics, 556. pp. 421-441. ISSN 0022-1120 Biggs, N. R. T. and Porter, D. (2005) Wave scattering by an array of periodic barriers. IMA Journal Of Applied Mathematics, 70 (6). pp. 908-936. Chamberlain, P. G. and Porter, D. (2005) Wave scattering in a two-layer fluid of varying depth. Journal Of Fluid Mechanics, 524. pp. 207-228. Porter, D. and Porter, R. (2004) Approximations to wave scattering by an ice sheet of variable thickness over undulating bed topography. Journal Of Fluid Mechanics, 509. pp. 145-179. Wakefield, M. A., Baines, M. J. and Porter, D. (2004) Bounds on physical quantities of interest via degenerate saddle-shaped functionals. Computers & Fluids, 33 (5-6). pp. 881-888. Porter, D. and Biggs, N. R. T. (2004) Systems of integral equations with weighted difference kernels. Proceedings Of The Edinburgh Mathematical Society, 47. pp. 205-230. Wakefield, M.A., Baines, M.J. and Porter, D. (2003) Bounds on physical quantities. Computers & Fluids, 33. pp. 881-888. Porter, R. and Porter, D. (2003) Scattered and free waves over periodic beds. Journal Of Fluid Mechanics, 483. pp. 129-163. Porter, D. (2003) The mild-slope equations. Journal Of Fluid Mechanics, 494. pp. 51-63. Conference or Workshop Item Porter, D. and Porter, R. (2004) Wave scattering by an ice sheet of variable thickness. In: Proceedings, 19th International Workshop on Water Waves and Floating Bodies, Cortona, Italy.
{"url":"http://centaur.reading.ac.uk/view/creators/90002722.html","timestamp":"2014-04-20T13:44:25Z","content_type":null,"content_length":"17728","record_id":"<urn:uuid:80aaeda5-be92-4eb8-8b77-bccad662d278>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
2. Expand (4x – 3y)^4 using Pascal's Triangle. (2 points)

i tend to line these up in columns. you know the row of the triangle that you should use?

1, 1 2 1, 1 3 3 1, 1 4 6 4 1 , yeah, its the 4th

1 4 6 4 1 <-- pascals triangle row
a^4 a^3 a^2 a^1 a^0 <-- first term
b^0 b^1 b^2 b^3 b^4 <-- second term
-----------------------
multiply them down and add them all up for the answer
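Carrying out the scheme in the replies with $a = 4x$ and $b = -3y$ (this worked line is added for completeness and is not part of the original thread):
$$(4x-3y)^{4} = (4x)^{4} + 4(4x)^{3}(-3y) + 6(4x)^{2}(-3y)^{2} + 4(4x)(-3y)^{3} + (-3y)^{4} = 256x^{4} - 768x^{3}y + 864x^{2}y^{2} - 432xy^{3} + 81y^{4}.$$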
{"url":"http://openstudy.com/updates/5092ec3ee4b0b86a5e52e6ea","timestamp":"2014-04-20T21:00:10Z","content_type":null,"content_length":"32603","record_id":"<urn:uuid:95686844-7dc3-46d4-ae00-aa73f3c8dcf0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
PTC Mathcad Prime Worksheets Directory - Mechanical Engineering

Worksheet name and description:

Analyzing Process Capabilities: This worksheet using PTC Mathcad takes a normal distribution to represent the amount of products lying outside the tolerance limits.

Bearing Capacity and Maximum Load of a Pier: This worksheet shows you how to calculate the bearing capacity factor for a specific pier, and then uses this calculation to find the maximum load of the pier.

Calculating Force on a Truss Connection Joint: This worksheet using PTC Mathcad helps you to calculate the force and pressure on a connection joint within the specified dimensions of the joint for a particular material, and under particular trusses and weld lengths.

Choosing a Cam-Type Clutch to Drive a Centrifugal Pump: This worksheet using PTC Mathcad shows you how to properly choose a cam-type clutch to drive a centrifugal pump.

Determining Suction Lift Pump Height Using Solve Blocks: This worksheet using PTC Mathcad helps you to determine the maximum height that a pump can be placed under a reservoir, and maintain the operating conditions set by a pump manufacturer.

Example of Using Laplace Transforms to Solve an ODE: This worksheet illustrates PTC Mathcad's ability to symbolically solve an ordinary differential equation using Laplace transforms. In this example, from dynamics, the worksheet demonstrates how to find the motion x(t) of a mass m attached to a spring (strength k) and dashpot (coefficient c) due to a known applied force F(t).

Finding the Actual Force and Brake Capacity of a Long-Shoe Internal Drum: This worksheet using PTC Mathcad shows you how to determine the actuating force and the brake capacity of a long-shoe internal brake.

Finding the Shear Force and Bending Moment Along a Beam: This worksheet using Mathcad provides you with an example of how the shear force and the bending moment along a simply supported beam can be determined as a function of the distance from one end.

Finite vs. Infinite Life and Fatigue Failure for a Steel Object: This worksheet using PTC Mathcad provides an example of how to calculate the fatigue failure of a rotating beam specimen made of ductile steel. Fatigue failures are when machine parts fail due to repeated or fluctuating stresses, mainly because of the very large number of times the part is subjected to the fluctuating stresses.

Linear and Angular Momentum of Three Small Balls: This worksheet looks at three small spheres of equal mass that are connected to a small ring by three inextensible, inelastic cords of equal length and equal spacing.

Preventing System Failures with Reliability Testing for Pump Systems: This worksheet shows you how to determine the minimum number of pumps needed so that the probability of not having a system failure (system reliability) during a three year period is more than .95.

The Weibull Distribution Function in Reliability Statistics: This worksheet using PTC Mathcad shows you how to define a 2-parameter Weibull distribution and shows you how the distribution can be influenced by scale and shape parameters.
{"url":"http://communities.ptc.com/docs/DOC-3783","timestamp":"2014-04-21T14:57:34Z","content_type":null,"content_length":"110782","record_id":"<urn:uuid:d10223cc-45c8-46f3-870c-a08f0a7bc20e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic problem using for loops w/ functions. help :(

Would someone help me with this word problem? :(
Write a function that will accept three integers as parameters; call these three integers start, end and step respectively.
- Assume that start is always less than end and step is a positive integer greater than zero.
- Thereafter, the function should print the numbers such that the first item is start, the second item is equivalent to the first item plus the value of step, the third item is equivalent to the second item plus the value of step and so on. The last value to be printed should not be greater than end.

I should have a macro for this....

can you help me with this ? :(

Probably. You should make an attempt first though, and that is what Salem is getting at.

void notepad(int,int,int);
void main()
int a,b,c;
void notepad(int a,int b, int c)
int start,end,step;
printf("Enter the 1st Number :");
printf("Enter the 2st Number :");
printf("Enter the 3st Number :");

is this correct? :( im still newbie for C

Well the first step would be for you to show that you can either
- call a function with parameters, but can't do the loop
- can do the loop in main, but somehow cannot call a function
- can do a loop, but only if step is 1
Now, where exactly are you stuck?

Try start += step

where i would put "start+=step" ? :( sorry.

Where in the code you have at the moment do you see start being modified? Why do you pass a, b, and c to your function? They aren't used or modified. Don't use void main: SourceForge.net: Void main - cpwiki

> where i would put "start+=step" ?
You're making me think you didn't write the above code. Just think about it. In the loop, instead of incrementing by 1 every time, increment by step. You don't need "start=start;".
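For reference, the behavior being described (print start, then start plus step, and so on, stopping before the value exceeds end) can be sketched as follows. This is an added illustration written in Python rather than C, with my own function name, and it is not the solution posted in the thread:

```python
def print_sequence(start, end, step):
    # Assumes start < end and step > 0, as stated in the problem.
    value = start
    while value <= end:          # the last printed value must not exceed end
        print(value)
        value += step            # the "start += step" idea from the replies

print_sequence(10, 30, 7)        # prints 10, 17, 24
```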
{"url":"http://cboard.cprogramming.com/c-programming/147959-basic-problem-using-loops-w-functions-help-printable-thread.html","timestamp":"2014-04-20T07:41:58Z","content_type":null,"content_length":"11674","record_id":"<urn:uuid:971480ab-af1c-4af4-bb6d-5cd21d5621f3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Number Theory Seminar • Thursday, September 4, 2008. Speaker: Michael Filaseta, USC. Title: A collection of problems to ponder on polynomials. Abstract: In this talk, we discuss results and problems associated with polynomials. The topics in our collection of problems will vary, but the main focus will be on polynomials all of whose coefficients are either 0 or 1 and the problems will typically be ones associated with their factorization. Although we will have some answers to reveal to some of the questions, there will be plenty of open problems presented. • Thursday, September 11, 2008. Speaker: Matthew Boylan, USC. Title: Gaussian hypergeometric series. Abstract: Introduced by Greene in the early 1980s, Gaussian hypergeometric series are finite field analogs of classical hypergeometric series. In this talk we will discuss some of their connections with number theory. • Thursday, September 18, 2008. Speaker: Michael Mossinghoff, USC and Davidson College. Title: Barker sequences: Come on down! Abstract: A Barker sequence is a finite sequence of integers a_0,...a_{n-1}, each +1 or -1, for which the absolute value of the sum over j of a_j times a_{j+k} <= 1 for k not equal to 0. It has long been conjectured that long Barker sequences do not exist. We describe some recent work connecting this problem to several open questions posed by Littlewood, Mahler, Erdos, Newman, Golay, and others about the existence of polynomials with +1 or -1 coefficients that are ``flat'' in some sense over the unit circle. If time permits, we will also describe some related work concerning mean values of L_p norms and Mahler's measure for certain families of polynomials. • Thursday, September 25, 2008. Speaker: Ethan Smith, Clemson University. Title: A Barban-Davenport-Halberstam asymptotic for number fields. Abstract: Let a and q be coprime. Dirichlet's Theorem on primes in arithmetic progressions is a well-known result giving information about the distribution of primes congruent to a modulo q. That is, the theorem tells us approximately how many such primes are in the interval [1,x]. In the mid 1960s, Barban, and independently, Davenport and Halberstam, began a study of the mean square error for the approximation given in Dirichlet's Theorem. The so-called Barban-Davenport-Halberstam Theorem essentially says that the square of the error in Dirichlet's approximation is small on average. Later, Montgomery, Hooley, and others sharpened their theorem by giving an asymptotic formula for the mean square error. In this talk, we will discuss a natural generalization of these ideas to the setting of number fields. • October 9 - 12, 2008. FALL BREAK: No seminar this week. • Saturday October 11 and Sunday, October 12, 2008. Palmetto Number Theory Series (PANTS) VII, College of Charleston, Charleston, South Carolina. • Thursday, October 16, 2008. Speaker: Michael Filaseta, USC. Title: The density of square-free 0, 1-polynomials. Abstract: This talk will be based on work with Sergei Konyagin from several years ago concerning the density of polynomials that are not divisible by the square of a non-constant polynomial among all polynomials having coefficients either 0 or 1. We will examine how this density result is connected to squarefree numbers missing digits in a given base. The talk makes a good introduction to some analytic number theoretic ideas. • Thursday, October 23, 2008. Speaker: Matt Boylan, USC. Title: S_4-modular forms. 
Abstract: Let f be an integer weight modular form with integer coefficients and let l be prime. By work of Serre and Deligne, one can associate to f a Galois representation whose image lies in GL_2(Z/lZ), the group of 2x2 invertible matrices with entries in Z/lZ. For given f, let G(f,l) be its image. In this talk we will discuss properties of modular forms f for which G(f,l) modulo scalar matrices is isomorphic to S_4, the permutation group on 4 elements. It turns out that the Fourier coefficients of such forms satisfy striking congruence properties modulo l. Moreover, the study of such forms is related to the famous Artin Conjecture on the analytic continuation of L-functions constructed from Galois representations. • Thursday, October 30, 2008. Speaker: Tim Flowers, Clemson University. Title: Asymptotics of Bernoulli, Euler, and Strodt Polynomials. Abstract: It is well known that both Bernoulli polynomials and Euler polynomials on a fixed interval are asymptotically sinusoidal. A recent paper by Borwein, Calkin, and Manna uses an idea of Strodt to generalize Bernoulli and Euler polynomials and view them as members of a family of polynomials. We used these ideas to study the asymptotics of non-uniform Strodt polynomials. We will describe the experimental process which led to several conjectures. In addition, we will show how experiments suggested the methods used to prove some of these results. • Thursday, November 6, 2008. Speaker: Dan Baczkowski, USC. Title: Applications of the Hardy-Littlewood Circle Method. Abstract: Goldbach's Conjecture states that every even integer > 2 can be written as the sum of two primes. The ternary Goldbach conjecture states that every odd integer > 5 is the sum of three primes. Vinogradov made monumental progress on the latter problem by showing that it was valid for sufficiently large odd integers. The proof utilizes ubiquitous techniques from combinatorics, complex analysis, Fourier analysis, number theory, and much more. We will introduce the powerful tool referred to as the Hardy-Littlewood circle method and mention some of its applications. In particular, we will discuss its relevance when sketching the proof of Vinogradov's Theorem. Be ready for a short historical overview, an introduction to analytic number theory, the recent progress of the Goldbach conjectures, and enormous fun! • Tuesday, November 11, 2008. Speaker: Jim Brown, Clemson University. Note that this date is different than usual, but the time and location are as usual. Title: The Eisenstein ideal and generalizations. Abstract: In a short paper in 1976 Ken Ribet gave a revolutionary proof of the converse of Herbrand's theorem, a result relating certain Bernoulli numbers to sizes of pieces of ideal class groups. Ribet was able to prove this result by studying congruences between modular forms and the information these congruences give in terms of Galois representations. We will briefly outline Ribet's argument. From there we will give a different (more general) formulation of Ribet's result in terms of the Eisenstein ideal. Time permitting we will then define a generalization of the Eisenstein ideal that can be used in more general settings than the one pursued by Ribet. All necessary definitions will be recalled. • Thursday, November 20, 2008. Speaker: John Webb, USC. Title: Some examples of theorems of Waldspurger and Kohnen-Zagier with an application to partitions. Abstract: In this talk we give an introduction to theorems of Waldspurger and Kohnen-Zagier.
Roughly speaking, these theorems assert that Fourier coefficients of half-integral weight modular forms are square roots of central critical values of modular L-functions, up to explicitly identifiable factors. We will discuss Tunnell's solution (which relies crucially on Waldspurger's work) to the ancient congruent number problem. The congruent number problem asks for necessary and sufficient conditions for a positive integer to be the area of a right triangle with rational side lengths. Time permitting, we will also give a famous example of Kohnen and Zagier and discuss current work of the speaker which connects values of the ordinary partition function to values of modular L-functions via Waldspurger's work. • November 26 - 30, 2008. THANKSGIVING RECESS: No seminar. • Thursday, December 4, 2008. Speaker: Pradipto Banerjee, USC. Title: On a polynomial conjecture of Pal Turan. • December 8 - 15, 2008. FINAL EXAMS: The number theory seminars have concluded for the semester. • Saturday, December 13 and Sunday, December 14 , 2008. Palmetto Number Theory Series (PANTS) 8, University of South Carolina, Columbia, South Carolina. Department of Mathematics main page
{"url":"http://www.math.sc.edu/~boylan/seminars/ntseminarFa08.html","timestamp":"2014-04-17T15:26:23Z","content_type":null,"content_length":"11112","record_id":"<urn:uuid:47176249-a648-4c28-91ef-6f7fd12a843c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Complexity Theory Lecture Notes
There are two graduate-level courses in complexity theory that I have taught here at Rutgers. Notes that were prepared for some of the material covered in those courses are available for your reading:
• Levin's Lower Bound Theorem (These notes present a lovely theorem that should be in all textbooks but isn't. Everyone knows Blum's "speed-up theorem" that shows that there are certain problems that have nothing at all like an optimal algorithm. At first glance, this might indicate that some problems have no tight lower bound on their complexity. However, this result of Levin's shows that every computable function does have a tight lower bound.)
198:540 -- Combinatorial Methods in Complexity Theory
Other Excellent Sets of Notes
{"url":"https://www.cs.rutgers.edu/~allender/lecture.notes/","timestamp":"2014-04-16T22:00:56Z","content_type":null,"content_length":"5582","record_id":"<urn:uuid:88d220ee-eb1f-4ef9-82f4-037d194045dd>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Classifying Equivariant Maps Between Fin-Dim Irreducible Modules

Let $G$ be a compact semi-simple Lie group (or to be even more concrete let $G = SL(N)$), and let $V$ and $W$ be finite dimensional irreducible representations of $G$. Surely it is very well-known how to classify the $G$-equivariant linear maps between $V$ and $W$. Can anyone enlighten me as to how this works? I am most interested in the case where $\text{dim}(V) = \text{dim}(W)$ and the maps are isomorphisms. Also, in the quantum setting, ie for $SL_q(N)$, does this classification pass over to a classification of comodule maps between the modules $V_q$ and $W_q$?

$SL(N)$ is not compact. You need to be more precise here. Do you mean a complex or real semisimple group? Do you mean compact semisimple? In that case you probably want $SU(N)$ rather than $SL(N)$. In any case this is probably too elementary for MO. The short answer is that representations are classified by highest weights, and you have a nonzero morphism between two irreducible representations if and only if they have the same highest weight. The space of intertwiners in that case is 1-dimensional. The same holds true in the quantum case as well. – MTS Jul 12 '13

Where could I get this, in say Humphrey's book? – Milan Bernolak Jul 12 '13 at 16:31

In Humphreys' book it is in Chapter VI, Representation Theory. Alternatively you can find it in Chapter V of Knapp's book Lie Groups Beyond an Introduction, 2nd edition. – MTS Jul 12 '13 at 18:40

Great, thanks a lot. – Milan Bernolak Jul 13 '13 at 17:25
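For reference (this gloss is added and is not part of the original exchange): the classification mentioned in the comments is the usual consequence of Schur's lemma together with the highest-weight classification. For finite-dimensional irreducibles $V_\lambda$ and $V_\mu$ of a semisimple group, and likewise for irreducible $SL_q(N)$-comodules in the quantum setting,
$$\dim \operatorname{Hom}_G(V_\lambda, V_\mu) = \begin{cases} 1, & \lambda = \mu,\\ 0, & \lambda \neq \mu,\end{cases}$$
so every nonzero equivariant map between irreducibles with the same highest weight is an isomorphism and is unique up to a scalar.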
{"url":"http://mathoverflow.net/questions/136526/classifying-equivariant-maps-between-fin-dim-irreducible-modules","timestamp":"2014-04-17T15:54:54Z","content_type":null,"content_length":"51821","record_id":"<urn:uuid:c1ff52f6-9fcf-4ebb-9e82-f5a9049fcf22>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
Number Triangle
June 10th 2006, 04:54 AM #1 Jun 2006 NSW Australia

Hi, I am wondering if someone could help me on this. Here are two examples of a number triangle. There are four consecutive positive integers 10, 11, 12 and 13 on the bottom line, in some order. Starting from the left, each number in the bottom line is added to the number next to it and the answer written above the space between them. This is repeated for each line, until there is a single number on top. Please see attached figure.

Number triangles with different numbers in the bottom line, or with the same numbers but in a different order, are regarded as different. The two examples shown are, therefore, different.

a. By changing the arrangement of the numbers 10, 11, 12 and 13 in the bottom line, what top numbers are possible?

b. Another number triangle has six consecutive integers in some order in the bottom line and top number 2006.
i. Give two examples of this: one with 60, 61, 62, 63, 64 and 65, in some order, in the bottom line, and the other with 61, 62, 63, 64, 65 and 66, in some order, in the bottom line.
ii. Show that these are the only possible sets of six consecutive numbers for the bottom line of a number triangle with top number 2006.
iii. How many different number triangles are there with six consecutive numbers in the bottom line, in some order, and with top number 2006? (Recall that two number triangles are different if their bottom line numbers are different or their bottom line orders are different.)
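One way to get started (this sketch is an addition, not part of the original post): repeated pairwise addition produces binomial weights, so for a bottom row $a, b, c, d$ the top number is
$$a + 3b + 3c + d = (a+b+c+d) + 2(b+c),$$
and for a bottom row of six numbers $a, b, c, d, e, f$ it is
$$a + 5b + 10c + 10d + 5e + f = (a+b+c+d+e+f) + 4(b+e) + 9(c+d).$$
For part (a), the four numbers sum to 46 and $b+c$ can be any of 21, 22, 23, 24 or 25, so the possible top numbers are 88, 90, 92, 94 and 96.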
{"url":"http://mathhelpforum.com/algebra/3356-number-triangle.html","timestamp":"2014-04-19T02:35:28Z","content_type":null,"content_length":"30502","record_id":"<urn:uuid:0fdb2b5f-8038-4e3f-9dd1-d660fe89975e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
North Providence Trigonometry Tutor Find a North Providence Trigonometry Tutor ...While teaching high school geometry, I realized that some students feel challenged by what seems like a large number of formulae to memorize. As your tutor, I would walk you through the process of deriving a formula so that together we could understand where each formula comes from. I think thi... 14 Subjects: including trigonometry, chemistry, calculus, geometry ...When I taught at a Catholic middle school, I helped eighth-graders prepare for these tests. The students were happy with their actual test results, and every student was accepted by a Catholic high school. I have experience teaching with a tutoring service all levels of the ISEE. 45 Subjects: including trigonometry, Spanish, chemistry, English ...My strengths are in high school math subjects, such as Geometry, Algebra, Algebra II, and Statistics. It has been a while since I have looked at a calculus book, but I think that it would come back fairly easily with materials in hand. I can help with test prep in math subjects for SAT, ACT and AP. 17 Subjects: including trigonometry, calculus, geometry, statistics I have been teaching math at the high school level for the past 14 years. I teach at a vocational school so I must be able to adapt to all learning styles. I feel it is critical to understand why something is and not just how to go through the motions to get the correct answer. 6 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...Although especially well-versed in the Humanities, I also excelled in Math. I am a particularly verbal individual and I take pride in my ability to clearly explain concepts to people. While in college, I taught a Literature class to high school students and entirely created the syllabus and lesson plans. 32 Subjects: including trigonometry, English, reading, geometry
{"url":"http://www.purplemath.com/North_Providence_trigonometry_tutors.php","timestamp":"2014-04-21T15:22:49Z","content_type":null,"content_length":"24535","record_id":"<urn:uuid:cbe17590-39b5-41db-aaac-77d7991f7acf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Isosceles Triangles

Isosceles triangles have at least two sides that are exactly the same length. This forces two of their angles to also be acute angles of exactly the same size. In this blue triangle, the two longer sides are the same length, which forces the two bottom angles to be the same size. If the third side is also the same length as the first two sides, then the triangle is an equilateral triangle. All equilateral triangles have to be isosceles (eye-SAH-suh-leez) triangles, but not all isosceles triangles are equilateral.

Isosceles triangles, like other triangles, have a perimeter and an area. You can find out the perimeter of an isosceles triangle by adding together the length of all three sides. It's harder to find out the area. To find the area of an isosceles triangle, start by drawing a line down the middle, from the top point to the middle of the bottom side. You'll notice that an isosceles triangle has bilateral symmetry - that will be useful.

Now you have two smaller triangles, but they're exactly the same size, each half the area of the original triangle. The line you drew down the middle is perpendicular to the bottom side of the triangle, so those two lines meet at right angles, forming two right triangles that are the same size. Because each of these right triangles makes half of a rectangle, you can imagine moving one of them, turning it upside down, and putting it against the other one to make a rectangle. Now it is easy to find the area of that rectangle if you know the height of the isosceles triangle and how wide the bottom is.

But suppose you only know the lengths of the three sides, and not how long the center line is? No problem. You can calculate the height of the center line using the Pythagorean Theorem, because half of an isosceles triangle is a right triangle.
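To put that procedure into symbols (this formula is an added note; the page itself works only in words): if each of the two equal sides has length $s$ and the bottom has length $b$, the Pythagorean Theorem gives the height of the center line, and then the area and perimeter follow:
$$h = \sqrt{s^{2} - \left(\frac{b}{2}\right)^{2}}, \qquad \text{Area} = \frac{1}{2}\,b\,h, \qquad \text{Perimeter} = 2s + b.$$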
{"url":"http://scienceforkids.kidipede.com/math/geometry/isoceles.htm","timestamp":"2014-04-19T18:10:43Z","content_type":null,"content_length":"12374","record_id":"<urn:uuid:80be5e99-f3c2-44a2-8ee6-c423ff4de975>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Probabilistic Models of Cognition From Church Wiki By Noah D. Goodman, Joshua B. Tenenbaum, Timothy J. O'Donnell, and the Church Working Group^[1].^[2] (This tutorial is based on the ESSLLI Tutorial, created by Goodman, Tenenbaum, and O'Donnell.) What is thought? How can we describe the intelligent inferences made in everyday human reasoning and learning? How can we engineer intelligent machines? The computational theory of mind aims to answer these questions starting from the hypothesis that the mind is a computer, mental representations are computer programs, and thinking is a computational process—running a computer program. But what kind of program? A natural assumption is that this program take the inputs—percepts from the senses, facts from memory, etc—and compute the outputs—the intelligent behaviors. Thus the mental representations that lead to thinking are functions from inputs to outputs. However, this input-output view suffers from a combinatorial explosion: we must posit an input-output program for each task in which humans draw intelligent inferences. A different approach is to assume that mental representations are more like theories: pieces of knowledge that can support many inferences in many different situations. For instance, Newton's theory of motion makes predictions about infinitely many different configurations of objects and can be used to reason both forward in time and from the consequences of an interaction to the initial state. The generative approach posits that mental representations are more like theories in this way: they capture more general descriptions of how the world works—hence, the programs of the mind are models of the world that can be used to make many inferences. ^[3] A generative model describes a process, usually one by which observable data is generated. Generative models represent knowledge about the causal structure of the world—simplified, "working models" of a domain. These models may then be used to answer many different questions, by conditional inference. This contrasts to a more procedural or mechanistic approach in which knowledge represents the input-output mapping for a particular question directly. While such generative models often describe how we think the "actual world" works, there are many cases where it is useful to have a generative model even if there is no "fact of the matter". A prime example of the latter is in linguistics, where generative models of grammar can usefully describe the possible sentences in a language by describing a process for constructing sentences. It is possible to use deterministic generative models to describe possible ways a process could unfold, but due to sparsity of observations or actual randomness there will often be many ways that our observations could have been generated. How can we choose amongst them? Probability theory provides a system for reasoning under exactly this kind of uncertainty. Probabilistic generative models describe processes which unfold with some amount of randomness, and probabilistic inference describes ways to ask questions of such processes. This tutorial is concerned with the knowledge that can be represented by probabilistic generative models and the inferences that can be drawn from them. In order to make the idea of generative models precise we want a formal language that is designed to express the kinds of knowledge individuals have about the world. This language should be universal in the sense that it should be able to express any (computable) process. 
We build on the λ-calculus (as realized in functional programming languages) because the λ-calculus describes computational processes and captures the idea that what is important is causal dependence—in particular the λ-calculus does not focus on the sequence of time, but rather on which events influence which other events. We introduce randomness into this language to construct a stochastic λ-calculus, and describe conditional inferences in this language.

1. ↑ Contributors include: Andreas Stuhlmueller, John McCoy, Tomer Ullman, Long Ouyang.
2. ↑ The construction and ongoing support of this tutorial are made possible by grants from the Office of Naval Research and the James S. McDonnell Foundation.
3. ↑ Of course, the process by which inferences are drawn from a "model" or "theory" can, and should, also be described as a computational process. It is, however, useful to separate computational descriptions of knowledge and the inferences that can be drawn from knowledge, from computational descriptions of the process of inference. This is similar to Marr and Poggio's notion of "levels of analysis" (see Marr, 1982).
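As a small illustration of the distinction drawn above between a generative process and the questions asked of it, here is a toy model and a conditional query. This example is added here, uses Python rather than Church, and the model, its probabilities, and all names are invented for illustration only:

```python
import random

def flip(p=0.5):
    # a single stochastic choice, the basic building block of a generative model
    return random.random() < p

def sketch_of_a_day():
    # a toy generative "working model" of how two observations could be produced
    rain = flip(0.2)
    sprinkler = flip(0.4)
    grass_wet = rain or (sprinkler and flip(0.9))
    return rain, grass_wet

def probability_rain_given_wet(samples=100_000):
    # conditional inference by rejection: keep only runs consistent with the observation
    kept = [rain for rain, wet in (sketch_of_a_day() for _ in range(samples)) if wet]
    return sum(kept) / len(kept)

print(probability_rain_given_wet())  # roughly 0.41 with these made-up numbers
```

The same generative model answers many different questions: conditioning on a different observation, or querying a different variable, requires no new input-output program, only a different query over the same process.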
{"url":"http://projects.csail.mit.edu/church/wiki/Probabilistic_Models_of_Cognition","timestamp":"2014-04-25T05:28:07Z","content_type":null,"content_length":"19952","record_id":"<urn:uuid:ecc7679d-3c2c-4496-885b-7f27cd88fa40>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Patricia G. For the last ten years, I have been working as an adjunct instructor in the area of developmental mathematics at Houston Community College System (HCCS) and Lone Star College (Fairbanks Center). I teach all levels of the developmental math classes. I enjoy working in this environment, because the student population is rather diverse. Whether the students are young or old, I seem to relate well with them. Most of the students who come in at this level simply hate math or else they fear it. I enjoy the process of helping them see the simplicity of math by stressing the rules. Eventually, they develop confidence in their own abilities, and they realize that there is nothing to fear. Patricia's subjects
{"url":"http://www.wyzant.com/Tutors/TX/Houston/8092344/?g=3FI","timestamp":"2014-04-20T18:50:50Z","content_type":null,"content_length":"75660","record_id":"<urn:uuid:31d998dc-e5c5-40a6-af1e-d6d74e566665>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Please help :). The maximum amplitude and the maximum acceleration of the foundation of an industrial fan were found to be x_max=0.2mm and a_max=0.3

Here [drawing]

so whats the question, that is correct \(a=\ddot x\) where differentiation is w.r.t time.

oops sorry forgot to post the question. Determine the operating speed of the fan.

[drawing] Am I right?

if u take x as sinusoidal, then x_max = A=0.2mm. are you suppose to take it as sinusoidal ?

i am nt sure. Bt in the answr given they use \(x=Acos(\omega t)\). i dnt get why they use it.

i would write that in this manner, you are correct numerically. \(|x|=A\), \(|\ddot x|=Aw^2\) so w^2 = 0.3/0.2 as A=0.2 and \(|\ddot x|=0.3\)

and this speed you will get it in rad/sec ofcourse by taking sqrt of w^2. right ?

where do we use \(x=Asin\omega t\) and \(x=Acos\omega t\)? Are they equal? I am nt getting this.

they aren't equal. just phase shift of pi/2 so either of them can be used. but in this question even it doesn't make sense to me using sinusoidal wave for fan equation, if it is given, then its

so what equation should be used here @hartnn?

not sure, need to google it ..but since in your given answer sine waves are used, your teacher/course must expect u to take it as sinusoidal everytime....

Ha k. Thanxxxxx a lot for helping me:)

welcome ^_^
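For reference, the relation the replies are using, written out explicitly (this note is an addition; the units of a_max in the original post are incomplete, so only the symbolic form is given): if the foundation motion is taken as sinusoidal, $x(t) = A\cos\omega t$, then
$$x_{\max} = A, \qquad a_{\max} = |\ddot{x}|_{\max} = A\omega^{2}, \qquad \omega = \sqrt{\frac{a_{\max}}{x_{\max}}},$$
provided $a_{\max}$ and $x_{\max}$ are expressed in consistent units; the operating speed in revolutions per minute is then $N = 60\,\omega/(2\pi)$.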
{"url":"http://openstudy.com/updates/509696b5e4b0d0275a3ce903","timestamp":"2014-04-21T08:01:53Z","content_type":null,"content_length":"129394","record_id":"<urn:uuid:05870c7e-d885-40d1-9a8f-95bb260eab3c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
Maintainer: diagrams-discuss@googlegroups.com
Safe Haskell: None

The alignment of an object refers to the position of its local origin with respect to its envelope. This module defines the Alignable class for things which can be aligned, as well as a default implementation in terms of HasOrigin and Enveloped, along with several utility methods for alignment.

Alignable class

class Alignable a where
Class of things which can be aligned.

alignBy :: V a -> Scalar (V a) -> a -> a
alignBy v d a moves the origin of a along the vector v. If d = 1, the origin is moved to the edge of the envelope in the direction of v; if d = -1, it moves to the edge of the envelope in the direction of the negation of v. Other values of d interpolate linearly (so for example, d = 0 centers the origin along the direction of v).

Instances:
(Enveloped b, HasOrigin b) => Alignable [b]
Alignable a => Alignable (Active a)
(Enveloped b, HasOrigin b, Ord b) => Alignable (Set b)
(InnerSpace v, OrderedField (Scalar v)) => Alignable (Envelope v)
(InnerSpace v, OrderedField (Scalar v)) => Alignable (Path v)
(Enveloped b, HasOrigin b) => Alignable (Map k b)
(HasLinearMap v, InnerSpace v, OrderedField (Scalar v), Monoid' m) => Alignable (QDiagram b v m)

General alignment functions

align :: (Alignable a, Num (Scalar (V a))) => V a -> a -> a
align v aligns an enveloped object along the edge in the direction of v. That is, it moves the local origin in the direction of v until it is on the edge of the envelope. (Note that if the local origin is outside the envelope to begin with, it may have to move "backwards".)
{"url":"http://hackage.haskell.org/package/diagrams-lib-0.7/docs/Diagrams-Align.html","timestamp":"2014-04-19T15:13:04Z","content_type":null,"content_length":"12249","record_id":"<urn:uuid:06bca9b2-eca1-439a-9f10-da6415bb4881>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: Calculating Determinant of Hessian Matrix Over Observations

From: "Michael Blasnik" <michael.blasnik@verizon.net>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: Re: Calculating Determinant of Hessian Matrix Over Observations
Date: Mon, 01 Oct 2007 08:38:54 -0400

listed, then why not just calculate it directly using -gen-?

gen det=A*C-B^2

Michael Blasnik

----- Original Message -----
From: "Asgar Khademvatani" <akhademv@ucalgary.ca>
To: <statalist@hsphsun2.harvard.edu>
Sent: Monday, October 01, 2007 7:34 AM
Subject: st: Calculating Determinant of Hessian Matrix Over Observations

Dear All,
I have 43 observations on the variables A, B, and C. I am trying to make 43 symmetric Hessian matrices (2 by 2), calculate each matrix determinant, and list the calculated determinants for each observation. I have made the following loop but it is subject to error as follows;

forvalues i = 1/43 {
matrix H = (A[`i'], B[`i'] \ B[`i'], C[`i'])
gen det_H = det(H)
di "`i'"
matrix list Ha_g_1
list det_H
}

I am getting the following error as follows;

| det_H |
1. | 6.28e-13 |

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-10/msg00008.html","timestamp":"2014-04-18T13:20:51Z","content_type":null,"content_length":"6720","record_id":"<urn:uuid:3580a0e3-7122-42d1-8c1b-e51b3af668c0>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
UCCS | Math - Math 1350 Lab 3
Math 1350 Calculus I - Lab 3 - Maximization and Minimization

Math 1350 - Calculus I - Lab 3. Optimization

A large class of problems spanning many fields of human endeavor are those concerned with optimization. Simply put, in an optimization problem we attempt to maximize or minimize a given quantity in a particular situation. For example, a dairy company may want to optimize its delivery routes so that fuel expenses are kept to a minimum. In order to find a suitable solution to this problem, it is necessary for us to develop a mathematical model that accurately describes how the choice of delivery routes affects the consumption of fuel. In general, problems in optimization can be extremely difficult to solve (try to analytically describe the relationship just presented!). However, problems of this type are so important that an entire branch of mathematics, the calculus of variations, was developed to handle these kinds of problems. Fortunately for us, many optimization problems such as the one below are solvable by applying fundamental principles of calculus.

Suppose a company has a contract to build several open metal trash bins. Each has a square base and will hold 1000 cubic feet. It orders a pre-cut sheet for the bottom and another that it bends three times to form the four sides. (There is no top.) It must then weld the seams - one vertical and four horizontal. Records indicate that welding costs $2.10 per foot including labor and materials. The cost of the sheet metal is $1.85 per square foot. The company needs to answer the following questions: 1) What are the dimensions of the box that will minimize the cost? 2) What is the minimal cost?

The Solution: In order to minimize the cost, we must first describe how the dimensions and the cost are related. Let h denote the height of the box and let b denote the length (and width) of the base. The total cost of the box is given by the equation

Total cost = (cost of the sheet metal) + (cost of the welding).

We now need to describe these costs in terms of the relevant variables h and b. The bottom of the box has an area of b^2 square feet, and there are four sides each having an area of bh square feet. Therefore the total cost of the metal is given by 1.85(b^2+4bh). The total length of the weld is the perimeter of the base, 4b feet, plus the height of the seam along one side, h feet. The total cost of the welding is therefore 2.10(h+4b). Substituting these expressions into the cost equation gives us

Cost = 1.85(b^2 + 4bh) + 2.10(h + 4b).

We have described the cost in terms of two variables. In order to find the minimal cost we must describe the cost in terms of only one variable so that we may calculate the derivative. The total volume gives the needed relationship between h and b. The volume of the box is 1000 cubic feet. In terms of h and b, the volume is hb^2. Therefore we have h=1000/b^2. Now we can write the cost as a function of b alone:

Cost(b) = 1.85(b^2 + 4000/b) + 2.10(1000/b^2 + 4b).

We define the cost function in Maple:
> Cost:=1.85*(b^2+4000/b)+2.1*(1000/b^2+4*b);
Now we find the derivative and plot it. We will use the plot to estimate the location of the stationary point so that we may then use the "fsolve" command to find the actual value. The reason, of course, that we do this is that a function's minimum or maximum value will occur at stationary points or endpoints. To get Maple to take the derivative of the function we have named "Cost," we simply use the diff command and tell Maple what we want to take the derivative of and what to differentiate with respect to.
> dCost:=diff(Cost,b); > plot(dCost,b=-20..20,y=-500..100,title=`Derivative of the cost function`); Note that you will have to experiment with the plot window in order to find the appropriate range and domain settings. From the graph it appears that the derivative is zero somewhere between b = 10 and b = 15. We use "fsolve" to tell it to look for a solution in the interval [10,15]: > bmin:=fsolve(dCost=0,b,10..15); This value of b tells what the base should be for the minimum cost to occur. We next determine the height corresponding to the minimal cost > hmin:=1000/bmin^2; In order to find the actual cost with these dimensions we can use the substitution command, "subs" in order to evaluate the cost when b = bmin . > subs(b=bmin,Cost); So each box costs approximately $998.41 to manufacture. 1. A contractor wants to bid on an order to make 150,000 boxes out of cardboard that costs 12 cents per square foot. The base of each box must be square and reinforced with an extra layer of cardboard. The contractor must assemble each box by taping the four seams around the bottom, one seam up the side, and one seam on top to make a hinged lid. Company records indicate that taping costs 11 cents per foot including labor and materials. a) Assuming that the boxes are to hold 3.5 cubic feet, write the cost of a single box as a function of the length of the base. b) What should the dimensions of the boxes be so that the production cost per box is lowest? c) If the contractor wants to make a profit of 17%, what should she bid? 2. A road from City A to City B must cross a strip of private land as shown in the figure below. Due to fees demanded by the owner, the cost of building the road on private land is 20% more per mile than it is on public land. a) Assuming the cost on public land is $89,650 per mile, what is the minimum cost of the project? b) Draw a road map with the distances clearly labeled that ensures a minimum cost, and write and explanation that the City Commissioners can understand.
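A quick cross-check of the worked example above, written in Python with SymPy rather than Maple (this is an added illustration; the variable names are my own and the printed values are approximate):

```python
import sympy as sp

b = sp.symbols('b', positive=True)

# cost of metal for the bottom and four sides plus welding, with h = 1000/b**2
cost = 1.85*(b**2 + 4000/b) + 2.1*(1000/b**2 + 4*b)

b_min = sp.nsolve(sp.diff(cost, b), b, 12)   # stationary point near b = 12
h_min = 1000 / b_min**2

print(b_min, h_min, cost.subs(b, b_min))     # roughly 12.08, 6.85, 998.41
```

This agrees with the Maple result that each box costs approximately $998.41 to manufacture.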
{"url":"http://www.uccs.edu/~math/135lab3.html","timestamp":"2014-04-21T02:18:58Z","content_type":null,"content_length":"18421","record_id":"<urn:uuid:678628cc-956e-4453-b379-c122940e0819>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
An introduction to the plane wave expansion method for calculating photonic crystal band diagrams
Aaron J. Danner
Created at the University of Illinois at Urbana-Champaign, Urbana, IL 61801 in 2002
Last edited on January 31, 2011

Abstract: Starting from basic undergraduate-level electromagnetics (Maxwell's equations) a simple method of finding band diagrams is described. Using the information here, it should be possible to fully understand what band diagrams mean, what they describe, and how they can be calculated. A simple program could easily be written. While not the most efficient method computationally, this is a good introduction to the business of band diagrams and is probably the easiest method to understand.

1. Useful downloads
Right click to download a Mathematica program useful in performing planewave expansion method calculations. Left click to go to the download site for MathReader, free software which allows Mathematica code to be viewed for study even if Mathematica is not installed.

2. Basic description of the plane wave expansion method
In order to design photonic crystals to take advantage of their unique properties, a calculation method is necessary to determine how light will propagate through a particular crystal structure. Specifically, given any periodic dielectric structure, we must find the allowable frequencies (eigenfrequencies) for light propagation in all crystal directions and be able to calculate the field distributions in the crystal for any frequency of light. There are several capable techniques, but one of the most studied and reliable methods is the plane wave expansion method. It was used in some of the earliest studies of photonic crystals [1-4] and is simple enough to be easily implemented. The method allows the computation of eigenfrequencies for a photonic crystal to any prescribed accuracy, commensurate with computing time.

For photonic crystal applications in semiconductors such as GaAs or dielectrics, Maxwell's equations, which will govern all field simulations that follow, take the following forms, where E, H, D, and B are the field vectors, J is the current density, t is time, and ρ is the charge density:

∇·D = ρ   (1)
∇×E = −∂B/∂t   (2)
∇·B = 0   (3)
∇×H = J + ∂D/∂t   (4)

Assuming perfect dielectric materials (μ_r = 1, the relative permeability) in a source-free region (J = 0 and ρ = 0), Maxwell's equations can be reduced to four equations, each involving only one type of field. This decoupling of the fields can be accomplished by taking the curl of both sides of Equation (2), and substituting from Equation (4) to give the two electric field equations. An equivalent process can be carried out in the opposite order to give the two magnetic field equations. If it is assumed that the fields are time-harmonic, then the time derivatives can be carried out explicitly and the decoupled equations can be expressed as the eigenvalue equations

∇×(∇×E) = (ω²/c²) ε_r E   (5)
∇×(∇×(D/ε_r)) = (ω²/c²) D   (6)
∇×((1/ε_r)∇×H) = (ω²/c²) H   (7)
∇×((1/ε_r)∇×B) = (ω²/c²) B   (8)

Note that the forms for H and B are identical. This is expected because μ is constant in the equations. However, the relative permittivity ε_r is not constant in our structures (it is periodic), so the placement of the 1/ε_r term is strict.

The goal here is to find the energies and electromagnetic field configurations that are allowed to exist in a periodic structure. Essentially, what we are given in the problem is the dielectric function ε_r, which will be a function of location, and we need to solve for ω and the fields. There are essentially three different choices of procedure at this point. All four equations, given a dielectric function, will yield one set of field distributions. (The H and B expansions give identical results.) After that, the other fields can simply be deduced from Maxwell's equations.
The question of which equation to solve depends on several factors. First, the equations for the magnetic fields (Equations (7) and (8)) are in a Hermitian form. Strictly speaking, the operator is Hermitian (see [5] for a detailed description of this property). Hermicity establishes that the eigenvalues are real, and that field distributions with the same eigenfrequency must be orthogonal. Usually, Hermitian eigenvalue problems are less complex computationally to solve [6], but the other forms should not be immediately overlooked as will be clear in the development which follows. Each of the decoupled equations above will yield three component equations if the vector operations are carried out. In Cartesian coordinates, they can be expressed as follows for the , , and expansions, respectively. Expansions for are not simplified because in their full forms the extra terms generated by the inner make the expressions very long. (As seen in Equations (9) - (11), each equation of consists of four terms on the left side; each equation of or consists of eight terms; and each equation of would consist of sixteen terms. The chain rule applied repeatedly to the inner products creates the extra terms.) The fields themselves and the dielectric function can be expanded in Fourier series along the directions in which they are periodic. This Fourier expansion will be truncated to a fixed number of terms, limiting the accuracy of the calculation. The truncated problem will yield an eigenvalue equation for the fields which will allow calculation of the dispersion curves. It must be pointed out that regardless of which of the four decoupled equations is solved, the eigenvalues will be the same. For a fixed number of terms, the accuracy can be improved by a proper choice. For example, when solving a problem of air spheres embedded in a dielectric, using the expansion would yield much better convergence than the others, while using the or expansions would yield better results for dielectric spheres in air [6]. The analogy with the two-dimensional structures discussed here suggests that air cylinders drilled in a dielectric background may be a problem better suited for calculation using the expansion, in terms of matrix size. The accuracy differences among the three expansions result from the different resultant spatial orientations and positions of each field. The basic approach for calculating the field distribution and eigenfrequency given a dielectric function and propagation vector is to first expand and the three components of the appropriate field vector in Fourier series. These series are then substituted into the decoupled Maxwell’s equations and the terms are reorganized into an ordinary eigenvalue problem. When the eigenvalues are calculated employing standard numerical methods (using a finite-sized matrix formed when the Fourier expansions are truncated), it is straightforward to use the eigenvalues to find the allowed propagation frequencies, and the eigenvectors to calculate the field distributions. The process is best illustrated by a simple example. 3. Example: One-dimensional photonic crystal The simplest example of a photonic crystal is a one-dimensional array of air slabs penetrating a dielectric background. Figure 1 shows the relevant axes. In this case, we will consider only waves propagating in the +z direction. 
In most photonic crystal dispersion curves, it is usually difficult to distinguish curves as “transverse magnetic" (TM-like) or “transverse electric" (TE-like), but in this simple case there are two basic polarizations, viz., and. Here we consider the . case only, and begin the problem by assuming that the only field components present are , , and . Although the justification for this may not be immediately apparent, the symmetry in the problem permits this. There is also nothing wrong with using all components of and ; the mode separation is then easily seen. (Indeed, this is the method used in higher-dimensional photonic crystal problems.) Here, the purpose of the early simplification is to more clearly illustrate the method. Figure 1: One-dimensional photonic crystal consisting of air slabs of width d embedded in a dielectric background with a periodicity of a. With only one component, only a single line in Equation (9) remains, which demonstrates the justification for using the expansion: Now, Fourier series expansions for the field and dielectric can be applied. In this case, a Fourier expansion for the inverse dielectric function is used. Equivalently, the constant can be moved to the right side of the equation and could be expanded. This would form a generalized Hermitian eigenvalue problem, or an ordinary eigenvalue problem if an additional matrix inversion were carried out in the subsequent step. In the notation that follows, will represent all Fourier coefficients. The indices m and n are integers. The variable means “Fourier coefficients, indexed by the integer n, for the y-component of the electric field” and the variable means “Fourier coefficients, indexed by the integer m, for .” Ideally, the summations should be infinite, but will be truncated for computation purposes. Note that if propagation in a direction other than z had been included in the formulation, then the additional terms would have been included in Equation (14). After the Fourier expansions are substituted into Equation (12), the initial eigenvalue equation is obtained. To simplify, each side of this equation is multiplied by an orthogonal function , where p is an integer, and integrated over a unit cell, i.e., . For a nontrivial solution, p can take only one value so one summation on each side of the equation can be dropped. After reorganizing terms and renaming the sums to use only the letters m and n, the eigenequation takes its final form of This forms an ordinary eigenvalue problem, where the integers m and n are truncated symmetrically about zero as is appropriate for this type of Fourier expansion. This corresponds to including only lower order plane waves in two dimensions. For example, if m and n were truncated to five terms (-2, -1, 0, 1, 2), then the full eigenvalue problem would appear as follows, using the notation : The matrix Q can be diagonalized using a variety of software packages and numerical methods, and the details will not be discussed here. After diagonalization, the eigenvalues and eigenvectors will be known. The eigenvalues give the dispersion diagram and the eigenvectors can be substituted back into the Fourier expansion for to find the field distribution at any given frequency. The only remaining problem is to find the dielectric coefficients , which can be obtained using the inverse Fourier transform: If the integral is split properly, then will be constant in the integration range. 
Depending upon where z = 0 is defined, the form of the result may take slightly different forms (but will make no difference as long as it is defined consistently): This is equivalent to the following, where the function : All information is now available to solve the Q matrix for the eigenvalues. Figure 2 shows the results for nine plane waves (n and m are integers between -4 and 4 inclusive). In this case, a structure was chosen with a unit period, , and . Several bandgaps are clearly visible for propagation in this direction. In this example we have examined only the case for propagation in the z direction. The interested reader is referred to [7] for a discussion of the case and off-axis propagation in the one-dimensional structure. In studies of photonic crystals, the interest is usually not in the electric or magnetic field forms themselves. It is the eigenvalues that carry information on the location of the modes in momentum space. In the general case, varying values of , , and allows construction of a complete band diagram. In more complicated structures, the band diagram is usually constructed at the boundaries of the Brillouin zone. Figure 2: Dispersion curve for propagation in the z direction for the one-dimensional photonic crystal structure. Note the presence of several bandgaps. 4. Fully vectorial, three-dimensional structures Essentially, the method remains the same for more complicated structures. Because of our interest in two-dimensional structures, we examine here the case of the triangular lattice of air holes embedded in a dielectric background. Using fabrication techniques described in the next chapter, arrays of holes can be created using electron beam lithography and etching methods. The result is a two-dimensional array of air holes in a semiconductor substrate. Therefore, for our interests the dielectric function will be periodic only in the xy plane (uniform in the z direction). This results in some simplification for the expansion. Although the structure studied is two-dimensional, propagation in all directions (including the out-of-plane propagation case) will be considered. Extension of this method to three-dimensional structures is straightforward and will be explained. In the equations used, a is the lattice spacing of a unit cell; the lattice itself is triangular within a medium with dielectric constant perforated by infinite air holes (atoms) of diameter d. The 2D triangular lattice unit cell has been widely covered in literature [2,3,5,8] and the method described here has been tested and gives equivalent results for the same problems. In this development, the triangular lattice supercell is further generalized to the N x N case and method accuracy is treated. First, we discuss the unit cell. The general formulas for the Fourier expansions in the three-dimensional case, assuming use of the expansion, are where the vectors are related to the directions of periodicity. They are actually the collection of reciprocal lattice vectors; the vectors represent the real lattice vectors, and their relationship is defined by [8]. For the structure studied here, the real and reciprocal lattice vectors are shown in Figures 3 and 4, respectively. Because it is a two-dimensional structure, there are two reciprocal lattice vectors, and . The lattice vectors shown can be expressed as Figure 3: Real lattice vectors for the 2D triangular lattice. Figure 4: Reciprocal lattice vectors for the 2D triangular lattice. 
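Before moving on to the triangular lattice, here is a minimal numerical sketch (in Python, not the Mathematica program of Section 1) of the one-dimensional calculation just described: the inverse dielectric function of the air-slab structure is expanded in sinc-type Fourier coefficients, the Q matrix of Equation (16) is assembled, and its eigenvalues give the normalized frequencies. The filling fraction, background permittivity, and truncation below are illustrative choices, not the (unstated) values behind Figure 2.

import numpy as np

def kappa(m, f, eps_b):
    # Fourier coefficients of 1/eps(z) for an air slab of width d = f*a
    # centred at z = 0 in a background of dielectric constant eps_b.
    if m == 0:
        return f + (1.0 - f) / eps_b
    return (1.0 - 1.0 / eps_b) * f * np.sinc(m * f)   # np.sinc(x) = sin(pi x)/(pi x)

def bands_1d(k, a=1.0, f=0.4, eps_b=13.0, N=4):
    # Returns normalized eigenfrequencies (omega*a/(2*pi*c)) at Bloch wavevector k.
    n = np.arange(-N, N + 1)                 # 2N+1 plane-wave indices, truncated symmetrically about zero
    G = 2.0 * np.pi * n / a                  # reciprocal lattice "vectors" 2*pi*n/a
    Q = np.empty((n.size, n.size))
    for i, p in enumerate(n):
        for j, q in enumerate(n):
            Q[i, j] = kappa(p - q, f, eps_b) * (k + G[j]) ** 2
    w2 = np.sort(np.linalg.eigvals(Q).real)  # eigenvalues are (omega/c)^2 (real up to round-off)
    return np.sqrt(np.abs(w2)) * a / (2.0 * np.pi)

if __name__ == "__main__":
    for kk in np.linspace(0.0, np.pi, 5):    # k from the zone centre to the zone edge pi/a
        print(round(kk, 3), bands_1d(kk)[:4])

Sweeping k through the first Brillouin zone and plotting the sorted eigenfrequencies against k reproduces a dispersion diagram of the kind shown in Figure 2, with gaps opening at the zone centre and zone edge.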
Equations (21) and (22) now become the following, specifically for the triangular lattice structure: At this point the same process is used as in the one-dimensional case discussed in Section 3. Equations (27) and (28) are substituted back into the appropriate decoupled Maxwell equation (Equation (9)). Again, both sides of the resulting equation are multiplied by an orthogonal function and integrated over a unit cell (see Figure 5). The rectangular area in Figure 5 represents one possible area of integration for the unit cell. After algebraic simplification, the result is an ordinary eigenvalue equation with a form similar to the one encountered in the one-dimensional case: The matrix is given by the following: Although Equations (29) and (30) are more complicated to convert into matrix form suitable for diagonalization, the process is carried out exactly as in the one-dimensional case and will not be discussed further here. Certainly, more plane waves must be used to maintain accuracy than in the one-dimensional case, but the method is the same. The calculation of the Fourier dielectric coefficients is also more complicated due to the area integral, but the method for doing so is equivalent. The general formula for the dielectric coefficient is given by Equation (31), where the area of integration is represented by the rectangular area within the solid lines in Figure 5. After changing to cylindrical coordinates, the integration is easy to split into two parts (inside the air holes and in the dielectric background). The dielectric function in each part then becomes a constant, and the integral thus simplifies for the unit cell to the form of Equation (32). In the derivation of Equation (32), the following property has been used, where Jn is the nth-order Bessel function: Figure 5: Unit cell. The dotted line represents the actual cell and the solid line represents the area covered by the integral in the dielectric Fourier coefficient computation. Using another property results in the simplification [8]. In previous studies of the triangular lattice unit cell, 140 to 225 plane waves were used [1,2,3] to calculate accurate dispersion curves. Villeneuve and Piché [2] tested the convergence to 841 plane waves and found that only 225 were necessary for good convergence. The accuracy of any given set of curves is difficult to predict because the convergence rates can change between differing structures. The plane wave expansion method was carried out to construct dispersion curves for wavevectors along the Brillouin zone shown in Figure 4. The method described creates spurious modes with zero frequency, which were removed. Also, in the in-plane propagation case modes can be separated by polarization into TE-like (E is in the xy plane) and TM-like (H is in the xy plane) modes. Two examples have been carried out with ε = 13.2 using 441 plane waves. TE-like modes were separated from the result and are shown in Figures 6 and 7 for d/a values of 0.5 and 0.8 along the Brillouin zone. The gap between the first and second bands changes with the lattice dimensions, where d refers to the diameter of the air holes, and a to the lattice spacing. Figure 8 shows the variation. Figure 6: TE-like modes for air holes embedded in a background of ε = 13.2 with d/a = 0.5. Figure 7: TE-like modes for air holes embedded in a background of ε = 13.2 with d/a = 0.8. Figure 8: Variation of TE mode gap with lattice parameters. The two bands plotted are the two bands with the lowest eigenfrequencies (ε = 13.2).
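As a small illustration of the Bessel-function form of the dielectric Fourier coefficients just discussed, the sketch below evaluates the standard closed-form coefficient for circular air holes of radius r on a triangular lattice in a background of dielectric constant eps_b. The reciprocal-lattice vectors used are one common choice consistent with Figures 3 and 4; the normalization and variable names are assumptions of this sketch rather than a transcription of Equations (32)-(34).

import numpy as np
from scipy.special import j1

def kappa_triangular(G, r_over_a, eps_b, a=1.0):
    # Fourier coefficient of 1/eps(r) for air holes (radius r = r_over_a * a)
    # on a triangular lattice of period a; G is a reciprocal-lattice vector (2-vector).
    f = (2.0 * np.pi / np.sqrt(3.0)) * r_over_a ** 2     # air filling fraction
    Gmag = np.hypot(G[0], G[1])
    if Gmag < 1e-12:
        return f + (1.0 - f) / eps_b                     # spatial average of 1/eps
    x = Gmag * r_over_a * a
    return (1.0 - 1.0 / eps_b) * 2.0 * f * j1(x) / x     # J1 term from the circular hole

a = 1.0
b1 = (2.0 * np.pi / a) * np.array([1.0, -1.0 / np.sqrt(3.0)])   # one choice of reciprocal lattice vectors
b2 = (2.0 * np.pi / a) * np.array([0.0,  2.0 / np.sqrt(3.0)])
print(kappa_triangular(b1, r_over_a=0.25, eps_b=13.2))
print(kappa_triangular(2 * b1 + b2, r_over_a=0.25, eps_b=13.2))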
5. Supercell techniques

If a defect is introduced into the otherwise periodic structure, then defect modes can arise in the photonic band structure. To study defect modes, the same plane wave expansion method can be used. The basic idea is to replace the unit cell by a more complicated unit cell and preserve the periodicity. For example, a 4 x 4 supercell with a central defect can give reasonable accuracy because the missing holes are spaced four lattice units apart. As long as confined modes do not couple to one another, the results of the calculation should apply equally to the case of an isolated defect (missing hole) in a large array of perfect photonic crystal. Supercells are often used to calculate defect states in photonic crystals [9,10], although different authors choose to use different sizes. Although a 4 x 4 supercell is a reasonably sized cell for most calculations, in order to study certain modes with more accuracy larger supercell structures may be needed. In this thesis, the dielectric coefficients for the general case of an N x N supercell with a point defect have been derived. The most basic example of a supercell is the unit cell itself (see Figure 5). The situation for a supercell is shown in Figure 9. The lattice spacing is now given by Na. For example, a 4 x 4 supercell has an overall periodicity of 4a with a defect appearing in the lattice once per period (every four unit cells). The eigenvalue equation taking this into account is identical to the unit cell case (Equation (29)), except that the matrix now includes the effect of the N x N supercell. (Figure 9: Example of a 4 x 4 supercell. The dotted line represents the supercell itself, and the solid line represents the area covered by the integral in the dielectric Fourier coefficient computation.) Analogous to Equation (30), we obtain the corresponding expression for the supercell. The area factor that appears outside the integral in Equation (31) is now dependent on the size of the supercell. For example, in a 4 x 4 supercell the area of integration is sixteen times that of the unit cell. In addition, it is now more complicated to integrate in cylindrical coordinates, as many air holes are distributed at points other than the center of the coordinate system. This can be accounted for by making the following substitutions into Equation (31), where the substituted variable represents the position of a hole in the supercell. This creates constants which can be taken from the main integral and summed over all the positions of the photonic atoms as follows: The integral over the angular coordinate θ depends on the position of each photonic atom. As illustrated in Figure 10, depending on the supercell size N, several holes will be cut by the area of integration, and thus the integral over θ will not run from 0 to 2π in each case. Fortunately, the cut atoms can be combined due to their symmetry such that only one integration needs to be carried out. To obtain the full equation, it is helpful to visualize the supercell in three parts: two offset rectangular lattices (each with period a in x and √3·a in y) intermeshed to form the whole photonic atoms around the defect, and an even number of half atoms at the edges. The numbers and positions of the atoms change with the supercell size. One of the rectangular lattices has an even number of atoms on a side and the other has an odd number of atoms on a side. The odd mesh includes the central atom, which will be removed later. The following quantities give these numbers, less one, as a function of N: In Equations (37) and (38), the Floor function returns the largest integer less than or equal to the argument.
The crucial summation over these photonic atoms and Figure 10: N x N supercell area of integration. For a given N, the four corner atoms that the rectangle cuts will not be present; the whole atoms inside the rectangle must be included (except the central defect). the combined half-atoms is given by (The latter two large terms are the combined half-atoms, and the -1 removes the central atom.) When combined with the main equation, the final coefficients are obtained with complete generality of supercell size. When determining the accuracy of the supercell method for a given number of plane waves, it is sometimes useful to create a supercell with no defects and compare the results to those of a unit cell. In this case, a +2 term can be added to Equation (39) to give the proper summation. (This effectively reinserts the central atom and the four quarter-atoms at the corners of the region of integration.) The Bessel function simplifications have been carried out as in the unit cell case: An example of a supercell calculation was carried out for a defect in a 2D photonic crystal for the out-of-plane propagation case using a 4 x 4 supercell. Figure 11 shows the fundamental mode in this case. This is a plot of , or the time-averaged electric field. Because of the tight confinement of the mode around the defect, a larger supercell would not give significantly different results in this case. It does become important for higher-order mode calculations or calculations at lower frequencies, where the confinement is not so tight. In this case, the confinement of the energy within a one lattice unit radius is 98.19%. The plot was generated by substituting the eigenvectors back into the Fourier expansion for the electric field. Figure 11: Near field plot of lowest eigenmode () for a structure of d/a = 0.3, = 0, = 0, , = 12.25. 1089 plane waves were used in each direction. Figure 12 shows the in-plane defect mode calculated using a 4 x 4 supercell. In this case, the defect mode is superimposed on the TE-like modes of the unit cell. Folded bands from the supercell itself are not shown for clarity. Note that the defect band appears almost in the middle of the gap. This demonstrates that adding a defect introduces a localized confined state that is not present in the bulk photonic crystal. The dispersion curve of the mode is independent of frequency because of this localization. Figure 12: In-plane defect mode of a 4 x 4 supercell (with central defect) superimposed on TE-like modes of a unit cell for air holes embedded in a background of = 13.2 with d/a = 0.8. (Folded bands are not shown.) 6. Control of Accuracy As increasing numbers of planewaves are used, the eigenvalues approach the correct values asymptotically. For the 4 x 4 supercell, Figure 13 shows this behavior. For this case, i = 12 should give sufficient accuracy for band diagrams around the point of calculation shown. (This should correspond to i = 3 for a unit cell structure.) At high frequencies the accuracy for a given number of planewaves decreases, and the size of the supercell, if too small, will give incorrect defect mode eigenvalues because of coupling between adjacent supercells. Figure 13: Plots of the four lowest eigenfrequencies (bands 1, 3, 5, 7 in ascending order) as a function of the (i x i) submatrix size, which is related to the number of plane waves given by []. The structure is a 4 x 4 supercell with d/a = 0.20, = 12.25, and the position of calculation is = 0, = 0, . 
With increasing numbers of plane waves used, each of the four curves approaches its actual value asymptotically. Still, the question remains which expansion (E, D, or H) will give greater accuracy for a fixed matrix size. The relationship was analyzed in [11] and it was found that for air holes in a square lattice the expansion gave consistently better convergence results, even when large numbers of plane waves (1000) were used. It is evident that for structures presented here, the expansion should be used for this method. Other methods exist for solving the eigenequation. Instead of solving for all the eigenfrequencies at once, iterative techniques can be used [5,9,12] to find eigenvalue and eigenvector pairs one at a time. These methods seem to rely on ordinary Hermitian eigenvalue problems, so the H-field expansion is exclusively used. Computing time can be saved by use of the fast Fourier transform to carry out the operation in Fourier space, instead of real space as was done here [9]. These methods usually suffer from poor convergence times at high frequencies [13]. The plane wave method presented here can also be extended to calculate transmission spectra [1,8,14], as well as modal characteristics [15,16].

7. References

[1] M. Plihal and A. A. Maradudin, "Photonic band structure of two-dimensional systems: The triangular lattice," Phys. Rev. B, vol. 44, no. 16, pp. 8565-8571, 1991.
[2] P. R. Villeneuve and M. Piché, "Photonic band gaps in two-dimensional square and hexagonal lattices," Phys. Rev. B, vol. 46, no. 8, pp. 4969-4972, 1992.
[3] R. D. Meade, K. D. Brommer, A. M. Rappe, and J. D. Joannopoulos, "Existence of a photonic band gap in two dimensions," Appl. Phys. Lett., vol. 61, no. 4, pp. 495-497, 1992.
[4] K. M. Ho, C. T. Chan, and C. M. Soukoulis, "Existence of a photonic gap in periodic dielectric structures," Phys. Rev. Lett., vol. 65, no. 25, pp. 3152-3155, 1990.
[5] J. D. Joannopoulos, R. D. Meade, and J. N. Winn, Photonic Crystals: Molding the Flow of Light. Princeton, NJ: Princeton University Press, 1995.
[6] H. S. Sözüer and J. W. Haus, "Photonic bands: Convergence problems with the plane-wave method," Phys. Rev. B, vol. 45, no. 24, pp. 13962-13972, 1992.
[7] J. D. Shumpert, "Modeling of periodic dielectric structures (electromagnetic crystals)," Ph.D. dissertation, University of Michigan, 2001.
[8] K. Sakoda, Optical Properties of Photonic Crystals. Berlin, Germany: Springer, 2001.
[9] T. Søndergaard, "Spontaneous emission in two-dimensional photonic crystal microcavities," IEEE J. of Quantum Elect., vol. 36, no. 4, pp. 450-457, 2000.
[10] S. G. Johnson and J. D. Joannopoulos, Photonic Crystals: The Road from Theory to Practice. Boston, MA: Kluwer Academic Publishers, 2002.
[11] Z. Y. Yuan, J. W. Haus, and K. Sakoda, "Eigenmode symmetry for simple cubic lattices and the transmission spectra," Optics Express, vol. 3, no. 1, pp. 19-27, 1998.
[12] S. G. Johnson and J. D. Joannopoulos, "Block-iterative frequency-domain methods for Maxwell's equations in a planewave basis," Optics Express, vol. 8, no. 3, pp. 173-190, 2001.
[13] S. G. Johnson (private communication), June 25, 2002.
[14] A. Barra, D. Cassagne, and C. Jouanin, "Existence of two-dimensional absolute photonic band gaps in the visible," Appl. Phys. Lett., vol. 72, no. 6, pp. 627-629, 1998.
[15] N. Yokouchi, A. J. Danner, and K. D. Choquette, "Effective index model of 2D photonic crystal confined VCSELs," presented at LEOS VCSEL Summer Topical, Mont Tremblant, Quebec, 2002.
[16] J. C. Knight, T. A. Birks, R. F. Cregan, P.
Russell, J.-P. de Sandro, “Photonic crystals as optical fibres - physics and applications,” Optical Materials. vol. 11, pp. 143-151, 1999. 8. Acknowledgement October, 2007: Jack (Zetao) Ma of Shizuoka University, Japan kindly provided a correction to Equation 39. January, 2011: Wang Wei of Jilin University, China kindly provided corrections to Equations 18 and 19 and the text after Equation 15.
{"url":"http://www.ece.nus.edu.sg/stfpage/eleadj/planewave.htm","timestamp":"2014-04-18T20:57:06Z","content_type":null,"content_length":"261292","record_id":"<urn:uuid:8c13adf0-a2b0-43c5-835b-07abef12e838>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.FSTTCS.2011.176
URN: urn:nbn:de:0030-drops-33428
URL: http://drops.dagstuhl.de/opus/volltexte/2011/3342/

Adiga, Abhijin ; Chandran, L. Sunil ; Mathew, Rogers
Cubicity, Degeneracy, and Crossing Number

A k-box B=(R_1,R_2,...,R_k), where each R_i is a closed interval on the real line, is defined to be the Cartesian product R_1 X R_2 X ... X R_k. If each R_i is a unit length interval, we call B a k-cube. Boxicity of a graph G, denoted as box(G), is the minimum integer k such that G is an intersection graph of k-boxes. Similarly, the cubicity of G, denoted as cub(G), is the minimum integer k such that G is an intersection graph of k-cubes. It was shown in [L. Sunil Chandran, Mathew C. Francis, and Naveen Sivadasan. Representing graphs as the intersection of axis-parallel cubes. MCDES-2008, IISc Centenary Conference, available at CoRR, abs/cs/0607092, 2006.] that, for a graph G with maximum degree \Delta, cub(G) <= \lceil 4(\Delta +1) ln n\rceil. In this paper we show that, for a k-degenerate graph G, cub(G) <= (k+2) \lceil 2e log n \rceil. Since k is at most \Delta and can be much lower, this clearly is a stronger result. We also give an efficient deterministic algorithm that runs in O(n^2k) time to output a 8k(\lceil 2.42 log n\rceil + 1) dimensional cube representation for G. The crossing number of a graph G, denoted as CR(G), is the minimum number of crossing pairs of edges, over all drawings of G in the plane. An important consequence of the above result is that if the crossing number of a graph G is t, then box(G) is O(t^{1/4}{\lceil log t\rceil}^{3/4}). This bound is tight up to a factor of O((log t)^{3/4}). Let (P,\leq) be a partially ordered set and let G_{P} denote its underlying comparability graph. Let dim(P) denote the poset dimension of P. Another interesting consequence of our result is to show that dim(P) \leq 2(k+2) \lceil 2e \log n \rceil, where k denotes the degeneracy of G_{P}. Also, we get a deterministic algorithm that runs in O(n^2k) time to construct a 16k(\lceil 2.42 log n\rceil + 1) sized realizer for P. As far as we know, though very good upper bounds exist for poset dimension in terms of maximum degree of its underlying comparability graph, no upper bounds in terms of the degeneracy of the underlying comparability graph are seen in the literature.
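Because the bounds above are stated in terms of the degeneracy k of the graph, a small illustrative routine for computing degeneracy (repeatedly deleting a minimum-degree vertex) is sketched below; the adjacency-list format and function name are choices made for this illustration and are not taken from the paper.

def degeneracy(adj):
    # Degeneracy of an undirected graph given as {vertex: set(neighbours)}.
    # A graph is k-degenerate iff every subgraph has a vertex of degree <= k.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    k = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # a minimum-degree vertex
        k = max(k, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return k

# Example: a 4-cycle is 2-degenerate, a path is 1-degenerate.
print(degeneracy({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))   # -> 2
print(degeneracy({0: {1}, 1: {0, 2}, 2: {1}}))                    # -> 1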
BibTeX - Entry

author =    {Abhijin Adiga and L. Sunil Chandran and Rogers Mathew},
title =     {{Cubicity, Degeneracy, and Crossing Number}},
booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
pages =     {176--190},
series =    {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN =      {978-3-939897-34-7},
ISSN =      {1868-8969},
year =      {2011},
volume =    {13},
editor =    {Supratik Chakraborty and Amit Kumar},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address =   {Dagstuhl, Germany},
URL =       {http://drops.dagstuhl.de/opus/volltexte/2011/3342},
URN =       {urn:nbn:de:0030-drops-33428},
doi =       {http://dx.doi.org/10.4230/LIPIcs.FSTTCS.2011.176},
annote =    {Keywords: Degeneracy, Cubicity, Boxicity, Crossing Number, Interval Graph, Intersection Graph, Poset Dimension, Comparability Graph}

Keywords: Degeneracy, Cubicity, Boxicity, Crossing Number, Interval Graph, Intersection Graph, Poset Dimension, Comparability Graph
Seminar: IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)
Issue Date: 2011
Date of publication: 01.12.2011
{"url":"http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=3342","timestamp":"2014-04-18T03:00:04Z","content_type":null,"content_length":"7039","record_id":"<urn:uuid:5cc43930-8e3d-46b4-bb1e-b47608cac482>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2012
[00372] [Date Index] [Thread Index] [Author Index]

Re: How to check whether an infinite set is closed under addition?

• To: mathgroup at smc.vnet.net
• Subject: [mg124286] Re: How to check whether an infinite set is closed under addition?
• From: David Yen <tw_yen at hotmail.com>
• Date: Mon, 16 Jan 2012 17:04:52 -0500 (EST)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com

You have 3 choices:

1) You check all possible cases, which are countably infinite. Since you don't have an infinite amount of time, you have to either use the statistical approach involving a finite but sufficient sample size or the standard proof technique involving induction.

2) You use the axioms of universal algebra, but you are stuck if people ask you why and how your objects satisfy those axioms. Well, you can simply say, "Well, my objects just meet the requirements. Have faith, my friend!"

3) You define all your positive integers as sets and all your additions as set unions, and prove closure trivially in ZFC. Else, you can define all positive integers as binary sequences and addition as a CPU instruction to show closure. However, set theorists may ask you to prove that your definition of positive integers is equivalent to their definition of natural numbers. Set theorists don't add, so you don't have to worry about that one.

Whenever you deal with infinities, you must count on faith as a result of understanding. Faith without understanding is blind, but without faith (in axioms, implicit or explicit) you cannot prove anything.
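A tiny sketch of the finite-sample check mentioned in option 1 is given below (written in Python rather than Mathematica for illustration); it is of course only suggestive, never a proof, and the universe, sample size, and seed are arbitrary choices.

import random

def closure_spot_check(universe, trials=10000, seed=0):
    # Randomly sample pairs and test whether their sum stays in the set.
    rng = random.Random(seed)
    elems = list(universe)
    return all(rng.choice(elems) + rng.choice(elems) in universe
               for _ in range(trials))

# A finite truncation of the positive integers fails the check (sums overflow the cutoff),
# which is exactly why the statistical approach needs a carefully chosen sample, or an
# inductive argument, to say anything about the infinite set itself.
print(closure_spot_check(set(range(1, 1001))))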
{"url":"http://forums.wolfram.com/mathgroup/archive/2012/Jan/msg00372.html","timestamp":"2014-04-18T00:22:06Z","content_type":null,"content_length":"26325","record_id":"<urn:uuid:0192665a-479a-49f5-905f-d85274ba40d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/updates/4f8886b3e4b02251ecc97157","timestamp":"2014-04-19T10:07:54Z","content_type":null,"content_length":"49063","record_id":"<urn:uuid:57aaebd0-6f9d-4d74-ade7-6cc53870af12>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

Recurrence Plot of Mathematical Functions and Constants

A recurrence plot illustrates the recurrence of states in a phase space where all the possible states of a system can be seen. Recurrence plots can be used to view and study mathematical functions such as sine and sinc, or mathematical constants such as π. In the case of a function, the values used are a finite sequence of samples of the function, whose length is given by the size setting. In the case of a number, the values used are the digits of its decimal expansion taken to that many places. The expression plotted is Θ(ε − |s_i − s_j|), where Θ is the Heaviside step function, s is the sequence, and ε is a kind of tolerance. The point view is a graphical representation of the resulting matrix, which is binary because of the unit step function. In the density view, the points are grouped in clusters to give a smoother representation of the matrix, and the matrix rows are rotated (vertical shift). The mesh draws lines that highlight the white spaces for the point view and gives reference rulers for the density view.
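A minimal sketch of the recurrence matrix described above, written in Python for illustration (the Demonstration itself is Mathematica); the sampled function, tolerance, and grid are arbitrary choices.

import numpy as np

def recurrence_matrix(values, tolerance):
    # Binary recurrence matrix: R[i, j] = 1 when |x_i - x_j| <= tolerance, else 0.
    x = np.asarray(values, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= tolerance).astype(int)

# Recurrence data for sin over a few periods, and for some leading digits of pi.
t = np.linspace(0.0, 6.0 * np.pi, 200)
R_sin = recurrence_matrix(np.sin(t), tolerance=0.1)
R_pi = recurrence_matrix([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3], tolerance=0.5)
print(R_sin.shape, int(R_pi.sum()))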
{"url":"http://demonstrations.wolfram.com/RecurrencePlotOfMathematicalFunctionsAndConstants/","timestamp":"2014-04-20T05:46:53Z","content_type":null,"content_length":"47105","record_id":"<urn:uuid:9b094262-4524-462e-8529-f64592ef9c14>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Neural Networks weights and bias help
Date: Aug 13, 2010 10:47 AM
Author: hasan batuk
Subject: Neural Networks weights and bias help

Hi, I am trying to learn the NN toolbox. I tried to create a network that multiplies the input number by 3 and gives it as output. For simplicity, I made the transfer function linear as well. It works well, but I then tried to reach the same result by using the bias and weight values.

x -> input, y -> output
a = [ (w1*x + b1) * w2 ] + b2

but it ends up the same as the input value. I am really confused about it. It looks very simple but I couldn't find what I am missing. Thanks for your time. The code is below:

P = [1:4:200]; % training set
T = P*3;       % target set
net.layers{1}.transferFcn = 'purelin'; % making transfer func. linear
y = sim(net,101) % giving a number, say 101, as an input to the system and taking 303 as output as expected.
% trying to reach the same result by using weight and bias values. it ends up with 101, not 303.
a1 = (net.iw{1,1}*101)+net.b{1}; % output of first layer
a2 = (net.lw{2,1}*a1)+net.b{2}   % output of second layer
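For reference, the hand calculation the poster describes can be written out as below (a plain Python sketch, not MATLAB, with made-up weights rather than the poster's trained values; the network creation and training steps are also not shown in the post). One possibility worth checking, stated here only as a possibility: some versions of the toolbox wrap the layers in input/output normalization (mapminmax-style processing), in which case the raw weights act on scaled values and the bare product w2*(w1*x+b1)+b2 will not match sim().

def forward(x, w1, b1, w2, b2):
    # Manual forward pass for a 1-input, 1-hidden-neuron, 1-output purelin network.
    a1 = w1 * x + b1          # hidden layer (linear transfer function)
    return w2 * a1 + b2       # output layer (linear transfer function)

def forward_with_scaling(x, w1, b1, w2, b2, x_min, x_max, t_min, t_max):
    # Same weights, but with mapminmax-style scaling of the input to [-1, 1]
    # and rescaling of the output back to the target range.
    xs = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    ys = forward(xs, w1, b1, w2, b2)
    return (ys + 1.0) * (t_max - t_min) / 2.0 + t_min

# Hypothetical weights chosen so that y = 3*x; substitute net.iw{1,1}, net.lw{2,1}, net.b{...}.
print(forward(101, w1=1.0, b1=0.0, w2=3.0, b2=0.0))   # 303.0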
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7151927","timestamp":"2014-04-19T23:15:23Z","content_type":null,"content_length":"2025","record_id":"<urn:uuid:2af72fe8-a202-4382-b37b-3e64620b2fa4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Challenging Problem!!! (Equivalent Metrics) July 10th 2009, 01:56 PM Challenging Problem!!! (Equivalent Metrics) I need to show that if (X,p) is a non-compact metric space, then there exists a metric p* equivalent to p such that (X,p*) is not complete. I greatly appreciate your help! July 10th 2009, 08:38 PM What is your definition of equivalent metrics? Mine is that they generate the same topology, but with that take an infinite set $X$ with the discrete metric then for any equivalent metric we have that for every $x \in X$ there exists $a_x \in \mathbb{R} ^+$ such that $B_X(x,a_x):= \{ y \in X : d(x,y)<a_x \} =\{ x \}$ and so every Cauchy sequence in any equivalent metric is eventually constant and so it converges, therefore X is complete with any metric equivalent to the discrete metric. July 10th 2009, 09:15 PM You are right about the definition of equivalent metrics. But I'm trying to prove the statement for any metric p, not just the discrete metric. July 10th 2009, 09:42 PM Actually, I think what I did was show a counterexample: Any infinite discrete space is not compact, but with any equivalent metric is complete. July 11th 2009, 03:40 AM What is your definition of equivalent metrics? Mine is that they generate the same topology, but with that take an infinite set $X$ with the discrete metric then for any equivalent metric we have that for every $x \in X$ there exists $a_x \in \mathbb{R} ^+$ such that $B_X(x,a_x):= \{ y \in X : d(x,y)<a_x \} =\{ x \}$and so every Cauchy sequence in any equivalent metric is eventually constant and so it converges, therefore X is complete with any metric equivalent to the discrete metric. I can't agree with your counter-example; I think you should try to explicitate the part in red to see why it fails. Consider the metric on $\mathbb{N}$ defined by $d(n,n)=0$, $d(n,n+1)=\frac{1}{2^n}$ and, for $m\leq n$, $d(m,n)=\sum_{k=m}^{n-1} d(k,k+1)$. (This example corresponds to considering the subset $\ {\frac{1}{2^n}|n\geq -1\}$ with the usual topology of $\mathbb{R}$). Since $B(n,\frac{1}{2^n})=\{n\}$ (open ball), this defines the discrete topology. Yet you can see that the sequence $(n)_{n\ geq 0}$ is a Cauchy sequence that doesn't converge. July 11th 2009, 06:58 AM Isn't completeness a property of metrics rather than of topologies? For example, take $\mathbb R$ with the usual metric $d(x,y)=|x-y|$. It is readily shown that $d$ is equivalent to the metric $\bar d$ given by $\bar d(x,y)=\left|\frac x{1+|x|}-\frac y{1+|y|}\right|$. Now $\mathbb R$ is $d$-complete but not $\bar d$-complete since $(n)_{n\in\mathbb N}$ is a $\bar d$-Cauchy sequence that does not converge to anything in $(\mathbb R,\bar d)$. July 11th 2009, 07:47 AM Sure, with the exception of compact spaces. Compacity is a topological property, yet it implies completeness for any compatible metric (if there is one). The question is to prove that compacity is the only exception. This is an interesting problem, I think. July 11th 2009, 09:09 AM Laurent's example is exactly the one I've thought of. Now we need to extend it to any compact metric spaces. The idea is to assume WLOG that (X,p) is complete but not totally bounded. So we can take an unbounded sequence in (X,p), and make it Cauchy under p* by somehow contracting the distances. Since it was unbounded under p, intuitively it will be non-convergent under p*. Or any other ideas? July 14th 2009, 11:47 AM I can't agree with your counter-example; I think you should try to explicitate the part in red to see why it fails. 
Consider the metric on $\mathbb{N}$ defined by $d(n,n)=0$, $d(n,n+1)=\frac{1}{2^n}$ and, for $m\leq n$, $d(m,n)=\sum_{k=m}^{n-1} d(k,k+1)$. (This example corresponds to considering the subset $\ {\frac{1}{2^n}|n\geq -1\}$ with the usual topology of $\mathbb{R}$). Since $B(n,\frac{1}{2^n})=\{n\}$ (open ball), this defines the discrete topology. Yet you can see that the sequence $(n)_{n\ geq 0}$ is a Cauchy sequence that doesn't converge. Well, I was thinking of using something like Alexandroff's compactification by a point. Like when you identify $\mathbb{R} ^n$ with $A:= \mathbb{S} ^n - \{ e_n \}$ where $\mathbb{S} ^n := \{x \in \mathbb{R} ^{n+1} : \Vert x \Vert =1 \}$ and any sequence converging to $\infty$ (or $e_n$) is a Cauchy sequence in $A$ that doesn't converge in $\mathbb{R} ^n$. So far no luck in generalizing though. Is there a natural metric on your compactification if it is made over a metric space? July 14th 2009, 11:47 PM Well, I was thinking of using something like Alexandroff's compactification by a point. Like when you identify $\mathbb{R} ^n$ with $A:= \mathbb{S} ^n - \{ e_n \}$ where $\mathbb{S} ^n := \{x \in \mathbb{R} ^{n+1} : \Vert x \Vert =1 \}$ and any sequence converging to $\infty$ (or $e_n$) is a Cauchy sequence in $A$ that doesn't converge in $\mathbb{R} ^n$. So far no luck in generalizing though. Is there a natural metric on your compactification if it is made over a metric space? I don't know of any straightforward explicit construction of a metric on the one-point compactification of a metric space, but you could use a metrization theorem to show that one must exist. The restriction of this metric to the original space would then provide a solution to the problem. July 15th 2009, 03:14 AM I don't know of any straightforward explicit construction of a metric on the one-point compactification of a metric space, but you could use a metrization theorem to show that one must exist. The restriction of this metric to the original space would then provide a solution to the problem. This construction based on the one-point compactification will indeed work, but under some conditions. For instance the one-point compactification is not always Hausdorff (the base space needs to be locally compact for that) hence not metrizable. And Urysohn's theorem applies to second-countable spaces. By browsing through the Wikipedia, it seems that Stone-Cech compactification (a many-points compactification) gives a Hausdorff compact space, but Urysohn's theorem probably doesn't apply... I haven't other clues anyway. July 15th 2009, 05:00 AM This construction based on the one-point compactification will indeed work, but under some conditions. For instance the one-point compactification is not always Hausdorff (the base space needs to be locally compact for that) hence not metrizable. And Urysohn's theorem applies to second-countable spaces. That's true, that idea will only work for a locally compact space. The Stone–Cech compactification (of a noncompact space) is enormous, and as far as I know it is never metrisable. 
July 15th 2009, 09:34 AM
Okay, but the topology we induce on $X \cup \{ p \}$ is not necessarily that of the usual compactification: for example, when we do the stereographic projection of $\mathbb{S}^n$ onto $\mathbb{R}^n$, the topology we give to $\mathbb{S}^n$ is $\tau= \{ U \subset X : U$ is open in $X \} \cup \{ U : p \in U$ and $U=(X \cup \{ p \})-V$, where $V \subset X$ is compact$\}$, with $X= \mathbb{R}^n$.

January 23rd 2010, 10:23 AM
Two metrics $d_1$ and $d_2$ on the same $X$ are equivalent if they produce the same topologies, i.e. ${\tau}_{d_1}={\tau}_{d_2}$. In the general case, if the statement is true, recall that if $(X,d)$ isn't compact, it contains a sequence ${\{x_n\}}_{n=1}^{\infty}$ with no convergent subsequence, but this sequence need not be Cauchy. So the question here is how to use this to construct an equivalent non-complete metric in terms of these sequences.
{"url":"http://mathhelpforum.com/differential-geometry/94835-challenging-problem-equivalent-metrics-print.html","timestamp":"2014-04-21T13:16:17Z","content_type":null,"content_length":"29426","record_id":"<urn:uuid:3f742fd8-820f-4860-91db-5c154c3bb9b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
San Luis Rey Math Tutor Find a San Luis Rey Math Tutor ...I also taught organic chemistry as a chemistry lecturer at UT Austin 2010 - 2011. As a graduate student, I worked extensively as a teaching assistant at UT for various chemistry classes including general chemistry, organic chemistry and labs. For the past 6 years I also tutored as often as my schedule allowed for mostly chemistry subjects. 9 Subjects: including algebra 1, prealgebra, English, reading ...And everybody has to take it. The good news, at least for you, is that that means qualified tutors, like myself, see Geometry all the time, and know how to tutor it. I usually have two or more Geometry students at any given time, and I've been tutoring since I graduated in 2009. 12 Subjects: including algebra 1, algebra 2, calculus, geometry ...This is achieved through critical questioning and thinking strategies as I help my students answer their own questions by analyzing the problems logically and/or looking at them from another angle. Sometimes I demonstrate an important point by giving my students an example and asking them to app... 24 Subjects: including trigonometry, writing, ESL/ESOL, literature ...Susan also works with the student to teach them proper study skills and organizational skills. Parents and students alike are delighted and very satisfied with Susan's program and tutoring.I am an excellent Algebra tutor. My students are successful and increase their grades. 20 Subjects: including algebra 1, algebra 2, vocabulary, grammar ...I have taught prealgebra in a classroom and thoroughly enjoyed it. This is the point when the kids can really start sinking their teeth into some pretty complex math problems and seeing how you can use their skills to do some important things. They also get introduced to the very basic concepts of algebra and that is where they can get a bit frustrated. 11 Subjects: including algebra 1, algebra 2, calculus, geometry Nearby Cities With Math Tutor Barona Rancheria, CA Math Tutors Beach Center, CA Math Tutors Belmont Shore, CA Math Tutors Espinoza, CO Math Tutors Gilman Hot Springs, CA Math Tutors Lakeview, CA Math Tutors Naples, CA Math Tutors Oak Glen, CA Math Tutors Old Town, SD Math Tutors Pinyon Pines, CA Math Tutors Portola Hills, CA Math Tutors Romoland, CA Math Tutors Sky Valley, CA Math Tutors Smiley Heights, CA Math Tutors Villas Del Parque, PR Math Tutors
{"url":"http://www.purplemath.com/san_luis_rey_ca_math_tutors.php","timestamp":"2014-04-17T01:09:23Z","content_type":null,"content_length":"24127","record_id":"<urn:uuid:7f061ca7-d4c1-461b-9288-ff46d5a91552>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
A few questions/suggestions about MathML 2.0

From: Jérôme Euzenat <Jerome.Euzenat@inrialpes.fr>
Date: Thu, 24 Feb 2000 11:25:01 +0200
Message-Id: <v04210103b4d9807b2648@[194.199.22.174]>
To: www-math@w3.org

here are some comments from a MathML user (for non-rendering purposes) on the last WD. If you find them not relevant, please just ignore them. I number them with regard to the WD section numbers:

4.4.6.1) I would have liked to have in the examples for "set" that of the emptyset: <set/> ?

4.4.6.12) the example of the (awaited) card operator is a bit odd because: 1) it asserts the condition of veracity of the assertion ("where A is a set with five elements"). 2) unlike many (but not all) other examples, the "Default rendering" does not render the "Example".

4.4.6.12) card is defined for sets, so why not length for lists. OpenMath had retained a common "size" element.

4.4.6.1) I feel that the "set" and "list" constructors can only have ONE "bvar" construction (because it identifies the elements of the set, that is not the case for the other constructs). It might be said in the WD (I do not see this - or the contrary - mentioned neither in 4.4.6.1(set) nor in the 4.4.5.6/4.2.3.2/4.2.1.8 (bvar) nor in the 4.2.5(condition), nor in the 4.2.2.1(tokens discussing sets)). If this is not the case, what is the meaning of:

--> { x\in A, y\in B | x-y=x/y } ?

This example arose while trying to define cartesian product by: A*B = { <x,y> | x\in A /\ y\in B } with the binding of two variables x and y.

Jérôme Euzenat / /\ _/ _ _ _ _ _ INRIA Rhône-Alpes, /_) | ` / ) | \ \ /_) (___/___(_/_/ / /_(_________________ 655, avenue de l'Europe / 38330 Montbonnot St Martin,/ Jerome.Euzenat@inrialpes.fr France____________________/ http://www.inrialpes.fr/exmo

Received on Thursday, 24 February 2000 05:24:34 GMT
This archive was generated by hypermail 2.2.0+W3C-0.50 : Saturday, 20 February 2010 06:12:49 GMT
{"url":"http://lists.w3.org/Archives/Public/www-math/2000Feb/0010.html","timestamp":"2014-04-20T13:39:01Z","content_type":null,"content_length":"10598","record_id":"<urn:uuid:65be807b-e1ef-49d8-a636-70ecda5d5816>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Correction to problem on order types

Jeremy Clark jeremy.clark at wanadoo.fr
Sun Mar 6 16:21:07 EST 2005

I don't know if this is the error Joe had in mind, but it did occur to me after I submitted an answer that there are in fact many more order types satisfying this condition. Take a set S \subset \omega_1. Then define A_x to be Q (order type of rationals) if x \in S and 1+Q otherwise. Let A_S = sum A_x for x \in \omega_1. I *think* one can show something like: if S \setminus T is stationary in \omega_1 (meets every closed unbounded set in \omega_1) then A_S cannot be order isomorphic to A_T. It is possible to define 2^\omega_1 sets such that any two of them have the property that their setwise difference is stationary. So it follows that there are 2^\omega_1 sets satisfying Joe's condition.

Jeremy Clark

On Mar 6, 2005, at 7:05 am, Robert M. Solovay wrote:

> Joe,
> Can you elaborate what the error is? I got eleven as well. The
> empty set; the one point set and then 3 times 3 where the possibilities
> are open, closed, or "the long line" for each end.
> --Bob Solovay
> On Sat, 5 Mar 2005 JoeShipman at aol.com wrote:
>> In the statement of the problem, I should require that every open
>> interval (a,b) with a<b is isomorphic to the real numbers, not
>> rational numbers.
>> That was the form I had originally come up with the problem, but then
>> I got too clever. Dave Marker claims that changing "reals" to
>> "rationals" allows more solutions than I had contemplated, and
>> pointed out why my contemplated proof for "rationals" fails.
>> -- JS
>> _______________________________________________
>> FOM mailing list
>> FOM at cs.nyu.edu
>> http://www.cs.nyu.edu/mailman/listinfo/fom
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2005-March/008831.html","timestamp":"2014-04-19T07:01:43Z","content_type":null,"content_length":"4940","record_id":"<urn:uuid:6af5c9d4-0290-4cd6-8d56-98fa3a25b149>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application
Patent application title: DA CONVERTER
IPC8 Class: AH03M178FI
USPC Class:
Class name:
Publication date: 2012-03-01
Patent application number: 20120050085

A low power consumption DA converter includes a segment type DA converter and an R-2R resistance ladder DA converter. The segment type DA converter is coupled to a power source voltage VDD and outputs a current signal changing in a stepwise manner according to inputted upper bits D[7 to 5]. The R-2R resistance ladder DA converter is coupled to the segment type DA converter in series between the power source voltage VDD and a ground voltage GND, and outputs an output voltage Vout changing in a stepwise manner. The R-2R resistance ladder DA converter changes the output voltage Vout by raising or lowering a reference voltage Vref according to the lower bits D[4 to 0] and the current signal from the segment type DA converter.

A DA converter comprising: a first DA conversion unit that is coupled to a first voltage source and outputs a current signal changing in a stepwise manner according to an inputted first digital signal; and a second DA conversion unit that is coupled to the first DA conversion unit in series between a second voltage source different from the first voltage source and the first voltage source and outputs an output voltage changing in a stepwise manner, wherein the second DA conversion unit changes the output voltage by raising or lowering a reference voltage supplied from a reference voltage source coupled to the second DA conversion unit according to an inputted second digital signal and the current signal.

The DA converter according to claim 1, which changes the output voltage in 2^(m+n) steps according to the first digital signal of m bits (m is an integer of 2 or more) and the second digital signal of n bits (n is an integer of 2 or more).

The DA converter according to claim 2, wherein the first DA conversion unit includes (2^m-1) first constant current sources and changes the current signal by controlling each of the (2^m-1) first constant current sources according to (2^m-1) decode signals generated by decoding the first digital signal of m bits.

The DA converter according to claim 2, wherein the second DA conversion unit includes: a resistance ladder circuit that is coupled to the first DA conversion unit via a first node and outputs the output voltage from an output terminal; and n second constant current sources that are coupled between the resistance ladder circuit and the second voltage source according to the second digital signal of n bits, and wherein the second DA conversion unit changes the output voltage by changing combination of the second constant current sources coupled to the resistance ladder circuit.

The DA converter according to claim 4, wherein the second constant current sources that are not coupled to the resistance ladder circuit among the n second constant current sources are coupled between the first node and the second voltage source, and wherein each of the n second constant current sources constantly outputs current.
The DA converter according to claim 5, wherein the second DA conversion unit further includes a first resistance coupled to the reference voltage source, and wherein the second DA conversion unit discharges a differential current between the output current from the n second constant current sources and the current signal to the reference voltage source via the first resistance, or the differential current is supplied to the second DA conversion unit from the reference voltage source via the first resistance. The DA converter according to claim 4, wherein the first constant current sources output current having the same value as that of current outputted from the second constant current sources. The DA converter according to claim 6, wherein the first constant current sources output current having a value different from a value of current outputted from the second constant current sources, and wherein a resistance value of the first resistance is set so that a value obtained by multiplying a value of current outputted from the first constant current sources by the resistance value of the first resistance is constant. The DA converter according to claim 4, wherein the resistance ladder circuit includes n second resistances coupled in series between the first node and the output terminal, and (n-1) resistances coupled between the first node and output terminal side terminals of the second resistances other than the second resistance whose one end is coupled to the first node, and wherein the n second constant current sources are respectively coupled to the output terminal side terminals of the n second resistances or coupled to the first node. The DA converter according to claim 9, wherein the (n-1) resistances includes a third resistance coupled between the output terminal and the first node and (n-2) fourth resistances other than the third resistance. The DA converter according to claim 10, wherein the resistance value of the third resistance is the same as the resistance value of the fourth resistances and two times the resistance value of the second resistances. The DA converter according to claim 10, wherein the resistance value of the third resistance is the same as the resistance value of the second resistances, and wherein the resistance value of the fourth resistances is two times the resistance value of the third resistance. The DA converter according to claim 11, wherein the resistance value of the first resistance is two times the combined resistance value between the first node and the output terminal of the resistance ladder circuit. The DA converter according to claim 4, wherein the first constant current sources supply current to the reference voltage source side terminal of the first node or the first resistance. The DA converter according to claim 2, wherein the first DA conversion unit includes ( sup.m-1-p (p is an integer of 1 or more) first constant current sources and changes the current signal by controlling each of the ( sup.m-1-p) first constant current sources according to ( sup.m-1-p) decode signals among ( sup.m-1) decode signals generated by decoding the first digital signal of m bits. 
The DA converter according to claim 15, wherein the second DA conversion unit includes: a resistance ladder circuit that is coupled to the first DA conversion unit via a first node and outputs the output voltage from an output terminal; n second constant current sources that are coupled between the resistance ladder circuit and the second voltage source according to the second digital signal of n bits; and p third constant current sources that are coupled between the resistance ladder circuit and the second voltage source according to the p decode signals other than the ( sup.m-1-p) decode signals, and wherein the second DA conversion unit changes the output voltage by changing combination of the second and the third constant current sources coupled to the resistance ladder circuit. The DA converter according to claim 15, wherein p is an integer of 1 or more satisfying p=( 2. 18. The DA converter according to claim 2, wherein the first digital signal of m bits is a signal including the upper m bits of a digital signal of (m+n) bits, and wherein the second digital signal of n bits is a signal including the lower n bits of the digital signal of (m+n) bits. A DA converter comprising: a first constant current cell unit including a plurality of first constant current sources; a second constant current cell unit which is coupled to the first constant current cell unit in series via a first node and includes a plurality of second constant current sources, the number of which is the same as that of the first constant current sources; and a resistance circuit which is coupled between the first node and an output terminal and outputs an output voltage changing in a stepwise manner according to a flowing current and a voltage of the first node from the output terminal, wherein, when the value of the most significant bit of an inputted digital signal is a first value, the first constant current cell unit changes an output current in a stepwise manner by controlling the number of the first constant current sources that output current according to q (q is an integer of 1 or more) bits excluding the most significant bit included in the digital signal, and the second constant current cell unit couples the second constant current sources to the first node either directly or via the resistance circuit according to r (r is an integer of 2 or more) bits excluding the most significant bit and the q bits included in the digital signal, wherein, when the value of the most significant bit is a second value different from the first value, the second constant current cell unit changes an output current in a stepwise manner by controlling the number of the second constant current sources that output current according to the q bits, and the first constant current cell unit couples the first constant current sources to the first node either directly or via the resistance circuit according to the r bits included in the digital signal, wherein the voltage of the first node changes in a stepwise manner according to the change of the output current of the first constant current cell unit or the output current of the second constant current cell unit, and wherein the resistance circuit changes the output voltage by raising or lowering the voltage of the first node according to a combination of the first constant current sources or the second constant current sources coupled to the resistance circuit. The disclosure of Japanese Patent Application No. 2010-192647 filed on Aug. 
30, 2010 including the specification, drawings and abstract is incorporated herein by reference in its entirety. BACKGROUND [0002] The present invention relates to a DA converter, and in particular to a low power consumption DA converter. As high performance and low power consumption of large scale integration (LSI) are required, high performance (reduction in the amount of glitches) and low current consumption of DA converter are increasingly required. Generally, a current summing DA converter (for example, Japanese Unexamined Patent Application Publication No. Sho 62 (1987)-5729) is used as a DA converter in which the amount of glitches is reduced. However, an ordinary current summing DA converter has a problem that the current consumption is large. Therefore, a DA converter that can reduce current consumption is desired to be developed. An example of the current summing DA converter as mentioned above will be described. FIG. 10 is a circuit block diagram showing a configuration of an eclectic DA converter 900 in which a current summing DA converter is mounted. As shown in FIG. 10 , the eclectic DA converter 900 includes a driver unit 91, a segment decoder unit 92, an R-2R driver unit 93, a segment type (current summing) DA converter 94, and an R-2R resistance ladder DA converter 95. The eclectic DA converter 900 processes the upper m bits of inputted (m+n) bits (m and n are integers of 2 or more) by the segment type (current summing) DA converter 94 and processes the lower n bits by the R-2R resistance ladder DA converter 95. The upper m bits are inputted into the segment decoder unit 92 via the driver unit 91. The lower n bits are inputted into the R-2R resistance ladder DA converter 95 via the R-2R driver unit 93. The segment decoder unit 92 has (2 -1) decoders (not shown in FIG. 10 ). Thereby, a digital signal of the upper m bits inputted into the segment decoder unit 92 is decoded into a signal of (2 -1) bits. The segment type DA converter 94A has (2 -1) current sources and current switches. The (2 -1) current sources (current value I ) and current switches are switched to an off state or an on state according to the signal of (2 -1) bits outputted from the segment decoder unit 92. Thereby, the digital signal of the upper m bits is converted into an analog amount in a range from 0 [V] to -(2 ×(2/3)×R [V]. The R-2R resistance ladder DA converter 95 has n current sources (current value I ) and current switches, and a resistance ladder. The resistance ladder includes resistances R (resistance value is R) and resistances 2R (resistance value is 2R). Each of the n current sources and current switches is switched to an off state or an on state according to one bit of a lower n-bit signal. Thereby, the lower n-bit signal is converted into an analog amount in a range from 0 [V] to - ×(2/3)×R [V] by the resistance ladder. An analog output corresponding to a digital signal of (m+n) bits inputted into the eclectic DA converter 900 has an analog amount obtained by summing up the analog amounts generated by the segment type DA converter 94 and the R-2R resistance ladder DA converter 95. SUMMARY [0008] However, inventors found that the aforementioned DA convertor causes a problem as described below. The aforementioned eclectic DA convertor 900 sums up currents flowing in the segment type DA converter 94 and the R-2R resistance ladder DA converter 95, and converts the summed-up result into a voltage. 
Therefore, an eclectic DA convertor such as the eclectic DA convertor 900 causes a problem that current consumption increases. A DA converter according to an aspect of the present invention includes: a first DA conversion unit that is coupled to a first voltage source and outputs a current signal changing in a stepwise manner according to an inputted first digital signal; and a second DA conversion unit that is coupled to the first DA conversion unit in series between a second voltage source different from the first voltage source and the first voltage source and outputs a current signal changing in a stepwise manner, in which the second DA conversion unit changes the output voltage by raising or lowering a reference voltage supplied from a reference voltage source coupled to the second DA conversion unit according to an inputted second digital signal and the current signal. In this DA converter, the first DA conversion unit and the second DA conversion unit are coupled in series. Therefore, it is possible to reduce current flowing in the DA converter compared with a case in which the first DA conversion unit and the second DA conversion unit are coupled in parallel. A DA converter according to another aspect of the present invention includes: a first constant current cell unit including multiple first constant current sources; a second constant current cell unit which is coupled to the first constant current cell unit in series via a first node and includes multiple second constant current sources, the number of which is the same as that of the first constant current sources; and a resistance circuit which is coupled between the first node and an output terminal and outputs an output voltage changing in a stepwise manner according to a flowing current and a voltage of the first node from the output terminal, in which, when the value of the most significant bit of an inputted digital signal is a first value, the first constant current cell unit changes an output current in a stepwise manner by controlling the number of the first constant current sources that output current according to q (q is an integer of 1 or more) bits excluding the most significant bit included in the digital signal, and the second constant current cell unit couples the second constant current sources to the first node either directly or via the resistance circuit according to r (r is an integer of 2 or more) bits excluding the most significant bit and the q bits included in the digital signal, when the value of the most significant bit is a second value different from the first value, the second constant current cell unit changes the output current in a stepwise manner by controlling the number of the second constant current sources that output current according to the q bits, and the first constant current cell unit couples the first constant current sources to the first node either directly or via the resistance circuit according to the r bits included in the digital signal, the voltage of the first node changes in a stepwise manner according to the change of the output current of the first constant current cell unit or the output current of the second constant current cell unit, and the resistance circuit changes the output voltage by raising or lowering the voltage of the first node according to a combination of the first constant current sources or the second constant current sources coupled to the resistance circuit. 
In this DA converter, the first constant current cell unit and the second constant current cell unit are coupled in series. Therefore, it is possible to reduce current flowing in the DA converter compared with a case in which the first constant current cell unit and the second constant current cell unit are coupled in parallel. According to the aspects of the present configuration, it is possible to provide a low power consumption DA converter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] FIG. 1 is a circuit block diagram showing a configuration of a DA converter according to a first embodiment; [0021] FIG. 2 is a circuit block diagram showing a configuration of a DA converter according to the first embodiment; [0022] FIG. 3 is a graph showing an output voltage Vout of the DA converter according to the first embodiment; [0023] FIG. 4 is a circuit block diagram showing a configuration of a DA converter according to a second embodiment; [0024] FIG. 5 is a circuit block diagram showing a configuration of a DA converter according to a third embodiment; [0025] FIG. 6 is a graph showing an output voltage Vout of a DA converter according to the third embodiment; [0026] FIG. 7 is a circuit block diagram showing a configuration of a DA converter according to a fourth embodiment; [0027] FIG. 8 is an operation table showing an operation of the DA converter according to the fourth embodiment; [0028] FIG. 9 is a graph showing an output voltage Vout of the DA converter according to the fourth embodiment; and [0029] FIG. 10 is a circuit block diagram showing a configuration of a conventional eclectic DA converter.
DETAILED DESCRIPTION
[0030] Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the drawings, the same elements are given the same reference numerals and repetitive descriptions will be omitted as necessary.
First Embodiment
[0031] First, a DA converter according to a first embodiment will be described. FIG. 1 is a circuit block diagram showing a configuration of a DA converter 100 according to the first embodiment. The DA converter 100 includes a segment type DA converter 11 and an R-2R resistance ladder DA converter 21. The segment type DA converter 11 and the R-2R resistance ladder DA converter 21 are coupled in series between a power source voltage VDD and a ground voltage GND. A digital signal of (m+n) bits is inputted into the DA converter 100. Here, n and m are integers larger than or equal to 2. The upper m bits of the digital signal of (m+n) bits are inputted into the segment type DA converter 11. The lower n bits of the digital signal of (m+n) bits are inputted into the R-2R resistance ladder DA converter 21. The segment type DA converter 11 includes a driver unit 1a, a segment decoder unit 2a, and a constant current cell unit 3a. The upper m bits are inputted into the segment decoder unit 2a via the driver unit 1a. The segment decoder unit 2a decodes the upper m bits and outputs a generated decode signal. The constant current cell unit 3a includes (2^m-1) constant current sources Ia1 to Ia(2^m-1) and (2^m-1) switches Sa1 to Sa(2^m-1). A constant current source Iak (k is an integer satisfying 1≦k≦2^m-1) and a switch Sak are coupled in series between the power source voltage VDD and the R-2R resistance ladder DA converter 21 (node Va of the R-2R resistance ladder DA converter 21). A corresponding decode signal from the segment decoder unit 2a is inputted into the control terminal of the switch Sak.
The R-2R resistance ladder DA converter 21 includes an R-2R driver unit 4, a constant current cell unit 5a, an R-2R resistance ladder 6a, and a resistance Rc. The node Va of the R-2R resistance ladder DA converter 21 is coupled to the segment type DA converter 11. The lower n bits are inputted into the constant current cell unit 5a via the R-2R driver unit 4. The constant current cell unit 5a includes n constant current sources Ib1 to Ibn, n switches Sb1 to Sbn, n switches Sc1 to Scn, and n inverters IVa1 to IVan. The positive terminal of a constant current source Ibj (j is an integer satisfying 1≦j≦n) is coupled to a node Nj of the R-2R resistance ladder 6a described below via a switch Sbj. The negative terminal of the constant current source Ibj is coupled to the ground voltage GND. A corresponding lower bit is inputted into the control terminal of the switch Sbj. The switch Scj is coupled between the positive terminal of the constant current source Ibj and the node Va. An inverted signal of a corresponding lower bit is inputted into the control terminal of the switch Scj via the inverter IVaj. The R-2R resistance ladder 6a includes resistances Ra1 to Ra(n-1) and resistances Rb1 to Rbn. Here, the resistance value of the resistances Ra1 to Ra(n-1) is 2R. The resistance value of the resistances Rb1 to Rbn is R. An output voltage Vout is outputted from the output terminal of the R-2R resistance ladder 6a. The resistances Rb1 to Rbn are coupled in series between the node Va and the output terminal. Terminals on the output terminal sides of the resistances Rb1 to Rbn are respectively defined as nodes N1 to Nn. The resistances Ra1 to Ra(n-1) are respectively coupled between the nodes N1 to N(n-1) and the node Va. Therefore, a combined resistance value between the node Va and the output terminal of the R-2R resistance ladder 6a is R. The resistance Rc is coupled between the node Va of the R-2R resistance ladder 6a and a reference voltage source that generates a reference voltage Vref. The resistance value of the resistance Rc is 2R. The current value of the constant current sources Ia1 to Ia(2^m-1) of the segment type DA converter 11 and the current value of the constant current sources Ib1 to Ibn of the R-2R resistance ladder DA converter 21 are I. Next, an operation of the DA converter 100 will be described. When the voltage at the node Va is Va, the voltage Va is determined by a current value obtained by adding an output current βI of the constant current cell unit 3a to an output current nI of the constant current cell unit 5a and the R-2R resistance ladder 6a. Here, β is the number of the constant current sources that are turned on in the constant current cell unit 3a, and β is an integer from 0 to (2^m-1). In this case, the voltage Va is represented by the following formula (1): Va = Vref + (β-n)×I×2R (1) The output voltage Vout outputted from the DA converter 100 is represented by the following formula (2): Vout = Va - (1/2^n)×α×I×2R (2) Here, α is an integer from 0 to (2^n-1). Next, a current flow in the DA converter 100 will be described. The output current βI of the constant current cell unit 3a flows into the node Va. The current value of the output current βI varies in a range from 0 to (2^m-1)I according to variation of data of the upper m bits. The output current nI of the constant current cell unit 5a and the R-2R resistance ladder 6a also flows into the node Va. The current value of the output current nI is always nI and constant.
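As a numerical cross-check of formulas (1) and (2), the following Java sketch (assuming arbitrary unit values I = R = 1 and Vref = 0) sweeps β over 0 to 7 and α over 0 to 31 for the 8-bit case described next (m = 3, n = 5); it yields 256 distinct, equally spaced levels spanning exactly the range stated in formula (3) below.

```java
// Numeric check of formulas (1) and (2) of the first embodiment, using
// arbitrary unit values (I = 1, R = 1, Vref = 0). For the 8-bit example
// (m = 3, n = 5) this reproduces the range of formula (3):
// Vref - (191/16)*I*R <= Vout <= Vref + 4*I*R.
import java.util.TreeSet;

public class DaConverter100Model {
    static final int N = 5;                              // number of lower bits
    static final double I = 1.0, R = 1.0, VREF = 0.0;    // arbitrary units

    static double vout(int beta, int alpha) {
        double va = VREF + (beta - N) * I * 2 * R;            // formula (1)
        return va - (alpha / (double) (1 << N)) * I * 2 * R;  // formula (2)
    }

    public static void main(String[] args) {
        TreeSet<Double> levels = new TreeSet<>();
        for (int beta = 0; beta <= 7; beta++)
            for (int alpha = 0; alpha < (1 << N); alpha++)
                levels.add(vout(beta, alpha));
        System.out.printf("levels=%d, min=%.4f, max=%.4f%n",
                levels.size(), levels.first(), levels.last());
        // Expected: levels=256, min=-11.9375 (= -191/16), max=4.0
    }
}
```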
When the output current βI and the output current nI have the same value, an equilibrium state is generated, and no current flows in the resistance Rc. When the output current βI is larger than the output current nI, a current having a current value (β-n)I flows from the node Va to the reference voltage Vref in the resistance Rc. On the other hand, when the output current βI is smaller than the output current nI, a current having a current value (n-β)I flows from the reference voltage Vref to the node Va in the resistance Rc. The maximum current that flows in the DA converter 100 is (2^m-1)I. Here, a current flow in an eclectic DA converter 900 which handles (m+n) bits including the upper m bits and the lower n bits will be discussed. In this case, the maximum current that flows in a segment type DA converter 94 is (2^m-1)I. A current that flows in an R-2R resistance ladder DA converter 95 is nI. Therefore, the maximum current that flows in the eclectic DA converter 900 is {(2^m-1)+n}I. Therefore, the DA converter 100 can reduce current consumption by nI compared with the eclectic DA converter 900. Thus, according to the present configuration, it is possible to provide a low power consumption DA converter. Specifically, in the eclectic DA converter 900, the segment type DA converter 94 and the R-2R resistance ladder DA converter 95 are coupled in parallel. Further, the segment type DA converter 94 and the R-2R resistance ladder DA converter 95 include constant current sources coupled to the power source in the same polarity. Therefore, it is necessary to supply current respectively and separately to the segment type DA converter 94 and the R-2R resistance ladder DA converter 95. On the other hand, in the DA converter 100 according to the present embodiment, the segment type DA converter 11 and the R-2R resistance ladder DA converter 21 are coupled in series. In other words, the constant current sources Ib1 to Ibn of the constant current cell unit 5a use all or part of the current outputted from the constant current sources Ia1 to Ia(2^m-1) of the constant current cell unit 3a. Thereby, the DA converter 100 can reduce consumption current compared with the eclectic DA converter 900. Next, as a specific example, a case in which the DA converter 100 is an 8-bit DA converter will be described. Hereinafter, the 8-bit DA converter 100 is referred to as a DA converter 101. FIG. 2 is a circuit block diagram showing a configuration of the DA converter 101. An 8-bit digital signal is divided into the upper 3 bits D[7 to 5] and the lower 5 bits D[4 to 0], and inputted into the DA converter 101. The constant current cell unit 3a is provided with 7 (=2^3-1) constant current sources Ia1 to Ia7. The constant current cell unit 5a is provided with 5 constant current sources Ib1 to Ib5. The R-2R resistance ladder 6a includes resistances Ra1 to Ra4 and resistances Rb1 to Rb5. [0045] FIG. 3 is a graph showing an output voltage Vout of the DA converter 101. In FIG. 3, the horizontal axis indicates 8-bit code and the vertical axis indicates the value of the output voltage Vout. The DA converter 101 outputs 256 steps (8 bits) of output voltages in a range shown by the following formula (3): Vref - (191/16)IR ≦ Vout ≦ Vref + 4IR (3) Next, the current flow in the DA converter 101 will be described. The output current βI of the constant current cell unit 3a flows into the node Va. The current value of the output current βI varies in a range from 0 to 7I according to variation of data of the upper 3 bits.
The output current nI of the constant current cell unit 5a and the R-2R resistance ladder 6a also flows into the node Va. The current value of the output current nI is always 5I and constant. When the output current βI and the output current nI have the same current value of 5I, an equilibrium state is generated, and no current flows in the resistance Rc. When the output current βI is larger than the output current nI, a current having a current value up to (7-5)I=2I flows from the node Va to the reference voltage Vref in the resistance Rc. When the output current βI is smaller than the output current nI, a current having a current value up to (5-0)I=5I flows from the reference voltage Vref to the node Va in the resistance Rc. The maximum current that flows in the DA converter 101 is 7I. Here, a current flow in the eclectic DA converter 900 which handles 8 bits including the upper 3 bits and the lower 5 bits will be discussed. In this case, the maximum current that flows in the segment type DA converter 94 is 7I. The current that flows in the R-2R resistance ladder DA converter 95 is 5I. Therefore, the maximum current that flows in the eclectic DA converter 900 is 12I. Thus, it is possible to specifically confirm that the DA converter 101 can reduce current consumption by nI compared with the eclectic DA converter 900.
Second Embodiment
[0050] Next, a DA converter according to a second embodiment will be described. FIG. 4 is a circuit block diagram showing a configuration of a DA converter 200 according to the second embodiment. The DA converter 200 is a modified DA converter 100 of the first embodiment in which the segment type DA converter 11 is replaced by a segment type DA converter 12. The segment type DA converter 12 is a modified segment type DA converter 11 in which the constant current cell unit 3a is replaced by a constant current cell unit 3b. The constant current cell unit 3b is a modified constant current cell unit 3a of the DA converter 100 to which switches Sd1 to Sd(2^m-1) and inverters IVb1 to IVb(2^m-1) are added. In the constant current cell unit 3b, a switch Sdk (k is an integer satisfying 1≦k≦2^m-1) is coupled between the negative terminal of a constant current source Iak and the reference voltage Vref. An inverted signal of a corresponding output signal of the segment decoder unit 2a is inputted into the control terminal of the switch Sdk via the inverter IVbk. The other configuration of the DA converter 200 is the same as that of the DA converter 100, so the description is omitted. Next, an operation of the DA converter 200 will be described. In the constant current cell unit 3b, when the switch Sak is turned off, the switch Sdk is turned on. In this case, a current flows from the constant current source Iak to the reference voltage Vref. On the other hand, when the switch Sak is turned on, the switch Sdk is turned off. In this case, a current flows from the constant current source Iak to the node Va. Therefore, the constant current cell unit 3b always outputs a current having a value of (2^m-1)I. On the other hand, in the same manner as in the first embodiment, the constant current cell unit 5a outputs a current having a value of nI. In other words, in the DA converter 200, a current having a current value (β-n)I always flows from the reference voltage Vref to the node Va in the resistance Rc. In this case, even when there is a parasitic resistance between the reference voltage source (not shown in FIG.
4) that generates the reference voltage Vref and the resistance Rc, the value of the current (β-n)I flowing in the resistance Rc is constant, so it is possible to prevent the reference voltage Vref from fluctuating. Therefore, according to the present configuration, it is possible to output a stable output voltage by preventing the reference voltage Vref from fluctuating. In other words, the DA converter 200 can generate an output voltage whose fluctuation amplitude is constant.
Third Embodiment
[0055] Next, a DA converter according to a third embodiment will be described. FIG. 5 is a circuit block diagram showing a configuration of a DA converter 300 according to the third embodiment. The DA converter 300 is a modified DA converter 100 of the first embodiment in which the segment type DA converter 11 and the R-2R resistance ladder DA converter 21 are respectively replaced by a segment type DA converter 13 and an R-2R resistance ladder DA converter 22. The segment type DA converter 13 is a modified segment type DA converter 11 in which the constant current cell unit 3a is replaced by a constant current cell unit 3c. The constant current cell unit 3c is a modified constant current cell unit 3a from which the constant current source Ia(2^m-1) and the switch Sa(2^m-1) are deleted. Among the (2^m-1) decode signals generated by the segment decoder unit 2a, (2^m-2) decode signals are inputted into the constant current cell unit 3c. The decode signal other than the decode signals inputted into the constant current cell unit 3c is inputted into the R-2R resistance ladder DA converter 22. The R-2R resistance ladder DA converter 22 is a modified R-2R resistance ladder DA converter 21 in which the constant current cell unit 5a and the R-2R resistance ladder 6a are respectively replaced by a constant current cell unit 5b and the R-2R resistance ladder 6b. The constant current cell unit 5b is a modified constant current cell unit 5a of the R-2R resistance ladder DA converter 21 to which a constant current source Ib(n+1), a switch Sb(n+1), a switch Sc(n+1), and an inverter IVa(n+1) are added. The positive terminal of the constant current source Ib(n+1) is coupled to the output voltage Vout via the switch Sb(n+1). The positive terminal of the constant current source Ib(n+1) is also coupled to the node Va via the switch Sc(n+1). The negative terminal of the constant current source Ib(n+1) is coupled to the ground voltage GND. A corresponding decode signal from the segment decoder unit 2a is inputted into the control terminal of the switch Sb(n+1). An inverted signal of a corresponding decode signal from the segment decoder unit 2a is inputted into the control terminal of the switch Sc(n+1). The R-2R resistance ladder DA converter 22 is a modified R-2R resistance ladder DA converter 21 in which the constant current cell unit 5a is replaced by the constant current cell unit 5b and further the R-2R resistance ladder 6a is replaced by the R-2R resistance ladder 6b. The R-2R resistance ladder 6b is a modified R-2R resistance ladder 6a to which a resistance Ran and a resistance Rb(n+1) are added on the side of the output voltage Vout. The resistance value of the resistance Ran is 2R. The resistance value of the resistance Rb(n+1) is R. Therefore, in the same way as in the R-2R resistance ladder 6a, a combined resistance value between the node Va and the output terminal of the R-2R resistance ladder 6b is R.
In other words, it can be said that the DA converter 300 is a modified DA converter 100 in which one of the constant current sources in the segment type DA converter is moved into the R-2R resistance ladder DA converter. Next, an operation of the DA converter 300 will be described. In the description below, as an example, a case in which the DA converter 300 is an 8-bit DA converter will be described. Hereinafter, the 8-bit DA converter 300 is referred to as a DA converter 301. Here, among the 7 decode signals generated by decoding the upper 3 bits (m=3), 6 decode signals are inputted into the segment type DA converter 13. The lower 5 bits (n=5) and one decode signal other than those inputted into the segment type DA converter 13 are inputted into the R-2R resistance ladder DA converter 22. In this case, the voltage at the node Va is determined by a current value obtained by adding an output current βI of the constant current cell unit 3c to an output current (n+1)I of the constant current cell unit 5b. Here, β is an integer from 0 to 6 (=2^3-2). In this case, the voltage Va is represented by the following formula (4): Va = Vref + {β-(n+1)}×I×2R (4) The output voltage Vout outputted from the DA converter 300 is represented by the following formula (5). Here, α is the same as that in the formula (2): Vout = Va - (1/2^n)×α×I×2R (5) Next, the current flow in the DA converter 301 will be described. The output current βI of the constant current sources Ia1 to Ia(2^m-2) of the constant current cell unit 3c flows into the node Va. The output current βI varies in a range from 0 to 6I according to variation of data of the upper 3 bits. A current (n+1)I flows from the node Va to the constant current cell unit 5b. Since n is 5, the current value of the output current (n+1)I is always 6I and constant. When the output current βI is 6I, an equilibrium state is generated, and no current flows in the resistance Rc. When the output current βI is larger than 6I, a current having a current value (β-6)I flows from the node Va to the reference voltage Vref in the resistance Rc. On the other hand, when the output current βI is smaller than the output current 6I, a current having a current value (6-β)I flows from the reference voltage Vref to the node Va in the resistance Rc. Therefore, the maximum current that flows in the DA converter 300 is 6I. Therefore, in the DA converter 301, the maximum current consumption can be reduced to 6I. Thus, according to the present configuration, it is possible to further reduce the current consumption compared with the DA converter 101 according to the first embodiment. [0067] FIG. 6 is a graph showing an output voltage Vout of the DA converter 301. In FIG. 6, the horizontal axis indicates 8-bit code and the vertical axis indicates the value of the output voltage Vout. The DA converter 301 outputs 256 steps (8 bits) of output voltages in a range shown by the following formula (6): ≦Vout≦Vref-16IR (6) In the DA converter 300, a part of the constant current sources in the segment type DA converter is moved into the R-2R resistance ladder DA converter. At this time, the number of constant current sources that can be moved is not limited to one, but multiple constant current sources can be moved.
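The supply-current saving of the DA converter 301 can likewise be checked numerically. The sketch below evaluates formulas (4) and (5) with arbitrary units (I = R = 1, Vref = 0); the relocated source Ib(n+1) and its switches are deliberately left out of the model, so it only illustrates the 6I maximum current and the Va/Vout relation of the remaining sources.

```java
// Sketch of the third embodiment (DA converter 301): one segment current source
// is moved into the R-2R side, so the segment cell 3c has at most 6 sources on
// and the maximum current flowing in the converter drops from 7I to 6I.
// Formulas (4) and (5) are evaluated with arbitrary units (I = R = 1, Vref = 0);
// the relocated source Ib(n+1) and its switches are not modelled here.
public class DaConverter301Model {
    static final int N = 5;
    static final double I = 1.0, R = 1.0, VREF = 0.0;

    static double vout(int beta, int alpha) {                 // beta in 0..6, alpha in 0..31
        double va = VREF + (beta - (N + 1)) * I * 2 * R;      // formula (4)
        return va - (alpha / (double) (1 << N)) * I * 2 * R;  // formula (5)
    }

    public static void main(String[] args) {
        int betaMax = 6;                                      // only 2^3 - 2 segment sources remain
        System.out.println("maximum converter current = " + betaMax * I + " x I");
        System.out.println("Vout(beta=6, alpha=0)  = " + vout(6, 0));   // equals Vref
        System.out.println("Vout(beta=0, alpha=31) = " + vout(0, 31));  // most negative value per formulas (4)-(5)
    }
}
```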
However, in order to minimize the current consumption in the entire DA converter, it is desired that the number of the constant current sources included in the segment type DA converter is the same as the number of the constant current sources included in the R-2R resistance ladder DA converter. Specifically, when the number of the constant current sources moved from the segment type DA converter to the R-2R resistance ladder DA converter is p, p is desired to be an integer of 1 or more satisfying p=(2^m-n-1)/2. In this case, among the (2^m-1) decode signals generated by decoding the upper m bits, (2^m-1-p) decode signals are inputted into the segment type DA converter 13. The lower n bits and p decode signals other than those inputted into the segment type DA converter 13 are inputted into the R-2R resistance ladder DA converter 22.
Fourth Embodiment
[0069] Next, a DA converter according to a fourth embodiment will be described. FIG. 7 is a circuit block diagram showing a configuration of a DA converter 400 according to the fourth embodiment. As shown in FIG. 7, the DA converter 400 includes a driver unit 1b, a segment decoder unit 2b, a constant current cell unit 3d, an R-2R driver unit 4, a constant current cell unit 5c, an R-2R resistance ladder 6a, a resistance Rc, and selectors 8 and 9. In the present embodiment, to simplify the description, a case in which an 8-bit digital signal is inputted into the DA converter 400 will be described. In this case, an 8-bit digital signal is divided into the upper 3 bits D[7 to 5] and the lower 5 bits D[4 to 0], and inputted into the DA converter 400. However, the digital signal inputted into the DA converter 400 is not limited to an 8-bit signal. In the same way as in the first to the third embodiments, a digital signal of (m+n) bits can be inputted into the DA converter 400. The R-2R driver unit 4 and the R-2R resistance ladder 6a of the DA converter 400 are the same as those of the DA converter 100 according to the first embodiment, so the description is omitted. The upper 3 bits D[7 to 5] are inputted into the driver unit 1b. The driver unit 1b divides the inputted upper 3 bits D[7 to 5] into the most significant bit D[7] and the other upper bits D[6 and 5] and outputs them. The most significant bit D[7] and the other upper bits D[6 and 5] are inputted into the segment decoder unit 2b from the driver unit 1b. The segment decoder unit 2b decodes the other upper bits D[6 and 5] according to the most significant bit D[7]. The signals decoded in the segment decoder unit 2b are outputted to the selectors 8 and 9 as output signals. The most significant bit D[7], the lower 5 bits D[4 to 0], and the output signals of the segment decoder unit 2b are inputted into the selector 8. Further, an inverted signal of the lower 5 bits D[4 to 0] is inputted into the selector 8 via the inverter INV1. The selector 8 outputs output signals Xa1 to Xa5 and Xb1 to Xb5 according to the most significant bit D[7]. The lower 5 bits D[4 to 0] and the output signals of the segment decoder unit 2b are inputted into the selector 9. Further, an inverted signal of the lower 5 bits D[4 to 0] is inputted into the selector 9 via the inverter INV1. Furthermore, an inverted signal of the most significant bit D[7] is inputted into the selector 9 via the inverter INV2. The selector 9 outputs output signals Ya1 to Ya5 and Yb1 to Yb5 according to the inverted signal of the most significant bit D[7] (in other words, according to the most significant bit D[7]).
The constant current cell unit 3d includes constant current sources Ia1 to Ia5, switches Sa1 to Sa5, and switches Sf1 to Sf5. The constant current cell unit 5c includes constant current sources Ib1 to Ib5, switches Sb1 to Sb5, and switches Sg1 to Sg5. A constant current source Iah (h is an integer satisfying 1≦h≦5), a switch Sah, a switch Sbh, and a constant current source Ibh are coupled in series in this order between the power source voltage VDD and the ground voltage GND. The switch Sah and the switch Sbh are coupled to each other via a corresponding node Nh of the R-2R resistance ladder 6a. The negative terminal of the constant current source Iah is coupled to the node Va via the switch Sfh. The positive terminal of the constant current source Ibh is coupled to the node Va via the switch Sgh. An output signal Xah is inputted into the control terminal of the switch Sah. An output signal Xbh is inputted into the control terminal of the switch Sfh. An output signal Yah is inputted into the control terminal of the switch Sbh. An output signal Ybh is inputted into the control terminal of the switch Sgh. Next, the operation of the DA converter 400 will be described. The operation of the DA converter 400 is controlled by the most significant bit D[7]. FIG. 8 is an operation table showing the operation of the DA converter 400. The operation of the segment decoder unit 2b is controlled by the most significant bit D[7]. First, the operation when the most significant bit D[7] is "0" will be described. In this case, when the other upper bits D[6 and 5] are [00], the segment decoder unit 2b outputs [00011]. When the other upper bits D[6 and 5] are [01], the segment decoder unit 2b outputs [00111]. When the other upper bits D[6 and 5] are [10], the segment decoder unit 2b outputs [01111]. When the other upper bits D[6 and 5] are [11], the segment decoder unit 2b outputs [11111]. Next, the operation when the most significant bit D[7] is "1" will be described. In this case, when the other upper bits D[6 and 5] are [00], the segment decoder unit 2b outputs [11111]. When the other upper bits D[6 and 5] are [01], the segment decoder unit 2b outputs [01111]. When the other upper bits D[6 and 5] are [10], the segment decoder unit 2b outputs [00111]. When the other upper bits D[6 and 5] are [11], the segment decoder unit 2b outputs [00011]. In summary, signals corresponding to the other upper bits [6 and 5] outputted by the segment decoder unit 2b are inverted when the most significant bit D[7] is inverted. Next, the operation of the selector 8 will be described. The operation of the selector 8 is controlled by the most significant bit D[7]. When the most significant bit D[7] is "0", the selector 8 outputs the output data of the segment decoder unit 2b as the output signals Xb1 to Xb5. The selector 8 also outputs the output signals Xa1 to Xa5 as signals to turn off the switches Sa1 to Sa5. Thus, in this case, the number of the constant current sources that are turned on in the constant current cell unit 3d is controlled by the output data of the segment decoder unit 2b. Therefore, the constant current cell unit 3d functions as a constant current cell unit of the segment type DA converter. On the other hand, when the most significant bit D[7] is "1", the selector 8 outputs the output data of the R-2R driver unit 4 as the output signals Xa1 to Xa5. The selector 8 also outputs the inverted data of the output data of the R-2R driver unit 4 as the output signals Xb1 to Xb5. 
Thus, in this case, the coupling relationship between the constant current cell unit 3d and the R-2R resistance ladder 6a is controlled by the output data of the R-2R driver unit 4. Therefore, the constant current cell unit 3d functions as a constant current cell unit of the R-2R resistance ladder DA converter. Next, the operation of the selector 9 will be described. The inverted signal of the most significant bit D[7] is inputted into the selector 9. Therefore, the selector 9 operates complementarily with the selector 8. When the most significant bit D[7] is "1", the selector 9 outputs the output data of the segment decoder unit 2b as the output signals Yb1 to Yb5. The selector 9 also outputs the output signals Ya1 to Ya5 as signals to turn off the switches Sb1 to Sb5. Thus, in this case, the number of the constant current sources that are turned on in the constant current cell unit 5c is controlled by the output data of the segment decoder unit 2b. Therefore, the constant current cell unit 5c functions as a constant current cell unit of the segment type DA converter. On the other hand, when the most significant bit D[7] is "0", the selector 9 outputs the output data of the R-2R driver unit 4 as the output signals Ya1 to Ya5. The selector 9 also outputs the inverted data of the output data of the R-2R driver unit 4 as the output signals Yb1 to Yb5. Thus, in this case, the coupling relationship between the constant current cell unit 5c and the R-2R resistance ladder 6a is controlled by the output data of the R-2R driver unit 4. Therefore, the constant current cell unit 5c functions as a constant current cell unit of the R-2R resistance ladder DA converter. Therefore, in the DA converter 400, when the most significant bit D[7] is "0", the constant current cell unit 3d functions as a constant current cell unit of the segment type DA converter, and the constant current cell unit 5c functions as a constant current cell unit of the R-2R resistance ladder DA converter. On the other hand, when the most significant bit D[7] is "1", the constant current cell unit 3d functions as a constant current cell unit of the R-2R resistance ladder DA converter, and the constant current cell unit 5c functions as a constant current cell unit of the segment type DA converter. In this way, the DA converter 400 can replace the segment type DA converter with the R-2R resistance ladder DA converter and vice versa in their coupling relationships on the basis of the most significant bit. In the DA converter 400, when the most significant bit D[7] is "0", as the other upper bits D[6 and 5] change from [00] to [01] to [10] to [11], the number of the constant current sources that are turned on in the constant current cell unit 3d changes from 2 to 3 to 4 to 5. On the other hand, when the most significant bit D[7] is "1", as the other upper bits D[6 and 5] change from [00] to [01] to [10] to [11], the number of the constant current sources that are turned on in the constant current cell unit 5c changes from 5 to 4 to 3 to 2. Thereby, when the most significant bit D[7] is "0", the voltage Va at the node Va of the DA converter 400 is determined by a current value obtained by adding the output current βI of the constant current cell unit 3d to the output current nI of the constant current cell unit 5c and the R-2R resistance ladder 6a. Here, the number of the upper bits is m, and the number of the lower bits is n. β is an integer from {n-(2^(m-1)-1)} to n.
The voltage Va at this time is represented by the following formula (7): Va = Vref + (β-n)×I×2R (7) Here, α is an integer from 0 to (2^n-1). In this case, the output voltage Vout outputted from the DA converter 400 is represented by the following formula (8): Vout = Va - (1/2^n)×α×I×2R (8) When the most significant bit D[7] is "1", the voltage Va at the node Va of the DA converter 400 is determined by a current value obtained by adding the output current βI of the constant current cell unit 5c to the output current nI of the constant current cell unit 3d and the R-2R resistance ladder 6a. The voltage Va at this time is represented by the following formula (9): Va = Vref + (n-β)×I×2R (9) In this case, the output voltage Vout outputted from the DA converter 400 is represented by the following formula (10): Vout = Va + (1/2^n)×α×I×2R (10) Since digital signals of the upper 3 bits and the lower 5 bits are inputted into the DA converter 400, m is 3 and n is 5 in the above formula. FIG. 9 is a graph showing an output voltage Vout of the DA converter 400. In FIG. 9, the horizontal axis indicates 8-bit code and the vertical axis indicates the value of the output voltage Vout. The DA converter 400 outputs an 8-bit voltage in a range shown by the following formula (11): Vref - (127/16)IR ≦ Vout ≦ Vref + (127/16)IR (11) Next, the current flow in the DA converter 400 will be described. When one of the constant current cell unit 3d and the constant current cell unit 5c functions as a segment type DA converter, as shown in FIG. 8, a current having a value of 2I to 5I flows in the constant current cell unit 3d or the constant current cell unit 5c. When one of the constant current cell unit 3d and the constant current cell unit 5c functions as an R-2R resistance ladder DA converter, a current having a value of 5I flows in the constant current cell unit 3d or the constant current cell unit 5c. Therefore, the maximum value of the current that flows in the constant current cell unit 3d and the constant current cell unit 5c of the DA converter 400 is 5I. In further generalization, when one of the constant current cell unit 3d and the constant current cell unit 5c functions as a segment type DA converter, a current having a value from {n-(2^(m-1)-1)}I to nI flows in the constant current cell unit 3d or the constant current cell unit 5c. When one of the constant current cell unit 3d and the constant current cell unit 5c functions as an R-2R resistance ladder DA converter, a current having a value of nI flows in the constant current cell unit 3d or the constant current cell unit 5c. Therefore, the maximum value of the current that flows in the constant current cell unit 3d and the constant current cell unit 5c is nI. Therefore, in the DA converter 400, the maximum current consumption can be reduced to nI. Thus, according to the present configuration, it is possible to further reduce the current consumption compared with the DA converters 100, 200, and 300. Here, the number of bits of a signal provided to a constant current cell unit that functions as the segment type DA converter is assumed to be N (N is an integer of 1 or more). The number of bits of a signal provided to a constant current cell unit that functions as the R-2R resistance ladder DA converter is assumed to be M (M is an integer of 1 or more). The number of bits of a digital signal provided to the DA converter 400 is assumed to be K (K=1+M+N).
In this case, the number of constant current sources required for the constant current cell unit that functions as the segment type DA converter is 2^N-1. The number of constant current sources required for the constant current cell unit that functions as the R-2R resistance ladder DA converter is M. In this case, it is required to satisfy M≧2^N-1 for the DA converter 400 to function as a DA converter. The present invention is not limited to the above-described embodiments, but may be appropriately modified without departing from the scope of the invention. For example, in the DA converter 300 according to the third embodiment, the constant current sources of the constant current cell unit 3c can be coupled to the terminal of the resistance Rc on the side of the reference voltage Vref. Thereby, in the same manner as the DA converter 200 according to the second embodiment, the DA converter 300 can generate an output voltage whose fluctuation amplitude is constant. In the DA converter 400 according to the fourth embodiment, the constant current sources of one of the constant current cell unit 3d and the constant current cell unit 5c, which functions as the segment type DA converter, can be coupled to the terminal of the resistance Rc on the side of the reference voltage Vref. Thereby, in the same manner as the DA converter 200 according to the second embodiment, the DA converter 400 can generate an output voltage whose fluctuation amplitude is constant. Further, in the DA converter 400 according to the fourth embodiment, the replacement of the segment type DA converter with the R-2R resistance ladder DA converter and vice versa can be performed based on any bit other than the most significant bit. The resistance value of the resistance Rc of the above embodiments is not limited to 2R. The resistance value of the resistance Rc may be two times the combined resistance value of the R-2R resistance ladder circuit. The resistance value of the resistance Ra1 of the R-2R resistance ladder 6a or the resistance Ra(n+1) of the R-2R resistance ladder 6b may be R. In this case, the combined resistance value of the R-2R resistance ladder is (2/3)R. Therefore, the resistance value of the resistance Rc may be (4/3)R. The current value of the constant current sources included in a segment type DA converter is not limited to I, but may be a value different from the current value of the constant current sources included in an R-2R resistance ladder DA converter. For example, the current value of the constant current sources included in a segment type DA converter can be xI (x is an arbitrary positive value). In this case, the resistance value of the resistance Rc may be (1/x)×2R. In other words, it is only required that a value obtained by multiplying the current value of the constant current sources included in a segment type DA converter by the resistance value of the resistance Rc is a constant value. Thereby, the same function as that of the above described DA converters 100, 101, 200, 300, and 400 can be realized. Although, in the above embodiments, the R-2R resistance ladder 6a or 6b is used, another resistance ladder can be used. For example, a resistance ladder can be used in which the resistances Ra1 to Ran of the R-2R resistance ladder 6a or the resistances Ra1 to Ra(n+1) of the R-2R resistance ladder 6b are replaced by resistances having a resistance value of R.
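Pulling the fourth embodiment together, the following behavioural sketch applies the FIG. 8 decoding of D[6 and 5] and formulas (7) to (10) to every 8-bit code, again with arbitrary units (I = R = 1, Vref = 0). The sign of the α term in the D[7]="1" branch is taken as positive, which is what makes the result match the symmetric range of formula (11); the sketch is a numerical model only, not the switch-level circuit of FIG. 7.

```java
// Behavioral sketch of the fourth embodiment's MSB-controlled role swap
// (arbitrary units I = R = 1, Vref = 0). The thermometer decoding of D[6:5]
// follows the operation table of FIG. 8, and formulas (7)-(10) give Vout.
public class DaConverter400Model {
    static final int N = 5;
    static final double I = 1.0, R = 1.0, VREF = 0.0;

    static double vout(int code) {                 // 8-bit input code
        int msb   = (code >> 7) & 1;
        int upper = (code >> 5) & 3;               // D[6:5]
        int alpha = code & 31;                     // lower 5 bits
        // FIG. 8: D[7]=0 -> 2,3,4,5 sources on; D[7]=1 -> 5,4,3,2 sources on.
        int beta  = (msb == 0) ? upper + 2 : 5 - upper;
        double step = I * 2 * R;
        if (msb == 0) {
            double va = VREF + (beta - N) * step;                  // formula (7)
            return va - (alpha / (double) (1 << N)) * step;        // formula (8)
        } else {
            double va = VREF + (N - beta) * step;                  // formula (9)
            return va + (alpha / (double) (1 << N)) * step;        // formula (10)
        }
    }

    public static void main(String[] args) {
        double min = vout(0), max = vout(0);
        for (int code = 0; code < 256; code++) {
            min = Math.min(min, vout(code));
            max = Math.max(max, vout(code));
        }
        // Expected range of formula (11): -127/16 <= Vout <= +127/16 (with Vref = 0).
        System.out.printf("min=%.4f, max=%.4f%n", min, max);
    }
}
```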
{"url":"http://www.faqs.org/patents/app/20120050085","timestamp":"2014-04-18T19:55:19Z","content_type":null,"content_length":"81068","record_id":"<urn:uuid:80746fe1-53f1-40f7-8131-8b8ad3fac695>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Symmetric amenability and the nonexistence of Lie and Jordan derivations. (English) Zbl 0888.46024
The paper is devoted to some classes of Banach algebras in which Jordan and Lie derivations are reduced to (associative) derivations. A Banach algebra is symmetrically amenable if it has an approximate diagonal consisting of symmetric tensors. A Jordan derivation from a Banach algebra $U$ into a Banach $U$-bimodule $X$ is a linear map $D$ with $D(a^2)=aD(a)+D(a)a$, $a\in U$. A Lie derivation $D:U\to X$ is a linear map which satisfies $D(ab-ba)=aD(b)-D(b)a+D(a)b-bD(a)$, $a,b\in U$. It is clear that if $D$ is an (ordinary) derivation (i.e. $D(ab)=aD(b)+D(a)b$) then it is a Jordan and a Lie derivation as well. The author proves that if $U$ is symmetrically amenable then every continuous Jordan derivation into a $U$-bimodule is a derivation. This result can be extended to other algebras, for example all $C^*$-algebras. If the identity of $U$ is contained in a subalgebra isomorphic to the full matrix algebra $M_n$ ($n\ge 2$) then every Jordan derivation from $U$ is a derivation. Similar results are developed for Lie derivations. In similar situations every continuous Lie derivation is the sum of an ordinary derivation and a map $\Delta$ from the algebra $U$ into the $U$-bimodule $X$ with $\Delta(ab-ba)=0$ and $a\Delta(b)=\Delta(b)a$ for all $a,b\in U$.
46H25 Normed modules and Banach modules, topological modules
46L57 Derivations, dissipations and positive semigroups in $C^*$-algebras
46H70 Nonassociative topological algebras
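As a quick verification of the remark above that every ordinary derivation is automatically both a Jordan and a Lie derivation, it suffices to expand the Leibniz rule:
\[
D(a^2)=D(a\cdot a)=aD(a)+D(a)a,
\]
\[
D(ab-ba)=\bigl(aD(b)+D(a)b\bigr)-\bigl(bD(a)+D(b)a\bigr)=aD(b)-D(b)a+D(a)b-bD(a).
\]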
{"url":"http://zbmath.org/?q=an:0888.46024","timestamp":"2014-04-17T13:11:07Z","content_type":null,"content_length":"25644","record_id":"<urn:uuid:9290b50f-6c30-4e4e-940f-2cc4853afb04>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
The Unreasonable Effectiveness of Mathematics by Jonathan Witt (Dr. Witt is a Fellow of Discovery Institute and of Acton Institute) Summary:Derek Abbott's "Is Mathematics Invented or Discovered?" asks why mathematics is so effective in describing our universe, and ultimately reduces the debate to a simplistic binary of mathematics as wholly created (Abbott's position) versus the neo-Platonic idea that mathematical models can perfectly and exhaustively describe nature. Abbott overlooks the view that drove the founders of modern science: the cosmos is the product of an extraordinary mathematician but one not restricted to the mathematical. Moreover, because the founders of modern science had theological reasons for emphasizing not only the cosmic designer's surpassing intellect and freedom but also human fallibility, they emphasized the need to test their ideas empirically. In these and other ways, Judeo-Christian theism matured Platonism and, in the process, sparked the scientific revolution. Derek Abbott's recent piece in The Huffington Post, "Is Mathematics Invented or Discovered?", offers a thoughtful taxonomy of views on an issue with important metaphysical implications, but a crucial alternative possibility goes unexplored in the essay. Since Ben Wiker and I explore these issues in our book, A Meaningful World: How the Arts and Sciences Reveal the Genius of Nature, I'd like to summarize what I find useful in Abbott's piece and what I find incomplete. The Abbott essay boils down to an effort to answer a question that thinkers have wrestled with for centuries and that was nicely expressed by Albert Einstein in this way: "How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" Abbott says there is no consensus among mathematicians and scientists, but highlights four common answers: "1) Math is innate. The reason mathematics is the natural language of science, is that the universe is underpinned by the same order. The structures of mathematics are intrinsic to nature.... 2) Math is a human construct. The only reason mathematics is admirably suited for describing the physical world is that we invented it to do just that. It is a product of the human mind and we make mathematics up as we go along to suit our purposes.... 3) Math is not so successful. Those that marvel at the ubiquity of mathematical applications have perhaps been seduced by an overstatement of their successes. Analytical mathematical equations only approximately describe the real world, and even then only describe a limited subset of all the phenomena around us.... 4) Keep calm and carry on. What matters is that mathematics produces results. Save the hot air for philosophers. This is called the 'shut up and calculate' position." One doesn't have to read very hard between the lines to quickly pick up where Derek Abbott's sympathies lie. He calls #3 the realist position. Without the question-begging term appearing in scare quotes and with only a little reflection it will dawn on the reader that Option #3 primarily functions as another way of arguing for the #2 Option (math is a human construct). Option #3 also implies a straw-man characterization of Option #1 (math is innate), suggesting as it does that leading contemporary proponents of #1 necessarily assume...what?--that planets are geometrically perfect spheres?That E=MC2 perfectly describes the movement of bodies? 
Or that to see nature as possessing inherent mathematical regularities is, willy nilly, to see it as exhaustively algorithmic? But no contemporary proponents of #1 that I'm acquainted with thinks any of these things. It's true that some version of Option #1 tends to be the position of theistic mathematicians and scientists, but because theists posit a designer who is free to instantiate designs that may or may not manifest mathematical regularity, a theistic proponent of Option #1 is unlikely to insist that the operations of the natural world are wholly mathematical. Modern theists tend to assume that the maker of the universe is a crackerjack Mathematician, but these same theists also assume that this maker is undoubtedly also a pretty fair Author and Artist. The biological information necessary for the first living cell, after all, is not the compressible, mathematically tractable information of the algorithm. It's information more akin to that in a book or in the software and hardware of computer technology (although almost unimaginably more sophisticated). Abbott mentions the discovery of fractals in nature, "complex patterns, such as the Mandelbrot set ... generated from simple iterative equations," but dismisses their design implications by arguing that "any set of rules has emergent properties. For example, the rules of chess are clearly a human contrivance, yet they result in a set of elegant and sometimes surprising characteristics." Here perhaps more clearly than anywhere else in the essay we are given a glimpse at the root of Abbott's confusion. With the chess example, he's confusing the act of creating a mathematical description (rather than discovering its existence in nature) with the designing act of creating the rules of chess. But a theistic (or even deistic) understanding of Option #1 would consider the invention of chess as an echo of the cosmic designer's invention of various mathematically tractable regularities that manifest themselves as beautiful patterns in the natural world (e.g., spiral galaxies or snowflake patterns). On this view, the parallel to discovering fractals in nature would be a student of chess discovering certain elegant patterns (and perhaps their concomitant strategy and tactics) in the game of chess. Abbott proceeds to the trope of infinite monkeys creating meaningful prose as they bang away at random on keyboards. "It appears miraculous when an individual monkey types a Shakespeare sonnet," Abbott writes. "But when we see the whole context, we realize all the monkeys are merely typing gibberish. In a similar way, it is easy to be seduced into thinking that mathematics is miraculously innate if we are overly focused on its successes, without viewing the complete picture." No, just the opposite: when we view the complete picture of the universe, what physicists, astronomers, and cosmologists refer to as the fine tuning problem comes sharply into focus. We now know that numerous laws and constants of physics and chemistry appear fine-tuned to an almost unimaginably precise degree to allow for an evolving cosmos where life could exist--such as the strength of gravity and the strong and weak nuclear forces. So why is the cosmos not instead one of the almost unimaginably more numerous set of theoretically possible configuration--ones that would have made complex chemistry impossible and so, too, complex life? Cosmologists allergic to theism have at least a couple strategies for explaining away the fine tuning problem. 
One is to shrug and say, "Well, if the universe weren't such as to allow for complex and even intelligent life, we wouldn't be here to wonder about it." That's a bit like a prisoner surviving a rain of bullets from the firing squad unscathed, and when he opens his eyes and finds a perfect bullet pattern around his body, he immediately concludes that it was pure luck rather than the merciful design of the sharpshooters. "Well after all," he tells his skeptical friends, "I wouldn't be here to wonder about my curiously good fortune if the bullets hadn't been fine tuned to miss me, now would I?" The other common dodge around the fine-tuning problem brings us closer to the rhetorical sleight of hand that Abbott indulges in during this stretch of his article. He speaks of "infinite monkeys" banging away on keyboards. Set aside for the moment that even this picture presupposes the intelligently designed machinery of keyboards and computer screens. The more immediate question is, how did an infinite slip into the party? Aren't we trying to explain something about the physical universe? Well, in order to explain the curiously fine-tuned nature of our universe, some materialists have posited multiple and even infinite universes (unseen and undetectable), ours being one of the lucky ones configured just so to allow for complex and intelligent life. Here we have the classic gambler's fallacy. A naïve man is at the roulette tables in a shady speakeasy admiring the winning streak of a guy who looks like he walked right out of central casting for The Godfather. The casino worker running the roulette wheel is sweating profusely under the occasionally threatening glare of the gambler, and the roulette wheel keeps landing on the lucky gambler's number. "Gee!" the onlooker comments, "I bet the odds of this are one in a trillion billion billion. Imagine how many gamblers must be playing roulette all over the planet right now, and here I just happen to be beside Mr. Lucky!" (The probability of a single universe just happening to be fine-tuned for intelligent life, incidentally, is astronomically higher than one in a trillion billion To his credit Abbott rejects the "fuggedaboutit" non-answer that is Option #4; but then he reduces the debate to a simplistic binary of non-Platonist/Platonist--mathematics as wholly created versus the dead-horse position that our mathematical models can perfectly and exhaustively describe nature. There's another possibility, and it's one that drove the founders of modern science by drawing them beyond an unrealistically tidy Platonism and toward the humble and searching flexibility of theism. It begins with a fundamental question that gets muddled and missed in Abbott's analysis: Why a cosmos rather than a chaos? Why a universe where highly elegant mathematical models (e.g., Kepler's laws of planetary motion, Einstein's theory of relativity) even approximate so many of the regularities of physics and chemistry, regularities that even undergird the cutting edge scientific engineering work that employ other analytical tools in addition to the purely mathematical? And more than this, why a universe where one quite accurate mathematical model (Newton's law of universal gravitation) can so compactly and elegantly describe the gravity of large bodies, and then be superseded by an even more precise and more elegant model, as if the cosmos were not only mathematical but fashioned in such a way as to allow us to progress in stair-step fashion from one model to the next? 
It's the question Guillermo Gonzalez and Jay Richards ask in their groundbreaking book The Privileged Planet. Why does the universe appear not only fine tuned for intelligent life but also fine tuned for that intelligent life to discover the underlying order of the cosmos? The perfect solar eclipse is the everyman's icon of this puzzler. Perfect solar eclipses have proven crucial in helping astronomers test Einstein's theory of relativity as well as unlock the nature of distant stars. But to be useful, the apparent size of the sun and moon in the sky has to be virtually identical. The sun is 400 bigger than the moon, but it just so happens that it's also 400 times further away, meaning it has virtually the same apparent size as the moon in our sky. The whole thing seems rigged to allow humans to make scientific discoveries and to evoke wonder at the elegance and beauty of it all. Here it's difficult to talk nonsense about perfect eclipses being "a human construct," and the patent absurdity of that notion can help wake us up to the only slightly more subtle absurdity of claiming that the curious effectiveness of mathematics for describing the physical universe is also purely a human construct. There is a cosmos, a meaningful world, whose existence needs explaining. The answer that the founders of modern science gave was that the cosmos was the product of a reasonable designer, one whose mathematical intelligence far exceeds that of any human, and yet because the human person (including the human mind) is made in the likeness of that designer (one capable of reason, imagination, and discovery), we have some hope of exploring and discovering some of the underlying mathematical order of that grandest of mathematicians. Moreover, because humans are finite and flawed, the theists who founded modern science--Copernicus, Galileo, Kepler, Boyle, Newton and all the rest--were well primed to realize that our assumptions about how the designer would have done things might be mistaken, thus the strong emphasis on the need to test those ideas, to put our hypotheses "in empirical harm's way," as philosopher Del Ratszch put it. Then, too, theistic scientists had in view a personal being as the source of the cosmos, leaving them more open to finding the kind of design characteristic of a painting or a book--that is, non-repeating, non-algorithmic design--thereby freeing them from the Platonist's box of the geometrical/mathematical (or at least from the box of a certain type of neo-Platonist). These elements of theism may be why the founders of modern science tended to be more empirically oriented than was characteristic of the proto-science of the ancient Greeks. The theist, you see, expects underlying order, but he expects the underlying order of nature to run deeper still, and so the drive of the founders of modern science to continue digging for deeper and deeper levels of hidden order. In these and other ways, Judeo-Christian theism can be said to have taken up and matured Platonism and, in the process, sparked the scientific revolution--which, after all, did not begin in ancient Greece or Rome, Arabia or India, but in Christian Europe.
Class Summary

AbstractAggregator: Abstract base class implementation of InvocableMap.EntryAggregator that supports parallel aggregation.
AbstractBigDecimalAggregator: Abstract aggregator that processes Comparable values extracted from a set of entries in a Map and returns a result in a form of a BigDecimal value.
AbstractComparableAggregator: Abstract aggregator that processes values extracted from a set of entries in a Map, with knowledge of how to compare those values.
AbstractDoubleAggregator: Abstract aggregator that processes numeric values extracted from a set of entries in a Map.
AbstractLongAggregator: Abstract aggregator that processes numeric values extracted from a set of entries in a Map.
BigDecimalAverage: Calculates an average for values of any numeric type extracted from a set of entries in a Map in a form of a BigDecimal value.
BigDecimalMax: Calculates a maximum of numeric values extracted from a set of entries in a Map in a form of a BigDecimal value.
BigDecimalMin: Calculates a minimum of numeric values extracted from a set of entries in a Map in a form of a BigDecimal value.
BigDecimalSum: Calculates a sum for values of any numeric type extracted from a set of entries in a Map in a form of a BigDecimal value.
ComparableMax: Calculates a maximum among values extracted from a set of entries in a Map.
ComparableMin: Calculates a minimum among values extracted from a set of entries in a Map.
CompositeAggregator: CompositeAggregator provides an ability to execute a collection of aggregators against the same subset of the entries in an InvocableMap, resulting in a list of corresponding aggregation results.
CompositeAggregator.Parallel: Parallel implementation of the CompositeAggregator.
Count: Calculates a number of values in an entry set.
DistinctValues: Returns the set of unique values extracted from a set of entries in a Map.
DoubleAverage: Calculates an average for values of any numeric type extracted from a set of entries in a Map.
DoubleMax: Calculates a maximum of numeric values extracted from a set of entries in a Map.
DoubleMin: Calculates a minimum of numeric values extracted from a set of entries in a Map.
DoubleSum: Sums up numeric values extracted from a set of entries in a Map.
GroupAggregator: The GroupAggregator provides an ability to split a subset of entries in an InvocableMap into a collection of non-intersecting subsets and then aggregate them separately and independently.
GroupAggregator.Parallel: Parallel implementation of the GroupAggregator.
LongMax: Calculates a maximum of numeric values extracted from a set of entries in a Map.
LongMin: Calculates a minimum of numeric values extracted from a set of entries in a Map.
LongSum: Sums up numeric values extracted from a set of entries in a Map.
PriorityAggregator: PriorityAggregator is used to explicitly control the scheduling priority and timeouts for execution of EntryAggregator-based methods.
QueryRecorder: This parallel aggregator is used to produce a QueryRecord object that contains an estimated or actual cost of the query execution for a given filter.
QueryRecorder.RecordType: RecordType enum specifies whether the QueryRecorder should be used to produce a QueryRecord object that contains an estimated or an actual cost of the query execution.
ReducerAggregator: The ReducerAggregator is used to implement functionality similar to the CacheMap.getAll(Collection) API.
Wolfram Demonstrations Project Roll Any Point on the Sphere to Any Desired Latitude-Longitude Coordinates with One Straight-Line Roll Pick a point on a sphere (in green). This point has certain latitude-longitude coordinates. In one straight-line roll, we can move this point to any desired latitude-longitude coordinates (in red). This Demonstration shows the shortest such roll. A roll in the horizontal plane of length about an axis at angle with the axis produces the following change in orientation: We want to choose the pair that moves a point from the starting latitude-longitude pair to the desired latitude-longitude pair. First, we compute the 3D location of the point from the latitude-longitude pair : Let and be the start and end coordinates. The angle to roll along is given by , which is undefined if the coordinates overlap. If they overlap and any angle works. Otherwise, set . The length of the roll is determined by the angle between the start and ending latitude-longitude pair with respect to the intersection of the line in the - plane between the start and end points and a perpendicular line to the origin. If is the squared - distance between the start and end points, this intersection point is given by If , the intersection point is the origin. If and are the vectors to the starting and ending north poles from this intersection point, then we can use a property of the dot product to calculate the angle , which satisfies .
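The symbolic formulas in the Demonstration's description above did not survive extraction, but the construction it describes can be sketched directly. The C++ fragment below is my own reconstruction of the geometry (not the Demonstration's Mathematica code, and the variable names and the latLonToXYZ helper are mine): it converts the two latitude-longitude pairs to points on a unit sphere, takes the horizontal roll axis perpendicular to the horizontal displacement between them, and measures the rotation angle about that axis, which for a unit sphere equals the length of the shortest straight-line roll.

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 latLonToXYZ(double lat, double lon) {          // radians, unit sphere
    return { std::cos(lat) * std::cos(lon),
             std::cos(lat) * std::sin(lon),
             std::sin(lat) };
}
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static double norm(Vec3 a)        { return std::sqrt(dot(a, a)); }

int main() {
    Vec3 p0 = latLonToXYZ(0.3, -1.2);   // start point (green in the Demonstration)
    Vec3 p1 = latLonToXYZ(-0.8, 0.5);   // target point (red)

    // A straight-line roll rotates the sphere about a fixed horizontal axis u (u.z == 0).
    // Such a rotation can carry p0 to p1 only if u is perpendicular to the
    // horizontal part of the displacement p1 - p0.
    double dx = p1.x - p0.x, dy = p1.y - p0.y;
    double h  = std::hypot(dx, dy);
    if (h < 1e-12) {                    // degenerate case: horizontal positions coincide,
        std::puts("axis direction is not unique for this pair");   // handled specially in the Demonstration
        return 0;
    }
    Vec3 u = { -dy / h, dx / h, 0.0 };  // roll axis; undefined when the points overlap

    // Rotation angle about u: angle between the components of p0 and p1
    // perpendicular to the axis.  For a unit sphere this angle is the roll length.
    Vec3 a0 = { p0.x - dot(u, p0) * u.x, p0.y - dot(u, p0) * u.y, p0.z };
    Vec3 a1 = { p1.x - dot(u, p1) * u.x, p1.y - dot(u, p1) * u.y, p1.z };
    double angle = std::acos(dot(a0, a1) / (norm(a0) * norm(a1)));

    std::printf("roll axis (%.3f, %.3f, 0), roll length %.4f (in sphere radii)\n",
                u.x, u.y, angle);
    return 0;
}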
complicated p-series

find the values of p for which the series is convergent: $\sum_{n=3}^{\infty}\frac{1}{n \ln n [\ln(\ln n)]^p}$ please help!(Worried)

integral test ... $\int_3^{\infty} \frac{1}{x\ln{x}[\ln(\ln{x})]^p} \, dx$ $u = \ln{x}$ $du = \frac{1}{x} \, dx$ $\int_{\ln{3}}^{\infty} \frac{1}{u(\ln{u})^p} \, du$ consider three cases ... p < 1, p = 1, and p > 1

i'm kind of confused because i thought that the integral test was supposed to test for convergence or divergence..but don't you already know it will be convergent somewhere, because it's a p-series? i thought that you couldn't use the evaluation of the integral to tell you where the series actually converges to.. so, should i just evaluate the limit of the improper integral (once i've integrated) and then.. how do the p values i'm supposed to consider come into play? (Thinking)

the "p" in this problem is not the same as the "p" in the series $\sum{\frac{1}{n^p}}$ ... it's just a variable for the exponent.

so, how can you integrate 1/u(ln u)^p with the two variables? i'm just kind of lost.. honestly, i can't claim i understand any of these 'tools' enough to be able to wield them for this kind of complicated stuff.. thanks for your help(Shake)

if p=1 then i know it just comes out to be ln t...but if it's >1 or < 1 then it's the power rule... so how do you test if it's convergent? i know it's not right to just come on here and ask for answers, but i'm kind of struggling, do you think you could walk me through it?

$\int \frac{1}{t^p} \, dt$ converges for $p > 1$, diverges for $p \leq 1$

thank you so much for your help this seems like an awful lot of work if this function/sequence behaves exactly the same way as the p-series(Shake)
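The step the thread leaves implicit is the second substitution that turns the remaining integral into the standard p-integral; written out (my own filling-in of the gap, in the same notation):

$\int_{\ln 3}^{\infty}\frac{du}{u(\ln u)^{p}} \;=\; \int_{\ln(\ln 3)}^{\infty}\frac{dt}{t^{p}}, \qquad t = \ln u,\ dt = \frac{du}{u},$

which converges exactly when $p>1$ (for $p=1$ the antiderivative is $\ln t \to \infty$; for $p<1$, $t^{1-p}/(1-p)\to\infty$). Hence the original series converges if and only if $p>1$.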
Robust exponential stability of interval Cohen-Grossberg neural networks with time-varying delays. (English) Zbl 1198.34151

Summary: The problem of robust exponential stability for a class of interval Cohen-Grossberg neural networks with time-varying delays is investigated. Without assuming boundedness or differentiability of the activation functions, or any symmetry of the interconnection matrices, some sufficient conditions for the existence, uniqueness, and global robust exponential stability of the equilibrium point are derived. Comparisons between the results presented in this paper and previous results show that the new results improve and extend the existing ones. The validity and performance of the new results are further illustrated by two simulation examples.

Editorial remark: There are doubts about a proper peer-reviewing procedure of this journal. The editor-in-chief has retired, but, according to a statement of the publisher, articles accepted under his guidance are published without additional control.

34K20 Stability theory of functional-differential equations
92B20 General theory of neural networks (mathematical biology)
93D09 Robust stability of control systems
MATHEMATICA BOHEMICA, Vol. 127, No. 2, pp. 229-241 (2002)

An introduction to hierarchical matrices

Wolfgang Hackbusch, Lars Grasedyck, Steffen Börm

Wolfgang Hackbusch, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22-26, 04103 Leipzig, Germany, e-mail: wh@mis.mpg.de; Lars Grasedyck, Mathematisches Seminar Bereich 2, Universität Kiel, Hermann-Rodewald-Strasse 3, 24098 Kiel, Germany; Steffen Börm, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22-26, 04103 Leipzig, Germany

Abstract: We give a short introduction to a method for the data-sparse approximation of matrices resulting from the discretisation of non-local operators occurring in boundary integral methods or as the inverses of partial differential operators. The result of the approximation will be the so-called hierarchical matrices (or short $\mathcal{H}$-matrices). These matrices form a subset of the set of all matrices and have a data-sparse representation. The essential operations for these matrices (matrix-vector and matrix-matrix multiplication, addition and inversion) can be performed in, up to logarithmic factors, optimal complexity.

Keywords: hierarchical matrices, data-sparse approximations, formatted matrix operations, fast solvers

Classification (MSC2000): 65F05, 65F30, 65F50, 65N50
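To make "data-sparse" concrete: the basic building block of an $\mathcal{H}$-matrix is an admissible sub-block stored not entry by entry but as a low-rank factorization $A|_b \approx U V^{\top}$ with small rank $k$. The sketch below only illustrates that storage idea and its cheap matrix-vector product; it is not code from the paper, and the class and variable names are mine.

#include <cstddef>
#include <cstdio>
#include <vector>

// A dense m x n block needs m*n numbers; a rank-k factorization U*V^T needs
// only k*(m + n).  Hierarchical matrices store admissible off-diagonal blocks
// in this factored form, which is what makes the overall format "data-sparse".
struct LowRankBlock {
    std::size_t m, n, k;
    std::vector<double> U;   // m x k, column-major
    std::vector<double> V;   // n x k, column-major

    // y += (U * V^T) * x   in O(k*(m+n)) operations instead of O(m*n)
    void applyAdd(const double* x, double* y) const {
        std::vector<double> t(k, 0.0);
        for (std::size_t j = 0; j < k; ++j)          // t = V^T * x
            for (std::size_t i = 0; i < n; ++i)
                t[j] += V[i + j * n] * x[i];
        for (std::size_t j = 0; j < k; ++j)          // y += U * t
            for (std::size_t i = 0; i < m; ++i)
                y[i] += U[i + j * m] * t[j];
    }
};

int main() {
    // Rank-1 example: U = [1 2 3]^T, V = [10 20]^T, so the 3x2 block is U*V^T.
    LowRankBlock b{3, 2, 1, {1.0, 2.0, 3.0}, {10.0, 20.0}};
    double x[2] = {1.0, 1.0}, y[3] = {0.0, 0.0, 0.0};
    b.applyAdd(x, y);                                // V^T x = 30, so y = {30, 60, 90}
    std::printf("%g %g %g\n", y[0], y[1], y[2]);
    return 0;
}

An actual $\mathcal{H}$-matrix then recurses: a block cluster tree decides which blocks are kept dense, which are stored in this low-rank form, and which are subdivided further, giving the near-optimal complexity mentioned in the abstract.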
What is the mass (in grams) of one molecule of H2O? Express your answer using scientific notation with two decimal places.

do you know avagadro's number?

ok do you know the mass number of H2O? if we have a mole of molecules, the total mass is the value of the mass numbers in grams ex. a mole of helium atoms would be 4g so if we wanted to determine the mass of a single helium atom in grams, we could just divide 4g by avagadro's number

mol of O- 16g mol of 2H- 2g mol of H20- 18g
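For completeness, the arithmetic the thread is leading to (using standard values not stated in the thread): divide the molar mass by Avogadro's number, $m(\mathrm{H_2O}) \approx 18.02\ \mathrm{g/mol} \div 6.022\times 10^{23}\ \mathrm{mol^{-1}} \approx 2.99\times 10^{-23}\ \mathrm{g}$. Using the thread's rounder 18 g/mol gives the same answer to two decimal places.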
Sum of divisors

I need to prove that for all positive integers $n$, $\sigma(2n) \ge 2\sigma(n)$ (equivalently, $\sigma(2n)/\sigma(n) \ge 2$).

Use the fact that if $\gcd(m,n)=1,$ then $\sigma(mn)=\sigma(m)\sigma(n).$ If $n$ is odd, then $\sigma(2n)=\sigma(2)\sigma(n)=3\sigma(n)>2\sigma(n).$ Otherwise, $n=2^rm$ where $r\ge1$ and $m$ is odd, so $\sigma(2n)\ =\ \sigma(2^{r+1})\sigma(m)$ $=\ (1+2+2^2+\cdots+2^{r+1})\sigma(m)$ $>\ (2+2^2+\cdots+2^{r+1})\sigma(m)$ $=\ 2(1+2+\cdots+2^r)\sigma(m)$ $=\ 2\sigma(2^r)\sigma(m)$ $=\ 2\sigma(2^rm)$ $=\ 2\sigma(n)$
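A quick numerical spot-check of the inequality (my own addition, not part of the thread):

#include <cstdio>

// naive divisor sum: sigma(n) = sum of all positive divisors of n
static long long sigma(long long n) {
    long long s = 0;
    for (long long d = 1; d * d <= n; ++d) {
        if (n % d == 0) {
            s += d;
            if (d != n / d) s += n / d;
        }
    }
    return s;
}

int main() {
    for (long long n = 1; n <= 20; ++n)
        std::printf("n=%2lld  sigma(2n)=%4lld  2*sigma(n)=%4lld  %s\n",
                    n, sigma(2 * n), 2 * sigma(n),
                    sigma(2 * n) >= 2 * sigma(n) ? "ok" : "FAIL");
    return 0;
}

For example n=4 gives sigma(8)=15 against 2*sigma(4)=14, consistent with the strict inequality proved above.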
Pollock your own noncommutative space

Posted by lieven on Tuesday, 19 May 2009

I really like Matilde Marcolli's idea to use some of Jackson Pollock's paintings as metaphors for noncommutative spaces. In her talk she used this painting and referred to it (as did I in my post) as : Jackson Pollock “Untitled N.3”. Before someone writes a post 'The Pollock noncommutative space hoax' (similar to my own post) let me point out that I am well aware of the controversy surrounding this painting. In fact, I've already told part of the story in Doodles worth millions (or not)? (thanks to PD1). The story involves the people on the right : from left to right, Jackson Pollock, his wife Lee Krasner, Mercedes Matter and her son Alex Matter.

Alex Matter, whose father, Herbert, and mother, Mercedes, were artists and friends of Jackson Pollock, discovered after his mother died a group of small drip paintings in a storage locker in Wainscott, N.Y. which he believed to be authentic Pollocks. Read the post mentioned above if you want to know how mathematics screwed up his plan, or much better, read the article Anatomy of the Jackson Pollock controversy by Stephen Litt.

So, perhaps the painting above was not the smartest choice, but we could take any other genuine Pollock 'drip-painting', a technique he taught himself towards the end of 1946 to make an image by splashing, pouring, sloshing colors onto the canvas. Typically, such a painting consists of blops of paint, connected via thin drip-lines.

What does this have to do with noncommutative geometry? Well, consider the blops as 'points'. In commutative geometry, distinct points cannot share tangent information ((technically : a commutative semi-local ring splits as the direct sum of local rings and this no longer holds for a noncommutative semi-local ring)). In the noncommutative world though, they can!, or if you want to phrase it like this, noncommutative points 'can talk to each other'. And, that's what we cherish in those drip-lines.

But then, if two points share common tangent information, they must be awfully close to each other... so one might imagine these Pollock-lines to be strings holding these points together. Hence, it would make more sense to consider the 'Pollock-quotient-painting', that is, the space one gets after dividing out the relation 'connected by drip-lines' ((my guess is that Matilde thinks of the lines as the action of a group on the points giving a topologically horrible quotient space, and that's precisely where noncommutative geometry shines)).

For this reason, my own mental picture of a genuinely noncommutative space ((that is, the variety corresponding to a huge noncommutative algebra such as free algebras, group algebras of arithmetic groups or fundamental groups)) looks more like the picture below

The colored blops you see are really sets of points which you might view as, say, a FacebookGroup ((technically, think of them as the connected components of isomorphism classes of finite dimensional simple representations of your favorite noncommutative algebra)). Some chatter may occur between two distinct FacebookGroups, the more chatter the thicker the connection depicted ((technically, the size of the connection is the dimension of the ext-group between generic simples in the components)). Now, there are some tiny isolated spots (say blue ones in the upper right-hand quadrant).
These should really be looked at as remote clusters of noncommutative points (sharing no (tangent) information whatsoever with the blops in the foregound). If we would zoom into them beyond the Planck scale (if I'm allowed to say a bollock-word in a Pollock-post) they might reveal again a whole universe similar to the interconnected blops upfront. The picture was produced using the fabulous Pollock engine. Just use your mouse to draw and click to change colors in order to produce your very own noncommutative space! For the mathematicians still around, this may sound like a lot of Pollock-bollocks but can be made precise. See my note Noncommutative geometry and dual coalgebras for a very terse reading. Now that coalgebras are gaining popularity, I really should write a more readable account of it, including some fanshi-wanshi examples...
Sudoku Brain Teaser Solves Difficult Puzzles Take the Sudoku Brain Teaser Challenge! Sudoku is the greatest brain teaser. Love logic? Try it! It's addictive! Who would have thought that solving a Sudoku puzzle by placing the numbers 1 through 9 in a row, column, and a 3x3 matrix would be such a brain teaser. But it is! You see it takes logic to solve a Sudoku puzzle. It is like a riddle. You look for clues in the given puzzle such as I explain in my article on Sudoku Tips. You use logic. Ask yourself, what numbers can I exclude from this cell? You may find a pair, two cells that contain the same two numbers. Let's say 4 and a 6. That is an easy brain tickler. Sudoku offers many brain challenges. Try this one. As you will note, all four cells contain the numbers 2 and 8. One cell of the four also contains the number 1. This brain teaser requires a little logic. If you remove the 1 from the cell that contains three numbers, you get the situation where all four cells would then contain the numbers 2 and 8. The four cells then would make the puzzle unsolvable. There would be two possible solutions for the puzzle. Since a well formed Sudoku puzzle only has one solution, we can conclude that removing the 1 is not possible. When you see this pattern, enter the stand alone number (1) in the cell that contains the three numbers. It will allow you to solve the rest of the puzzle. You will find this brain teaser in more difficult puzzles. I have encountered it only about a half dozen times. For this technique to work, the four cells must reside in only two 3x3 regions. If they are in four 3x3 regions, then they might not work. For more information about this technique, I highly recommend that you buy the book Mensa Guide to Solving Sudoku: Hundreds of Puzzles Plus Techniques to Help You Crack Them All (Mensa) by Peter Gordon. I give Peter credit for naming this technique the Gordonian Rectangle.
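The pattern described above (three cells holding exactly the pair {2,8} and a fourth holding {1,2,8}, with the four cells spanning only two 3x3 boxes) is mechanical enough to check by machine. The C++ sketch below is my own illustration of that check, not code from the book or the site; it assumes a 9x9 grid of candidate bitmasks where bit d set means digit d is still possible in that cell.

#include <cstdio>

// cand[r][c]: bitmask of remaining candidates, bit d (1..9) set if digit d is possible.
using Grid = unsigned short[9][9];

// Look for a "Gordonian rectangle": corners (r1,c1),(r1,c2),(r2,c1),(r2,c2) lying in
// exactly two 3x3 boxes, three corners holding exactly the same candidate pair, and
// the fourth holding that pair plus extras.  To avoid the deadly (two-solution)
// pattern, the pair can be removed from the fourth corner.
static bool findGordonianRectangle(const Grid cand) {
    for (int r1 = 0; r1 < 9; ++r1)
      for (int r2 = r1 + 1; r2 < 9; ++r2)
        for (int c1 = 0; c1 < 9; ++c1)
          for (int c2 = c1 + 1; c2 < 9; ++c2) {
            // exactly two boxes: rows share a band XOR columns share a stack
            if ((r1 / 3 == r2 / 3) == (c1 / 3 == c2 / 3)) continue;
            unsigned short cells[4] = { cand[r1][c1], cand[r1][c2],
                                        cand[r2][c1], cand[r2][c2] };
            int rc[4][2] = { {r1, c1}, {r1, c2}, {r2, c1}, {r2, c2} };
            for (int odd = 0; odd < 4; ++odd) {          // candidate "fourth" corner
                unsigned short pair = cells[(odd + 1) % 4];
                if (__builtin_popcount(pair) != 2) continue;   // GCC/Clang; std::popcount in C++20
                bool threeMatch = true;
                for (int j = 0; j < 4; ++j)
                    if (j != odd && cells[j] != pair) threeMatch = false;
                if (!threeMatch) continue;
                unsigned short extras = cells[odd] & static_cast<unsigned short>(~pair);
                if ((cells[odd] & pair) != pair || extras == 0) continue;
                std::printf("rectangle found: remove the pair from r%dc%d, "
                            "keep only its extra candidate(s)\n",
                            rc[odd][0] + 1, rc[odd][1] + 1);
                return true;
            }
          }
    return false;
}

int main() {
    Grid cand = {};
    for (int r = 0; r < 9; ++r)                 // toy position: everything still open
        for (int c = 0; c < 9; ++c)
            cand[r][c] = 0b1111111110;          // bits 1..9
    unsigned short pair28 = (1u << 2) | (1u << 8);
    cand[0][0] = cand[0][1] = cand[3][0] = pair28;   // three corners: exactly {2,8}
    cand[3][1] = pair28 | (1u << 1);                 // fourth corner: {1,2,8}
    findGordonianRectangle(cand);                    // reports r4c2, i.e. place the 1 there
    return 0;
}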
[RESOLVED] Inheritance Problems November 17th, 2012, 09:34 PM #1 Junior Member Join Date Sep 2012 [RESOLVED] Inheritance Problems How come I can't get this program to run even though derived class Extended_queue inherited from the base class Queue? header: Queue with Extended_queue const int maxqueue = 10; //small value for testing enum Error_code success, fail, range_error, underflow, overflow, fatal, not_present, duplicate_error, entry_inserted, entry_found, internal_error typedef char Queue_entry; class Queue bool empty() const; Error_code serve(); Error_code append(const Queue_entry &item); Error_code retrieve(Queue_entry &item) const; protected: //use with extended classes int count; int front, rear; Queue_entry entry[maxqueue]; //Post: The Queue is initialized to be empty. count = 0; rear = maxqueue - 1; front = 0; bool Queue::empty() const //Post: Return true if the Queue is empty, otherwise return false. return count == 0; Error_code Queue::append(const Queue_entry &item) //Post: item is added to the rear of the Queue. If the Queue is full return an // Error code of overflow and leave the Queue unchanged. if (count >= maxqueue) return overflow; rear = ((rear + 1) == maxqueue) ? 0: (rear + 1); entry[rear] = item; return success; Error_code Queue::serve() //Post: The front of the Queue is removed. If the Queue is empty return an Error_code of underflow. if (count <= 0) return underflow; front = ((front + 1) == maxqueue) ? 0 : (front + 1); return success; Error_code Queue::retrieve(Queue_entry &item) const //Post: The front of the Queue retrieved to the output parameter item. If the Queue // is empty return an Error_code of underflow. if (count <= 0) return underflow; item = entry[front]; return success; class Extended_queue : public Queue //Public to allow visibility between original and extended classes. bool full() const; int size() const; void clear(); Error_code serve_and_retrieve(Queue_entry &item); int Extended_queue::size() const //Post: Return the number of entries in the Extended_queue. return count; header: Runway #include "Queue.h" #include "Plane.h" enum Runway_activity {idle, land, takeoff}; class Runway Runway(int limit); Error_code can_land (const Plane &current); Error_code can_depart (const Plane &current); Runway_activity activity (int time, Plane &moving); void shut_down(int time) const; Extended_queue landing; //uses Extended_queue class Extended_queue takeoff; //uses Extended_queue class int queue_limit; int num_land_requests; // number of planes asking to land int num_takeoff_requests; // nuumber of planes asking to take off int num_landings; // num. of planes that have taken off int num_takeoffs; // num. of planes that have taken off int num_land_accepted; // num. of planes queued to land int num_takeoff_accepted; // number of planes queued to take off int num_land_refused; // number of landing planes refused int num_takeoff_refused; // number of departing planes refused int land_wait; // total time of planes waiting to land int takeoff_wait; // total time of planes waiting to take off int idle_time; // total time runway is idle Runway::Runway(int limit) //Post: The Runway data members are initialized to record no prior Runway use // and to record the limit on queue sizes. 
queue_limit = limit; num_land_requests = num_takeoff_requests = 0; num_landings = num_takeoffs = 0; num_land_refused = num_takeoff_refused = 0; num_land_accepted = num_takeoff_accepted = 0; land_wait = takeoff_wait = idle_time = 0; Error_code Runway::can_land(const Plane &current) /* Post: If possible, the Plane current is added to the landing Queue; otherwise, an Error_code of overflow is returned. The Runway statistics are updated. Uses: class Extended_queue */ Error_code result; if(takeoff.size() < queue_limit) result = takeoff.append(current); result = fail; if (result != success) return result; Error_code Runway::can_depart(const Plane &current) /* Post: If possible, the Plane current is added to the takeoff Queue; otherwise, an Error_code of overflow is returned. The Runway statistics are updated. Uses: class Extended_queue */ Error_code result; if(takeoff.size() < queue_limit) result = takeoff.append(current); result = fail; if (result != success) return result; Runway_activity Runway::activity(int time, Plane &moving) /* Post: If the landing Queue has entries, its front Plane is copied to the parameter moving and a result land is returned. Otherwise, if the takeoff Queue has entries, its front Plane is copied to the parameter moving and a result takeoff is returned. Otherwise, idle is returned. Runway statistics are Uses: class Extended_queue. */ Runway_activity in_progress; land_wait += time - moving.started(); in_progress = land; else if (!takeoff.empty()) takeoff_wait += time - moving.started(); in_progress = takeoff; in_progress = idle; return in_progress; void Runway::shut_down(int time) const //Post: Runway usage statistics are summarized and printed. cout << "Simulation has concluded after " << time << " time units." << endl << "Total number of planes processed " << (num_land_requests + num_takeoff_requests) << endl << "Total number of planes asking to land " << num_land_requests << endl << "Total number of planes asking to take off " << num_takeoff_requests << endl << "Total number of planes accepted for landing " << num_land_accepted << endl << "Total number of planes accepted for takeoff " << num_takeoff_accepted << endl << "Total number of planes refused for landing " << num_land_refused << endl << "Total number of planes refused for takeoff " << num_takeoff_refused << endl << "Total number of planes that landed " << num_landings << endl << "Total number of planes that took off " << num_takeoffs << endl << "Total number of planes left in landing queue " << landing.size( ) << endl << "Total number of planes left in takeoff queue " << takeoff.size( ) << endl; cout << "Percentage of time runway idle " << 100.0 * ((float) idle_time)/((float) time) << "%" << endl; cout << "Average wait in landing queue " << ((float) land_wait)/((float) num_landings) << " time units"; cout << endl << "Average wait in takeoff queue " << ((float) takeoff_wait)/((float) num_takeoffs) << " time units" << endl; cout << "Average observed rate of planes wanting to land " << ((float) num_land_requests)/((float) time) << " per time unit" << endl; cout << "Average observed rate of planes wanting to take off " << ((float) num_takeoff_requests)/((float) time) << " per time unit" << endl; header: Plane using namespace std; enum Plane_status {null, arriving, departing}; class Plane Plane (int flt, int time, Plane_status status); void refuse() const; void land(int time) const; void fly(int time) const; int started() const; int flt_num; int clock_start; Plane_status state; Plane::Plane(int flt, int time, 
Plane_status status) /* Post: The Plane data members flt_num, clock_start, and state are set to the values of the parameters flt, time and status, respectively. flt_num = flt; clock_start = time; state = status; cout << "Plane number " << flt << " ready to "; if (status == arriving) cout << "land." << endl; cout << "take off." << endl; /* Post: The Plane data members flt_num, clock_start, state are set to illegal default values. flt_num = -1; clock_start = -1; state = null; void Plane:: refuse() const //Post: Processes a Plane wanting to use Runway, when the Queue is full. cout << "Plane number " << flt_num; if (state == arriving) cout << " directed to another airport" << endl; cout << " told to try to takeoff again later" << endl; void Plane::land(int time) const //Post: Processes a Plane that is landing at the specified time. int wait = time - clock_start; cout << time << ": Plane number " << flt_num << " landed after " << wait << " time unit" << ((wait == 1) ? "" : "s") << " in the takeoff queue." << endl; void Plane::fly(int time) const int wait = time - clock_start; cout << time << ": Plane number " << flt_num << " took off after " << wait << " time unit" << ((wait == 1) ? "" : "s") << " in the takeoff queue." << endl; int Plane::started() const //Post: Return the time that the Plane entered the airport system. return clock_start; header: random /* Program extracts from Appendix B of "Data Structures and Program Design in C++" by Robert L. Kruse and Alexander J. Ryba Copyright (C) 1999 by Prentice-Hall, Inc. All rights reserved. Extracts from this file may be used in the construction of other programs, but this code will not compile or execute as given here. */ #include <time.h> #include <limits.h> #include <math.h> // Section B.2: class Random { Random(bool pseudo = true); // Declare random-number generation methods here. double random_real(); int random_integer(int low, int high); int poisson(double mean); int reseed(); // Re-randomize the seed. int seed, multiplier, add_on; // constants for use in arithmetic operations // Section B.3: int Random::reseed() Post: The seed is replaced by a pseudorandom successor. seed = seed * multiplier + add_on; return seed; Random::Random(bool pseudo) Post: The values of seed, add_on, and multiplier are initialized. The seed is initialized randomly only if pseudo == false. if (pseudo) seed = 1; else seed = time(NULL) % INT_MAX; //max_int multiplier = 2743; add_on = 5923; double Random::random_real() Post: A random real number between 0 and 1 is returned. double max = INT_MAX + 1.0; //max_int double temp = reseed(); if (temp < 0) temp = temp + max; return temp / max; int Random::random_integer(int low, int high) Post: A random integer between low and high (inclusive) is returned. if (low > high) return random_integer(high, low); else return ((int) ((high - low + 1) * random_real())) + low; int Random::poisson(double mean) Post: A random integer, reflecting a Poisson distribution with parameter mean, is returned. double limit = exp(-mean); double product = random_real(); int count = 0; while (product > limit) { product *= random_real(); return count; the program: #include <iostream> #include "Runway.h" #include "Random.h" using namespace std; void initialize(int &end_time, int &queue_limit, double &arrival_rate, double &departure_rate) /* Pre: The user specifies the number of times unit in the simulation the maximal queue sizes permitted, and the expected arrival and departure rates for the airport. 
Post: The program prints instructions and initializes the parameters end_time, queue limit, arrival_rate, and departure_rate to the specified values. Uses: utility function user_says_yes cout << "This program simulates an airport with only one runway." << endl << "One plane can land or depart in each unit of time." << endl; cout << "Up to what number of planes can be waiting to land " << "or take off at any time? " << flush; cin >> queue_limit; cout << "How many units of time will the simulation run?" << flush; cin >> end_time; bool acceptable; cout << "Expected number of arrivals per unit time?" << flush; cin >> arrival_rate; cout << "Expected number of departures per unit time?" << flush; cout << "Expected number of departures per unit time?" << flush; cin >> departure_rate; if (arrival_rate < 0.0 || departure_rate < 0.0) cout << "These rates must be nonnegative." << endl; acceptable = true; if(acceptable && arrival_rate + departure_rate > 1.0) cout << "Safety Warning: This airport will become saturated." << endl; void run_idle(int time) //Post: The specified time is printed with a message that the runway is idle. cout << time << ": Runway is idle." << endl; int main( ) // Airport simulation program /* Pre: The user must supply the number of time intervals the simulation is to run, the expected number of planes arriving, the expected number of planes departing per time interval, and the maximum allowed size for runway Post: The program performs a random simulation of the airport, showing the status of the runway at each time interval, and prints out a summary of airport operation at the conclusion. Uses: Classes Runway, Plane, Random and functions run_idle, initialize. */ int end_time; //time to run simulation int queue_limit; //size of Runway queues int flight_number = 0; double arrival_rate, departure_rate; initialize(end_time, queue_limit, arrival_rate, departure_rate); Random variable; Runway small_airport(queue_limit); for (int current_time = 0; current_time < end_time; current_time++) { //loop over time intervals int number_arrivals = 1;//variable.poisson(arrival_rate); //current arrival requests for (int i = 0; i < number_arrivals; i++) Plane current_plane(flight_number++, current_time, arriving); if (small_airport.can_land (current_plane) != success) int number_departures = variable.poisson(departure_rate); //current dep. req. for (int j = 0; j < number_departures; j++) Plane current_plane (flight_number++, current_time, departing); if(small_airport.can_depart(current_plane) != success) Plane moving_plane; switch (small_airport.activity(current_time, moving_plane)) { //Let at most one Plane onto the Runaway at current_time. 
case land: case takeoff: case idle: Produced Errors: PHP Code: Runway.h||In member function 'Error_code Runway::can_land(const Plane&)':| Runway.h|51|error: no matching function for call to 'Extended_queue::append(const Plane&)'| Queue.h|42|note: candidates are: Error_code Queue::append(const Queue_entry&)| Runway.h||In member function 'Error_code Runway::can_depart(const Plane&)':| Runway.h|71|error: no matching function for call to 'Extended_queue::append(const Plane&)'| Queue.h|42|note: candidates are: Error_code Queue::append(const Queue_entry&)| Runway.h||In member function 'Runway_activity Runway::activity(int, Plane&)':| Runway.h|94|error: no matching function for call to 'Extended_queue::retrieve(Plane&)'| Queue.h|64|note: candidates are: Error_code Queue::retrieve(Queue_entry&) const| Runway.h|102|error: no matching function for call to 'Extended_queue::retrieve(Plane&)'| Queue.h|64|note: candidates are: Error_code Queue::retrieve(Queue_entry&) const| Runway.h|105|error: cannot convert 'Extended_queue' to 'Runway_activity' in assignment| ||=== Build finished: 5 errors, 0 warnings ===| Re: Inheritance Problems In Runway::can_land(const Plane &current), you have the following: result = takeoff.append(current); takeoff is an object of type Extended_queue, which inherits append() from Queue. Looking at the declaration of Queue::append(): typedef char Queue_entry; Error_code append(const Queue_entry &item); append() is expecting a const char& and you are calling it with a const Plane&. You should declare your classes in the header file and define them in the CPP source file. The only exception to this are template classes/functions or any member functions that you would prefer the compiler to inline. Re: Inheritance Problems It seems like on the main program on the part: Plane current_plane(flight_number++, current_time, arriving); if (small_airport.can_land (current_plane) != success) where current_plane is the object being passed into a Runway function that takes Plane references. The reference is passed into the Queue function append() that takes a constant char reference. Error_code Queue::append(const Queue_entry &item) //Post: item is added to the rear of the Queue. If the Queue is full return an // Error code of overflow and leave the Queue unchanged. if (count >= maxqueue) return overflow; rear = ((rear + 1) == maxqueue) ? 0: (rear + 1); entry[rear] = item; return success; That reference item, which supposedly a char, is stored into the next index of array entry. When it does, it will return the enum value of success, which I think is 1... Since the whole structure of the program is so complex, is there a way where I can convert the current_plane to a char type or somehow write on the main.cpp that can satisfy the if statement mentioned above? Re: Inheritance Problems I have solved my own problems (once again). My options were to overload the append() so that it takes Plane object ref. but then another solution came to my head and I'm happy about it. I have made a personal update log for this little hugely complicated program to show what I have learned and solved. Also special thanks to @ttrz for pointing out the problem and the coding advice. Program: Airport Simulation -Airport Sim.cpp % 11/18/2012 % -Source code -Airport Sim.cpp 88: enum Runway_activity takeoff has been confused with Extended_queue object takeoff. The enum type has been changed to RWA_takeoff. 
4: The change was made to reflect the change in 88: in Airport Sim.cpp 1-2, 82: Inclusion guards have been added to prevent anymore compiling errors. - This file was copied from the original Queue.h 10: typedef char Node_entry has been removed because it doesn't match when passing in a Plane object as parameter for Extended_queue functions. - File-wide changes -- All Queue functions now take parameters of Plane object references. - The top of this file has included headers: time, limits, and math - Class member declarations were made 47, 58: max_int has been changed to a provided macro of limits.h, INT_MAX
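For readers hitting the same wall: the root cause flagged in the replies is that Queue stores Queue_entry (a typedef for char) while the airport code needs it to store Plane objects. Besides the poster's fix of changing every Queue function to take Plane references, the conventional C++ alternative is to make the container a class template so the element type is a parameter. The sketch below is my own illustration of that option, not code from the thread (the full Error_code enum and the Extended_queue layer are omitted for brevity):

#include <cstddef>
#include <cstdio>

enum Error_code { success, overflow, underflow };

template <typename Entry, std::size_t maxqueue = 10>
class Queue {
public:
    Queue() : count(0), front(0), rear(maxqueue - 1) {}
    bool empty() const { return count == 0; }
    Error_code append(const Entry &item) {          // add item at the rear
        if (count >= maxqueue) return overflow;
        rear = (rear + 1) % maxqueue;
        entry[rear] = item;
        ++count;
        return success;
    }
    Error_code serve() {                            // remove the front entry
        if (count == 0) return underflow;
        front = (front + 1) % maxqueue;
        --count;
        return success;
    }
    Error_code retrieve(Entry &item) const {        // copy out the front entry
        if (count == 0) return underflow;
        item = entry[front];
        return success;
    }
protected:
    std::size_t count, front, rear;
    Entry entry[maxqueue];                          // Entry needs a default constructor
};

// The runway would then hold queues of planes directly, e.g.
//   Queue<Plane> landing, takeoff;
// and Extended_queue could likewise become a template deriving from Queue<Entry>.

int main() {
    Queue<int, 3> q;                                // quick smoke test with int entries
    q.append(7);
    q.append(9);
    int x = 0;
    if (q.retrieve(x) == success) std::printf("front = %d\n", x);   // prints 7
    return 0;
}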
NAG Library Routine Document

1 Purpose

E01SAF generates a two-dimensional surface interpolating a set of scattered data points, using the method of Renka and Cline.

2 Specification

SUBROUTINE E01SAF ( M, X, Y, F, TRIANG, GRADS, IFAIL)
INTEGER M, TRIANG(7*M), IFAIL
REAL (KIND=nag_wp) X(M), Y(M), F(M), GRADS(2,M)

3 Description

E01SAF constructs an interpolating surface $F\left(x,y\right)$ through a set of $m$ scattered data points $\left({x}_{\mathit{r}},{y}_{\mathit{r}},{f}_{\mathit{r}}\right)$, for $\mathit{r}=1,2,\dots ,m$, using a method due to Renka and Cline. In the $\left(x,y\right)$ plane, the data points must be distinct. The constructed surface is continuous and has continuous first derivatives.

The method involves firstly creating a triangulation with all the data points as nodes, the triangulation being as nearly equiangular as possible (see Cline and Renka (1984)). Then gradients in the $x$- and $y$-directions are estimated at node $r$, for $\mathit{r}=1,2,\dots ,m$, as the partial derivatives of a quadratic function of $x$ and $y$ which interpolates the data value ${f}_{r}$, and which fits the data values at nearby nodes (those within a certain distance chosen by the algorithm) in a weighted least squares sense. The weights are chosen such that closer nodes have more influence than more distant nodes on derivative estimates at node $r$. The computed partial derivatives, with the ${f}_{r}$ values, at the three nodes of each triangle define a piecewise polynomial surface of a certain form which is the interpolant on that triangle.

See Renka and Cline (1984) for more detailed information on the algorithm, a development of that by Lawson (1977). The code is derived from Renka (1984). The interpolant can subsequently be evaluated at any point inside or outside the domain of the data by a call to . Points outside the domain are evaluated by extrapolation.

4 References

Cline A K and Renka R L (1984) A storage-efficient method for construction of a Thiessen triangulation Rocky Mountain J. Math. 14 119–139
Lawson C L (1977) Software for ${C}^{1}$ surface interpolation Mathematical Software III (ed J R Rice) 161–194 Academic Press
Renka R L (1984) Algorithm 624: triangulation and interpolation of arbitrarily distributed points in the plane ACM Trans. Math. Software 10 440–442
Renka R L and Cline A K (1984) A triangle-based ${C}^{1}$ interpolation method Rocky Mountain J. Math. 14 223–237

5 Parameters

1: M – INTEGER Input
2: X(M) – REAL (KIND=nag_wp) array Input
3: Y(M) – REAL (KIND=nag_wp) array Input
4: F(M) – REAL (KIND=nag_wp) array Input
5: TRIANG($7×{\mathbf{M}}$) – INTEGER array Output
6: GRADS($2$,M) – REAL (KIND=nag_wp) array Output
7: IFAIL – INTEGER Input/Output

6 Error Indicators and Warnings

If on entry , explanatory error messages are output on the current error message unit (as defined by

Errors or warnings detected by the routine:
On entry, ${\mathbf{M}}<3$.
On entry, all the (X,Y) pairs are collinear.
On entry, $\left({\mathbf{X}}\left(i\right),{\mathbf{Y}}\left(i\right)\right)=\left({\mathbf{X}}\left(j\right),{\mathbf{Y}}\left(j\right)\right)$ for some $i\ne j$.

7 Accuracy

On successful exit, the computational errors should be negligible in most situations but you should always check the computed surface for acceptability, by drawing contours for instance. The surface always interpolates the input data exactly.

The time taken for a call of E01SAF is approximately proportional to the number of data points, $m$.
The routine is more efficient if, before entry, the values in are arranged so that the array is in ascending order. 9 Example This example reads in a set of data points and calls E01SAF to construct an interpolating surface. It then calls to evaluate the interpolant at a sample of points on a rectangular grid. Note that this example is not typical of a realistic problem: the number of data points would normally be larger, and the interpolant would need to be evaluated on a finer grid to obtain an accurate plot, say. 9.1 Program Text 9.2 Program Data 9.3 Program Results
Shared Intentions I would love to see conversation get going again - I just recently discovered Stephen Perrella and think he'd have a lot more to offer if he were still with us. (found on message ## 22431… + From: "Bryan A. Alexander" + Date: Fri, 19 Jan 1996 18:57:10 -0500 (EST) This might return us to Spinoza; see Curley's SPINOZA'S GEOMETRICAL METHOD. Bryan Alexander Department of English University of Michigan On Fri, 19 Jan 1996, Friedman, Howard J. wrote: > > I would like to expand on this topic please. how geometry excludes the > > subject and then we can talk about anexact geometry in deleuzeoguattari. > > > > s.perrella > > architect > You'll have to excuse me here. I only studied Euclidean geometry, and that > was in high school (some 20 years ago). I've since read a little of > Mandlebrot(?) but not very much. And a book of Rene Thom on catastrophe > theory. > So the reason i suggested that (Euclidean) geometry excludes the "subject" > is simply because it relates to structures: lines, planes, triangles, > squares, rhombi, etc. I don't see any room here for a "subject". (Points are > also Euclidean, i think, but they don't seem to represent the main thrust of > traditional geometry) > Other mathematical notions do seem to apply, however. I'm not sure, at this > point, if i'd like to class the subject as an "unreal number" (such as the > square root of a negative number) or as a real number with an infinite > decimal. > I'm throwing this back to you because, as you can see, my ignorance is > great. Please enlighten, if you can. Thanks. > Howie Ambling about the music blogs, and then crossing over to w.a.s.t.e central, one uncovers the song Radiohead debuted in Dublin last week, "Super Collider". Via Deaf Indie Elephants and their "exclusive" (meaning them and everyone else checking the radiohead site?) post about the portishead cover. I like the latest version on w.a.s.t.e central, also on ...got it, poor guy. "Sick Love" O Love, be fed with apples while you may, And feel the sun and go in royal array, A smiling innocent on the heavenly causeway, Though in what listening horror for the cry That soars in outer blackness dismally, The dumb blind beast, the paranoiac fury: Be warm, enjoy the season, lift your head, Exquisite in the pulse of tainted blood, The shivering glory not to be despised. Take your delight in momentariness, Walk between dark and dark--a shining space With the grave's narrowness, though not its peace. A nice find: Robert Graves: The Lasting Poetic Achievement (comments on "Sick Love") I once helped co-author an article on "Baby Satyrs". It was a delightfully irreverent account of the fey equine toddlers and their drinking habits. It was quickly flagged as completely spurious bullshit (which it was). But I then spent the next day or two doing extensive research and fact checking and wrote an actual article with sources and pictures on Baby Satyrs and their role in art and myth since ancient times. it was a damn fine article, and it proudly remained standing in its own until some yutz with an overactive sense of taxonomy merged it into the "Satyr" article. So I can at least claim to be the origin of that entire section, which seems to have turned out well (the Satyr article was in sorry shape when mine got mashed in). interestingly, in an example of how the internet is becoming our infinite memory, I was able to find the history page of the original article, which ultimately produced the original content of the first drunken stab (reproduced below). 
But it just struck me as very cool--using the history pages, I could trace how a drunken spat of nonsense turned into a semi-authoritative encyclopedia entry edited and added to by multiple wikipedians. Not sure if this boosts or diminishes my view of Wikipedia's credibility, but it sure is neat to realize that it really works that way. Now, for some chinese rotgut-inspired mythological ranting: Revision as of 17:16, 20 January 2005 Birth and Description Baby satyrs (presatyricus horiniciae) are a subspecies of satyrs (satyricus horniciae) produced via transubstantiation during the bacchanal celebrations following severe head trauma to revelers. Generally, copious amounts of alcohol ingestion by parent are a necessary precursor to baby satyr production. Upon birth, the baby satyr will generally be at least as drunk as the individual who spawned them, perhaps due to their low body weight and high rate of metabolism. Most baby satyrs are merry drunks, and are generally expected by social convention to share from their bottomless wine jugs, which they carry upon emerging from the individual's aural canal. However, it is rumoured that some baby satyrs spawned in the orgiastic celebrations of the Yanomano tribes of South America can be very mean drunks, and while they share wine with the victorious tribesman of a recent conflict, they may repeatedly headbutt inquisitive anthropologists in the groin. Basic Principles of Baby Satyr Mutualism The vast majority of baby satyrs, aka horned babies, gladly share their lifeblood, or jug wine, with fellow revelers of all species. It is rumored that there exists a hoof-fondling procedure which will result in the complete remittance of a baby satyr's jug of wine. However, this procedure is not well understood and might possibly be an old wives' tale. The basic procedure for procuring swigs of wine from baby satyrs involves fondling their budding horn nubs. The baby satyr will reflexively lift the jug in front of him or her and slip into a trance-like state, at which time the dipsomaniac must swiftly lift both hands off the horn buds and grab the jug. The dipso must pour the wine quickly into his or her mouth before the satyr grins and grabs the jug back. If the reveler refuses to acquiesce to the satyrs' wishes at this time, she might receive a swift headbutt to the groin. Baby satyrs are merry but selfish with their booze! This explains so much about the "Trauma Center" video games! On Pink Tentacle, via Forbidden Music Ostensibly about bird song, it brings in topics such as social cues, context-driven gene expression, mating behavior, and attention. Science is yummy. However, this hypothesis predicts that directed–undirected song differences, despite their subtlety, should matter to female zebra finches, which is a question that has not been investigated. We tested female preferences for this natural variation in song in a behavioral approach assay, and we found that both mated and socially naive females could discriminate between directed and undirected song—and strongly preferred directed song. These preferences, which appeared to reflect attention especially to aspects of song variability controlled by the AFP, were enhanced by experience, as they were strongest for mated females responding to their mate's directed songs. 
We then measured neural activity using expression of the immediate early gene product ZENK, and found that social context and song familiarity differentially modulated the number of ZENK-expressing cells in telencephalic auditory areas. Specifically, the number of ZENK-expressing cells in the caudomedial mesopallium (CMM) was most affected by whether a song was directed or undirected, whereas the caudomedial nidopallium (NCM) was most affected by whether a song was familiar or unfamiliar. Together these data demonstrate that females detect and prefer the features of directed song and suggest that high-level auditory areas including the CMM are involved in this social Woolley SC, Doupe AJ (2008) Social Context–Induced Song Variation Affects Female Behavior and Gene Expression. PLoS Biol 6(3): e62 doi:10.1371/journal.pbio.0060062 Paul Davies, a physicist at Arizona State University, wrote an op-ed piece in yesterday's times taking physicists to task for the manner in which they have either accepted certain base assumptions in physics as being inviolate, or having constructed elaborate meta-explanations that further confuse matters. While I agree with the conclusions Davies reaches in the last two paragraphs of his essay, I took some issue with how he got there. Follows is a conversation from my news group, in which I put up an initial response which was in turn responded to, and my recent riposte. I like what I wrote, but if more comes of it, I'll post it as well. at first I thought this was interesting, but I think the arguments are a bit flawed, he makes some comments that are simply handwaving. It really seems like he is equivocating until the very last few sentences...I would have liked to hear his conclusions stated more up front. The use of phrases like "meaningless jumble of odds and ends haphazardly juxtaposed" and "just any old ragbag of rules" undermines his argument because they are essentially meaningless, or worse yet, suggest either the presence of something that has lead to this indeterminate state or a complete nihilism. They also appeal to something like a vlugar sentiment, "oh, haphazard jumbles or old ragbags are bad things that can't be studied". maybe part of the problem is being unwilling to look at those things as possibly interesting phenomena, themselves. But he says that if the laws were "any old ragbag", life would certainly not exist. If he means we shouldn't treat the laws as inviolate, inscrutable, and utterly holy, then fine. but what a weird way to express it. I also think he ignores some very significant points in philosophy, such as Hume's arguments against making such assumptions, including the assumption of causality. Ultimately, we have to accept that these assumptions are part of the limits on human reason, and when they are violated, that gives us a hint that a) there is something new and more interesting to study, and b) assumptions of causality are not necessarily going to hold because there are always possible elements of which we are not aware. really, i think davies' essay should be asking a somewhat different question - not how can we make an internally consistent set of physical laws, but rather, what is it that makes our laws take this form? as in, these are products of human reason operating on human observations, and the perspective we are working from is inevitably the human-eye view, not the god's-eye-view. 
the laws of physics should be treated as the human-laws-of-physics, and I think at that point, the answer to why these laws hold might become a little more clear. ultimately, i think the problem of faith in science and religion can be understood more clearly in these terms, as both are rooted in the limits of human ability. The response (left anonymous): No amount of reading Hume is going to change what basic science is. Science is science. Even if physicists were to drop everything to ask why it is that their rules are the rules that they are, at the end of the day they're still going to be stuck with some set of rules and testable hypotheses that they can get paid to test. Also, he's not asking "how can we make an internally consistent set of physical laws". He is questioning the basic assumption that science has some kind of ultimate say in the way things are. Also, I think in the context of this piece "meaningless jumble of odds and ends haphazardly juxtaposed" is perfectly meaningful. I also don't think saying "haphazard jumbles and old ragbags can't be studied" is entirely vulgar. Science is the study of the natural world and in the natural world we don't encounter the haphazard jumbles or old ragbags that Davies is referring to. Of course that could just be a bad Here's what I found interesting: one implicit assumption that Davies seems to make is that science is concerned with finding out the Truth. Depending on who you ask you will probably get different opinions, right? I personally think that science (as a thing, or methodology, or entity) is NOT concerned with the truth. The fact that you can publish papers in Nature and Science with p<.05 should be some indication of that. There's always a margin of error. And there's always the possibility that some discovery will invalidate your study. But life goes on. And my more extensive riposte: I think you may have misread my response and I, at least, have a very different reading than you of Davies' claims. First - I didn't make the claim or hope to imply that reading any Hume changes what constitutes basic science - however ever since Hume first made his case about causality in the 18th century, science has had to deal with the very serious problem he raised - that no matter how much you observe about a phenomenon, you cannot know of all the causal relationships at work in creating what you've seen. You make this same point at the end of your response - nature publishes studies with p<.05, which means there is a statistically possible alternative occurence, even if we are extremely confident that the data we find is valid and that it supports an interpretation we offer. and of course, there is the old saw of "correlation does not imply causation". This is a very important aspect of any scientific work - that no matter how much we observe and test, we are probably missing something and have to be willing to accept that possibility - it's the blessing and curse of ceteris paribus. There have been many scientific and philosophical examinations of this situation and the role it has in our ability to conduct useful and effective science. But this does not mean we should throw our hands up, bend our heads and carry on sadly accepting that "science is science". I think that is almost cynical - science is in constant flux, constantly questioning its assumptions, going through upheavals and revolutions, etc. 
Davies seems to insist that scientists are in fact doing just that, to their own harm - they are not trying to create "an explanation from within the universe and not involve appealing to an external agency." I think he pretty clearly states, in those last two paragraphs, that he does hope for internal consistency for physical laws. He also states pretty clearly that his issue is with scientists claiming that their explanations are free of faith - I don't really get the impression that he is calling science-as-ultimate-explanation a canard. Again, this is borne out by the last two paragraphs, which I think are mostly on the mark. Davies's use of those phrases that I took issue with seems to be rooted in his frustration with the possibility that a nihilistic streak is becoming the base assumption behind the scientific endeavor.

But I also think it is incorrect to use the term "meaningless" in the first instance - I think there is a solid case to be made that the universe is not intrinsically "meaningful", but is only so with regard to organisms in it that can deal with "meaning". One might think this is esoteric philosophizing, but I think it is entirely relevant to science, particularly with regard to intentional phenomena and information theory, to give just two examples. Also, the term haphazard implies some kind of agency engaging in a careless behavior, and the alternative is thus a meaningful, careful arrangement of the universe by ____. This is surely not what Davies wants to suggest is the right thing for scientists to think - and he explicitly says so at the end. I think it is perfectly possible to look at the universe as a meaningless hodgepodge of things haphazardly juxtaposed because that may just be what it is - that is in fact a question science is equipped to examine and answer. When you say that in nature we don't encounter haphazard jumbles or old ragbags, you're right, because that is not a very articulate or rigorous way to refer to the things we see - so instead we use nice phrases like "emergence", "chaos", and "non-linear dynamic systems".

Davies ultimately says that the laws of physics are themselves something that is subject to examination in a scientific context, so that we can work away from any turtles-all-the-way-down type explanations. I think that this is a fruitful domain of future research, where physicists will have to join with cognitive scientists (not a self-plug - this has already been happening, as in the work of Roger Penrose) to examine those rules within which they are operating, how they arise, and how they are part of the process by which physics is done (or, as Davies says, are regarded, along with the universe, as "part and parcel of a unitary system"). What I think Davies ultimately wants to say is: stop trying to pull the physical laws of the universe out of the universe, which I think is something we can all agree is an entirely appropriate thing for physics to do.
Wolfram Demonstrations Project

Compound Interest Table

These kinds of tables for looking up compound interest for various factors are very commonly used in engineering economics. Some definitions:

i = effective interest rate per unit period (normally one year)
n = number of interest periods
P = a present sum of money
F = a future sum of money equivalent to P but n interest periods from the present at interest rate i
A = an end-of-period cash receipt or disbursement in a uniform series continuing for n periods, with the entire series equivalent to P or F at interest rate i
G = uniform period-by-period increase or decrease in cash receipts or disbursements; the arithmetic gradient

A single payment is a one-time investment of P compounded at interest rate i for n periods that will reach a value F. Or you can use the inverse, a discount factor, expressing the portion of a desired final value F that must be invested to grow to F in n periods when invested at i percent.

A uniform payment means that instead of a one-time investment, equal amounts A are paid into a fund that compounds at interest rate i for each of the n periods. "A given F" gives the portion of the desired future accumulated amount that must be contributed at each time period to reach the desired goal. "A given P" shows the amount that must be paid to amortize an amount borrowed over the same period, with interest paid rather than earned on the outstanding balance (for example, a conventional mortgage). The inverses of those quantities are "F given A" and "P given A". Notice that "F given A" shows the value of the accumulated sum of an amount A, saved in each of n periods and compounding at interest rate i. On the other hand, "P given A" is the present value of an amount that can be paid off by making payments of amount A. For example, the "P given A" factor for 30 years at 5% is 15.3725, so payments of $12,000 a year would pay off a loan of 12000 × 15.3725 = $184,470 at that rate over that period. The "F given A" factor of 66.4388 means that the same $12,000 a year contributed to, for example, a retirement savings account earning 5% would grow in 30 years to $797,265.60.

The arithmetic gradient series shows accumulations and amortizations possible by using increasing (rather than level) payments, where the amount of the payment starts at zero but changes by G dollars per year. Otherwise, this part of the Demonstration should be interpreted like the level payment accumulation and amortization factor series. Note that series of payments or contributions that start above zero and then increase at a fixed rate can be calculated as the sum of two terms: a level payment series for the initial amount and a gradient series for the increases.

Reference: T. G. Eschenbach, D. G. Newnan, and J. P. Lavelle, Engineering Economic Analysis, New York: Oxford Univ. Press, 2004.
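As a quick check of the figures quoted above, the standard closed forms for the uniform-series factors can be evaluated directly. This is a minimal Python sketch (not part of the original Demonstration); the factor names follow the "X given Y" wording used in the text.

    def P_given_A(i, n):
        """Uniform-series present-worth factor (P/A, i, n)."""
        return (1 - (1 + i) ** -n) / i

    def F_given_A(i, n):
        """Uniform-series compound-amount factor (F/A, i, n)."""
        return ((1 + i) ** n - 1) / i

    i, n, A = 0.05, 30, 12_000
    pa = round(P_given_A(i, n), 4)   # 15.3725, the table value quoted above
    fa = round(F_given_A(i, n), 4)   # 66.4388
    print(A * pa)                    # 184470.0 -> loan paid off by $12,000/yr over 30 yr at 5%
    print(A * fa)                    # 797265.6 -> balance after saving $12,000/yr for 30 yr at 5%

The dollar amounts match the article because the 4-decimal table factors are used; computing from the unrounded factors shifts the results by a dollar or so.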
Frustum Pyramid Volume to Design Oil Containment System

I have attached a Word document showing an image and some typed-out simple calculations. I am working on an oil containment system for a new building my employer is going to be building soon. It's pretty simple: in the middle of the floor we put a catch basin with a 24"x24" grate. This catch basin is to be 4" below the floor, so the perimeter of the oil containment area will be 4" above the catch basin (obviously).

Am I correct in assuming that the frustum pyramid volume formula is the correct way to calculate the volume of oil or water the sloped floors could hold, assuming the catch basin is already full to the top of the grate? Does a frustum pyramid have to have the same shape for its base and top, or could it be a hexagon on the bottom and a square on top like in this case?

I would appreciate some feedback on my math in the Word document; the formula is just a basic frustum pyramid volume formula I remember from college. My answer seems quite large to me though. Feel free to move this to the Math section if you wish; I was thinking of putting it there, but it is a design question as well.
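The frustum volume formula mentioned in the post only needs the top area, the bottom area, and the depth, so it is easy to script as a numerical check. The sketch below is a minimal Python illustration, not the poster's attached calculation: the 24" x 24" grate and 4" depth come from the post, while the floor perimeter dimensions are made-up placeholders since the attachment is not available. Also note the formula is exact only when the top and bottom cross-sections are similar shapes; for a hexagonal perimeter tapering to a square grate it is an approximation, and the general prismatoid formula V = h/6 * (A_top + 4*A_mid + A_bottom) may be a better fit.

    import math

    def frustum_volume(a_top, a_bottom, depth):
        # V = h/3 * (A1 + A2 + sqrt(A1*A2)), exact for similar top/bottom sections
        return depth / 3.0 * (a_top + a_bottom + math.sqrt(a_top * a_bottom))

    a_grate = 24.0 * 24.0          # in^2, from the post
    depth = 4.0                    # in, from the post
    a_floor = 120.0 * 120.0        # in^2, placeholder perimeter area (not from the post)

    v_in3 = frustum_volume(a_grate, a_floor, depth)
    print(f"{v_in3:.0f} in^3 = {v_in3 / 231.0:.1f} US gal")   # 231 in^3 per US gallon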
Chemistry Help: Wavelength and Energy Levels

02-01-2011, 09:45 AM

Hey guys. My chem teacher never really helped us with this; she only briefly went over this crap in my previous classes, but now I'm asked to do these all the time and I can't figure them out for the life of me. Can you guys help? I try to do them but get nothing close to what the answer actually is. I realize they may be a lot of work, and I give big thanks and appreciation in advance to those who help and put in an effort.

1. What is the wavelength in nanometers of electromagnetic radiation which has a frequency of 6.40 x 10^14 /s? The speed of electromagnetic radiation is 3.00 x 10^8 m/s (a constant, I'm guessing). Answer: 469 nm

2. Calculate the energy of a bluish-green line in the hydrogen spectrum with a wavelength of 486 nm. Answer: 4.09 x 10^-19 J

3. What is the wavelength in nanometers of light emitted when the electron in a hydrogen atom undergoes a transition from energy level n = 6 to n = 3? Answer: 1095 nm
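All three follow from c = λν, E = hc/λ, and the Rydberg formula. A quick Python check with standard constants (not supplied in the post) reproduces the listed answers:

    h = 6.626e-34    # Planck constant, J*s
    c = 3.00e8       # speed of light, m/s
    R = 1.097e7      # Rydberg constant, 1/m

    # 1. wavelength from frequency: lambda = c / nu
    print(c / 6.40e14 * 1e9)         # ~469 nm

    # 2. photon energy of the 486 nm line: E = h*c / lambda
    print(h * c / 486e-9)            # ~4.09e-19 J

    # 3. hydrogen n = 6 -> n = 3 transition: 1/lambda = R * (1/3^2 - 1/6^2)
    inv_lam = R * (1 / 3**2 - 1 / 6**2)
    print(1 / inv_lam * 1e9)         # ~1094 nm (the listed 1095 nm reflects rounding of the constants)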
Excel/Visual Basic Array Multiplication

Hi everyone, I've been given the assignment to do some matrix calculations in Excel using the built-in developer VBA editor. The goal of this program is to take the "Inner Product" of two arbitrary matrices provided in the Excel spreadsheet by creating a function to use. This function multiplies each cell with the corresponding cell in the second matrix. For example, the multiplication of the following two matrices:

    [A, B]   [E, F]   [AE, BF]
    [C, D] * [G, H] = [CG, DH]

The function is supposed to multiply two arbitrary matrices even if the dimensions of the matrices do not match one another. The function must take the common dimensions of each and output the products into a new block of cells as a new matrix.

I had a difficult time wrapping my head around this assignment. What I decided to do is create two different arrays for the two matrices. I counted the number of rows and columns for each array and took the lesser amount of rows and columns between the two matrices. After that, I tried to multiply each cell by the corresponding cell in the second array. When all of this was over, however, I found myself looking at the dreaded #VALUE! error in Excel. The function was running into some problems and I could not pinpoint at which point the function was derailed. Here is the function:

    Function InnerProduct(MatrixRange1, MatrixRange2) As Double
        Dim m As Integer, n As Integer
        Dim i As Integer, j As Integer
        Dim aRows As Integer, aCols As Integer
        Dim bRows As Integer, bCols As Integer
        Dim A() As Variant, B() As Variant, C() As Variant

        aRows = MatrixRange1.Rows.Count
        aCols = MatrixRange1.Columns.Count
        bRows = MatrixRange2.Rows.Count
        bCols = MatrixRange2.Columns.Count

        If aRows > bRows Then
            i = bRows
        Else
            i = aRows
        End If
        If aCols > bCols Then
            j = bCols
        Else
            j = aCols
        End If

        ReDim A(i, j) As Variant, B(i, j) As Variant, C(i, j) As Variant
        For m = 1 To i
            For n = 1 To j
                A(m, n) = MatrixRange1.Cells(m, n)
                B(m, n) = MatrixRange2.Cells(m, n)
                C(m, n) = A(m, n) * B(m, n)
            Next n
        Next m
        InnerProduct = C(m, n)
    End Function

If anyone has any input on where I might look to fix this problem, it would be GREATLY, GREATLY appreciated. My professor has given two example matrices to test out the function:

    [ 5, -4]   [ 1,  3]
    [-7,  1]   [ 4,  9]
    [ 8, -2] * [ 8, -5]
    [ 4,  3]   [ 3,  6]
    [ 9,  1]   [-5,  4]

Problem #1 has also been attached.
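Two things in the posted function are likely culprits for the #VALUE! error: the function is declared As Double, so it can only return a single number even though the result is a whole matrix, and C(m, n) is read after the loops have finished, when m and n have already been incremented past the last row and column of C. Declaring the function As Variant, returning the whole array C, and entering it on the worksheet as an array formula is the usual remedy. The element-wise logic itself is easy to sanity-check outside Excel; the sketch below is plain Python (not the VBA assignment) applied to the professor's example matrices.

    def elementwise_product(a, b):
        """Multiply matching cells over the common (smaller) dimensions of a and b."""
        rows = min(len(a), len(b))
        cols = min(len(a[0]), len(b[0]))
        return [[a[r][c] * b[r][c] for c in range(cols)] for r in range(rows)]

    A = [[5, -4], [-7, 1], [8, -2], [4, 3], [9, 1]]
    B = [[1, 3], [4, 9], [8, -5], [3, 6], [-5, 4]]
    for row in elementwise_product(A, B):
        print(row)
    # [5, -12]
    # [-28, 9]
    # [64, 10]
    # [12, 18]
    # [-45, 4]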
Seamless Cube Map Filtering

Modern GPUs filter seamlessly across cube map faces. This feature is enabled automatically when using Direct3D 10 and 11, and in OpenGL when using the ARB_seamless_cube_map extension. However, it's not exposed through Direct3D 9, and it's just not available in any of the current generation consoles. There are several solutions for this problem. Texture borders solve it elegantly, but are not available on all hardware, and only exposed through the OpenGL API (and proprietary APIs in some consoles).

When textures are static, a common solution is to pre-process them in an attempt to eliminate the edge seams. In a short SIGGRAPH sketch, John Isidoro proposed averaging cube map edge texels across edges and obscuring the effect of the averaging by adjusting the intensity of the nearby texels using various methods. These methods are implemented in AMD's CubeMapGen, whose source code is now publicly available online. While this seems like a good idea, a few minutes experimenting with CubeMapGen makes it obvious that it does not always work very well!

Embedded Texture Borders

A very simple solution that even works for dynamic cube maps is to slightly increase the FOV of the perspective projection so that the edges of adjacent faces match up exactly. Ysaneya shows that in order to achieve that, the FOV needs to be tweaked as follows:

    fov = 2.0 * atan(n / (n - 1))

where n is the resolution of the cube map. What this is essentially doing is scaling down the face images by one texel and padding them with a border of texels that is shared between adjacent faces. Since the texels at the face edges are now identical, the seams are gone.

In practice this is much trickier than it sounds. While the fragments at the adjacent face borders should sample the scene in the same direction, rasterization rules do not guarantee that in both cases the rasterized fragments will match. However, if we take this idea to the realm of offline cube map generation, we can easily guarantee exact results.

Cube maps are often used to store directional functions. Each texel has an associated uv coordinate within the cube map face, from which we derive a direction vector that is then used to sample our directional function. Examples of such functions include expensive BRDFs that we would like to precompute, or an environment map sampled using angular extent filtering. Usually these uv coordinates are computed so that the resulting direction vectors point to the texel centers. For an integer texel coordinate x in the [0, n-1] range we map it to a floating point coordinate u in the [-1, 1] range as follows:

    map_1(x) = (x + 0.5) * 2 / n - 1

We then obtain the corresponding direction vector as follows:

    dir = normalize(faceVector + faceU * map_1(x) + faceV * map_1(y))

When doing that, the texels at the borders do not map to -1 and 1 exactly, but to:

    map_1(0) = -1 + 1/n
    map_1(n-1) = 1 - 1/n

In our case we want the edges of each face to match up exactly so that they result in the same direction vectors. That can be achieved with a function like this:

    map_2(x) = 2 * x / (n - 1) - 1

If we use this map to sample our directional function, the resulting cube map is seamless, but the face images are scaled down uniformly. In the first case the slope of the map is:

    map_1'(x) = 2 / n

but in the second case it is slightly different:

    map_2'(x) = 2 / (n - 1)

This technique works very well at high resolutions. When n is sufficiently high, the change in slope between map_1 and map_2 becomes minimal.
However, at low resolutions the stretching on the interior of the face can become noticeable. A better solution is to stretch the image only in the proximity of the edges. That can be achieved by warping the uv face coordinates with a cubic polynomial of this form:

    warp3(x) = a*x^3 + x

We can compose this function with our original mapping. The result around the origin is close to a linear identity, but we can adjust a to stretch the function closer to the face edges. In our case we want the values at 1 - 1/n to produce 1 instead, so we can easily determine the value of a by solving:

    warp3(1 - 1/n) = a*(1 - 1/n)^3 + (1 - 1/n) = 1

which gives us:

    a = n^2 / (n - 1)^3

I implemented the linear stretch and cubic warping methods in NVTT and they often produce better results than the methods available in AMD's CubeMapGen. However, I was not entirely satisfied. While this removed the zero-order discontinuity, it introduced a first-order discontinuity that in some cases was even more noticeable than the artifacts it was intended to remove. You can hover the cursor over the following image to see how the warp edge fixup method eliminates the discontinuities, but sometimes still results in visible artifacts.

Any edge fixup method is going to force the slope of the color gradient across the edge to be zero, because it needs to duplicate the border texels. The eye seems to be very sensitive to this form of discontinuity, and it's questionable whether this is better than the original artifact. Maybe other warp functions would make the discontinuity less obvious, or maybe it could be smoothed the way Isidoro's methods do. At the time I implemented this I thought the remaining artifacts did not deserve more attention and moved on to other tasks.

Modified Texture Lookup

However, a few days ago Sebastien Lagarde integrated these methods into AMD's CubeMapGen. See this post for more results and comparisons against other methods. That got me thinking again about this, and then I realized that the only thing that needs to be done to avoid the seams is to modify the texture coordinates at runtime the same way we modify them during the offline cube map evaluation. At first I thought that would be impractical, because it would require projecting the texture coordinates onto the cube map faces, but it turns out that the resulting math is very simple. In the case of the uniform stretch that I first suggested, the transform required at runtime is just a conditional per-component multiplication:

    float3 fix_cube_lookup(float3 v) {
        float M = max(max(abs(v.x), abs(v.y)), abs(v.z));
        float scale = (cube_size - 1) / cube_size;
        if (abs(v.x) != M) v.x *= scale;
        if (abs(v.y) != M) v.y *= scale;
        if (abs(v.z) != M) v.z *= scale;
        return v;
    }

One problem is that we need to know the size of the cube map face in advance, but every mipmap has a different size and we may not know what mipmap is going to be sampled in advance. So, this method only works when explicit LOD is used. Another issue is that with trilinear filtering enabled, the hardware samples from two contiguous mipmap levels. Ideally we would have to use a different scale factor for each mipmap level. That could be achieved by sampling them separately and combining the result manually, but in practice, using the same scale for both levels seems to produce fairly good results.
We can easily find a scale factor that works well for fractional LODs as a function of the LOD value and the size of the top level mipmap:

    float scale = 1 - exp2(lod) / cube_size;
    if (abs(v.x) != M) v.x *= scale;
    if (abs(v.y) != M) v.y *= scale;
    if (abs(v.z) != M) v.z *= scale;

If you are using cube maps to store prefiltered environment maps, chances are you are computing the cube map LOD from the specular power using log2(specular_power). If that's the case, the two transcendental instructions cancel out and the scale becomes a linear function of the specular power.

The images below show the results using the warp filtering method (these were chosen to highlight the artifacts of the warp method). Hover the cursor over the images to visualize the results of the new approach.

I'd like to thank Sebastien Lagarde for his valuable feedback while testing these ideas and for providing the nice images accompanying this article.

Note: This article is also published at The Witness blog.
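As a quick numerical check of the mapping formulas above (an editor's sketch in Python, not code from the article): with a = n^2 / (n - 1)^3, the warped border texel centers land exactly on ±1, which is the whole point of the cubic warp.

    n = 16  # cube map face size in texels; any size shows the same behavior

    def map_1(x):                  # texel-center mapping; borders fall short of +/-1
        return (x + 0.5) * 2 / n - 1

    def map_2(x):                  # stretched mapping; borders land exactly on +/-1
        return 2 * x / (n - 1) - 1

    def warp3(u):                  # cubic warp applied on top of map_1
        a = n**2 / (n - 1)**3
        return a * u**3 + u

    print(map_1(0), map_1(n - 1))                # -0.9375 0.9375  (i.e. -1 + 1/n and 1 - 1/n)
    print(map_2(0), map_2(n - 1))                # -1.0 1.0
    print(warp3(map_1(0)), warp3(map_1(n - 1)))  # -1.0 1.0 (up to floating-point rounding)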
Publications - Adam Chlipala

Refereed journal articles

Refereed conference papers

Adam Chlipala. Mostly-Automated Verification of Low-Level Programs in Computational Separation Logic. Proceedings of the ACM SIGPLAN 2011 Conference on Programming Language Design and Implementation (PLDI'11). June 2011.
A constructive proof that automating separation logic proofs for systems code is easy, despite claims to the contrary coming from SMT solver-centric perspectives. ;-) Specifically, this paper introduced Bedrock, a Coq library for foundational verification of code at the assembly level of abstraction. A mostly-automated separation logic prover uses a modest amount of programmer annotation to drive verification of examples like imperative data structures and a cooperative threading library.

Adam Chlipala. A Verified Compiler for an Impure Functional Language. Proceedings of the 37th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'10). January 2010.
A case study in verifying a compiler to an idealized assembly language from an untyped source language with most of the key dynamic features of ML: functions, products, sums, mutable references, and value-carrying exceptions. Syntax is encoded with parametric higher-order abstract syntax (PHOAS), which makes it possible to avoid almost all bookkeeping having to do with binders and fresh name generation. The semantics of the object languages are encoded in a new substitution-free style. All of the proofs are automated with tactic programs that can keep working even after changing the definitions of the languages.

Adam Chlipala. Parametric Higher-Order Abstract Syntax for Mechanized Semantics. Proceedings of the 13th ACM SIGPLAN International Conference on Functional Programming (ICFP'08). September 2008.
A new trick for encoding variable binders in Coq, along with an exploration of its consequences: almost trivial syntax and type-theoretic semantics for languages including such features as polymorphism and complicated binding structure (e.g., ML-style pattern matching); almost trivial type preservation proofs for compiler passes that don't need intensional analysis of variables; mostly-automated semantic correctness proofs about those passes, by way of adding an axiom to make the parametricity of CIC usable explicitly in proofs; and the ability to drop down to more traditional syntactic representations for more arduous but feasible proofs of the same properties, when intensional variable analysis is needed.

Adam Chlipala. Modular Development of Certified Program Verifiers with a Proof Assistant. Proceedings of the 11th ACM SIGPLAN International Conference on Functional Programming (ICFP'06). September 2006.
I report on an experience using the Coq proof assistant to develop a program verification tool with a machine-checkable proof of full correctness. The verifier is able to prove memory safety of x86 machine code programs compiled from code that uses algebraic datatypes. The tool's soundness theorem is expressed in terms of the bit-level semantics of x86 programs, so its correctness depends on very few assumptions. I take advantage of Coq's support for programming with dependent types and modules in the structure of my development. The approach is based on developing a library of reusable functors for transforming a verifier at one level of abstraction into a verifier at a lower level.

Bor-Yuh Evan Chang, Adam Chlipala, George C. Necula. A Framework for Certified Program Analysis and Its Applications to Mobile-Code Safety. Proceedings of the 7th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI'06). January 2006.
We propose a new technique in support of the construction of efficient Foundational Proof-Carrying Code systems. Instead of suggesting that pieces of mobile code come with proofs of their safety, we instead suggest that they come with executable verifiers that can attest to their safety, as in our previous work on the Open Verifier. However, in contrast to that previous work, here we do away with any runtime proof generation by these verifiers. Instead, we require that the verifier itself is proved sound. To support this, we present a novel technique for extracting proof obligations about ML programs.
Using this library, it's possible to prototype a verifier based on a new type system with a minimal amount of work, while obtaining a very strong soundness theorem about the final product. Bor-Yuh Evan Chang Adam Chlipala George C. Necula A Framework for Certified Program Analysis and Its Applications to Mobile-Code Safety . Proceedings of the 7th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI'06) . January 2006. We propose a new technique in support of the construction of efficient Foundational Proof-Carrying Code systems. Instead of suggesting that pieces of mobile code come with proofs of their safety, we instead suggest that they come with executable verifiers that can attest to their safety, as in our previous work on the Open Verifier. However, in contrast to that previous work, here we do away with any runtime proof generation by these verifiers. Instead, we require that the verifier itself is proved sound. To support this, we present a novel technique for extracting proof obligations about ML programs. Using this approach, we are able to demonstrate the first foundational verification technique for Typed Assembly Language with performance comparable to that of the traditional, uncertified TAL type checker. Refereed workshop papers Adam Chlipala George C. Necula Cooperative Integration of an Interactive Proof Assistant and an Automated Prover . Proceedings of the 6th International Workshop on Strategies in Automated Deduction (STRATEGIES'06) . August 2006. We show how to combine the interactive proof assistant Coq and the Nelson-Oppen-style automated first-order theorem prover Kettle in a synergistic way. We do this with a Kettle tactic for Coq that uses theory-specific reasoning to simplify goals based on automatically chosen case analyses, returning to the user as subgoals the cases it couldn't prove automatically. The process can then be repeated recursively, using Coq's tactical language as a very expressive extension of the matching strategies found in provers like Simplify. We also discuss how to encode specialized first-order proofs efficiently in Coq using proof by reflection. Bor-Yuh Evan Chang Adam Chlipala George C. Necula Robert R. Schneck The Open Verifier Framework for Foundational Verifiers . Proceedings of the 2nd ACM SIGPLAN Workshop on Types in Language Design and Implementation (TLDI'05) . January 2005. We propose a new framework for the construction of trustworthy program verifiers. The Open Verifier architecture can be viewed as an optimized Foundational Proof-Carrying Code toolkit. Instead of proposing that code producers send proofs of safety with all of their programs, we instead suggest that they send re-usable proof-generating verifiers. The proofs are generated in an online fashion via a novel interaction scheme between the untrusted verifier and the trusted core of the system. Adam Chlipala Leaf Petersen Robert Harper Strict Bidirectional Type Checking . Proceedings of the 2nd ACM SIGPLAN Workshop on Types in Language Design and Implementation (TLDI'05) . January 2005. We present a type system that is useful in saving type annotation space in intermediate language terms expressed in the restricted form called "A-normal form" or "one-half CPS." Our approach imports ideas from strict logic, which is based on the idea of hypotheses that must be used at least once. The resulting system is relevant to the efficiency of type-preserving compilers. 
Refereed poster sessions

Invited conference papers

Technical reports

Adam Chlipala. Scrap Your Web Application Boilerplate, or Metaprogramming with Row Types. Technical Report UCB/EECS-2006-120. 2006.
An overview of a work-in-progress functional programming language that puts dependent types and theorem proving to work to make it easier to write concise and maintainable web applications.

Adam Chlipala. An Untrusted Verifier for Typed Assembly Language. MS Project Report. Technical Report UCB/ERL M04/41. UC Berkeley EECS Department. 2004.
A summary of my experiences developing a proof-generating TAL type checker within the Open Verifier framework. In the style of Foundational PCC, the soundness of this verifier and the proofs it generates is based on no assumptions about the TAL type system. This was one of the first projects to consider the runtime performance of Foundational PCC-style verification.
linear indep - exams soon

Hello, please see the attachment and help me solve it. Thanks.

What do you think might be a good start to this problem?

Hello, here is what I think. Please check. Btw, I really like your Lady Gaga equation!

To prove this we must show linear independence, beginning by first finding scalars lambda1, lambda2 such that lambda1*v1 + lambda2*v2 = 0. Now, for linear independence, form a simultaneous equation: if and only if lambda1*v1 = 0 and lambda2*v2 = 0, then v1 is not equal to v2, nor is lambda1 equal to lambda2.
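Since the attachment is not available, here is the general recipe rather than the specific exam problem. Two vectors are linearly independent exactly when the only scalars satisfying the defining equation are zero; the worked example below uses made-up vectors purely for illustration.

    Definition: v_1, v_2 are linearly independent
        iff  \lambda_1 v_1 + \lambda_2 v_2 = 0  \implies  \lambda_1 = \lambda_2 = 0.

    Example with v_1 = (1, 2), v_2 = (3, 4):
        \lambda_1 (1, 2) + \lambda_2 (3, 4) = (0, 0)
        \Rightarrow \lambda_1 + 3\lambda_2 = 0, \quad 2\lambda_1 + 4\lambda_2 = 0
        \Rightarrow \lambda_1 = \lambda_2 = 0,
    since \det \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix} = 1\cdot 4 - 3\cdot 2 = -2 \neq 0.
    Hence v_1 and v_2 are linearly independent.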
algebra high school freshman free help Google visitors came to this page yesterday by using these keyword phrases: Factorization solver, 6th grade exercises fractions, math percent proportion ppt, games on integers, hyperbola vertices formula. Math cheat sheets, TI-81 key conversion TI-83, barron's ged free worksheets and answer keys, elementary algerbra. Ti 83 factor programs, algebra II worksheets, lcd algebra, translating word phrases to linear equations power point, glencoe review answers. Difference between hyperbola and parabola, free online factoring calculator polynomials, TI 89 Differential Equation 3rd order, holt algebra 1 answers, 9th grade algebra problems and answers, order fractions from least to greatest. Glencoe online study tools chapter 7 polynomials, beginning geometry for 5th grade printable worksheets, Poem using math terms, 5th grade algebra examples, online workbook pearson prentice hall free, End 6th grade math test, Business Cards. Sample module in adding polynomials, solver zill differential equations download, pre algebra answers, simplify radical expressions multiple choice questions, squaring fraction. Printable pre-algebra math work sheets for seven grade, N.I. Model school's (Dubai) maths worksheets, free printable math test. Math work sheet jokes applied problem solving, factor quadratic calculator, Problems Involving Quadratic Functions Worksheet, basic factor problems, 2d worksheets, elementary algebra solving fraction equations, rational equation least common multiple calculator. Simplifying radicals calculator, polynomials adding and subtracting ppt, absolute values+algebra 1 box method, algebra power expansions. Aptitude question, answer sheets to algebra with pizzazz, solving 4 equations with 4 unknowns with ti-89. Rational number quiz mcdougal littell math course 2, help on math 4grade online for free, math worksheets on two step equations. Extracting the square root formula, algebraic activity first grade, complex numbers matrices ti 83, subtracting integers worksheet, middle school heath algebra 1. Boolean algebra simplifier program, trinomial factor calculator, radical expressions division. What are the answers to the algebra 1 prentice hall book, ti 83 formulas, practice on rationalizing complex numbers, eight grade worksheet to do online. Holt algebra 1 answer textbook, ordinal numbers printables, boolean algebra software, domain solve radical function, intermediate algebra one textbooks, simplifying radicals worksheet pre-algebra. Easy ways to understand cube roots, how to solve using the principles together, adding radical expressions, provide graph points to find an nonlinear equation calculator free, explain Systems of Equations - 9th grade, Algebra 1 Scientific notation test. Multipication worksheet, games or worksheets solving addition equations, 2nd order differential equation using green's function. Free worksheet number patterns highschool, dividing polynomials calculator, algebra substitution problems. Personal Loans, "convert to" "square root of pi", first grade equation game. Online factor equations, understanding degrees and angles of a triangle free printable worksheets, Find a fraction equivalent to 8/9 with a demoninator of 81, poems about algebra 1, probability second grade free florida, free online banking exams practice papers, maths ,science paper of 6th standard. 
Second order differential solve matlab, algebra games to play to review for math exam, solve simultaneous equation casio calculator, absolute value pie, solving nonlinear differential equations in Matlab, convert 8' 10" to decimal. Love poems using +mathimatical phrases, how to solve equations to the 3rd power, examples of math trivia questions. Matlab simultaneous equations, help with high school algebra--solving equation with rational exponents, ti84 quadratic formula program, percent to ratio solver, free printable 6th grade worksheets. Chart logarithm free + vb6, roots of real number calculator, order of fractions from least to greatest, revision ks2 printable sheets, key code holt algebra 1 teacher's edition, worksheet on direct variation and inverse variation in math, download integration software for TI-84. Mathmatics problem solvers, probability problems free practice quiz, free books for apptitude, logarithms for kids, balancing chemical reactions+laws. Credit Cards, 5th grade solving algebraic expressions, printable maths sheets for ks2 for kids, Glencoe Mathematics Course 2 (chapter 10). Pre algebra worksheets, Vacations, where can i get online aptitude question and answers, Holt worksheets, Java program trig calculator, free quadratic equations programs, free download five year question papers for +2. Maths homework helpers, lancaster, solving algebraic equasions, Patent Lawyers. Free printable 3rd grade math understanding the meaning of numbers, TRIGONOMIC BOOKLET, rationalizing the denominator worksheets, permutation and combination games, perform equations on TI 83 plus calculator, free worksheets for subtracting integers, Saxon Algebra 1 answers. TI 84 plus emulator, square roots with index worksheet, negative power of calculator conversion. Prentice Hall Mathematics Algebra 1 answers, adding and multiplying numbers, exponent radical worksheet, algebra simplify compound fractions, maths yr 8. "problem solving" mathematics solutions students textbook analysis integral, Mathmatical Fraction, activities for solving inequalities in algebra 2, symbolic polynomial solver, dicrete mathmatics, Glencoe/McGraw Hill Algebra 1 answers, practice square root story problems. How to do standard form in the calculator, 5th grade interior angle worksheets, glencoe mathematics algebra 1 answer guide, lowest common denominator in casio, elementary deferential equations. Maths formulae list, contemporary abstract algebra solution, inverse functions and log ti 89. Hardest exponent algebra problem, kids algebra, simplifying cube roots with variables, basic mathematics formula, finding sum on ti-84, reducing square roots in Algebra. Free algerbra solver, the formula of fraction magic square, glencoe mcgraw hill math worksheets, percentage equations. Reduce Fractions Answers, free worksheets for third grade, box method quadratic, loop games with simultaneous equations ks3, free downloadable TI 84 Plus calculator games, solving equtions. Exponent prealgebra lesson 4.5 homework answers, How can we use permutation or combination in real life, how to divide rational expressions with multiple variables, algebra software tutorials. 2-step equations worksheet, solve "linear equations" solution sets free tests, download kumon math question, algebra long division questions X3, free TI-83 applet, how to limit the function on a graphing calculator, glencoe Mathematics and connections course 2 assessment and evaluation masters. 
NEW SIXTH GRADE 6 MCDOUGAL LITTEL MIDDLE SCHOOL MATH 6th Grade, prentice hall conceptual physics chapter 9 answers, workbook for fundamentals of cost accounting, TI-89 plot polar, free printable area math sheets. Calculate square root using calculator, College Algebra Answers, finding the slope-intercept formula with different givens, how to take the 4th root calculator graphing TI, factor program TI 83, Fraction worksheet for fifth grader, simultaneous equation calculator. Printable +work +sheets, Algebra 1 Free Box Plots Worksheets with answer, "solving algebraic equations", add, subtract, multiply, divide integers PowerPoint, finding math multiples cheat. Equation game solve 6th grade, how to subtract mixed numbers with remaining, 4th grade fractions. Multiplying and Dividing Integers, quadratic math solver for polynomial equations, mcdougal- math course 2 chapter 9 lesson 4, ti 83 solver third order polynomial, linear algebra solutions-fraleigh. Sample online Level 7 math tests, solving determinants with excel, maths problems for year nine students. Solve for x online calculator, Math poems all about algebra, algebra KS2 worksheets. Algebra 1 holt, adding and subtracting integers grade 9, formula chart for mathematics, florida math pre-algebra book website 7th grade. Free download of solutions manual for A transition to advanced mathematics, prentice hall mathematics geometry workbook page 81 and 82 answers, trigonometric proplems, prentice hall mathematics pre- algebra, decimals to mixed numbers. How to solve for maximum for parabola, Personal Finances, partial-sum addition worksheets, grade nine math test, multiplying and dividing rational expression solver. KS3 exercises in excel, algebraic expression, probability combination matlab. Free maths revision sheets for year 9, rational expressions number games, worksheet on graphing linear equation, operations with scientific notation worksheet, rational expression. Online scientific calculator for roots and rational exponents, adding and subtracting integer laws, real life use of radical equations, TI-83 polynomial program, Student Loans. Printable online third grade worksheets, 5th grade worksheets improper fractions answer sheets, nth root solve for n, gcf and lcm lesson plans, least common multiple exponents. Online Fraction Calculator, conic sections printable, square root property, hard equations examples. Ks3 algebra ppt, algebra tutor software, online antiderivative calculator. LOGARITHMIC for dummies, Web Hosting, rational exponents examples, divide polynomial calculator, give example of using the distributive property for a negative monomial times a trinomal with different signs, learn calc graphically, gr8 algebra questions. Dividing polynomials generator, cheat on homework glencoe algebra 1, O level algebra exercises, practice tests for foundations for algebra year 1 volume 2, 4th grade math combinations problems, find integrals on ti84. Rudin solution chapter 8, radicals and quadratic equations, "how to turn decimals into fractions", year 8 maths tests, ti-89 binary addition program, free math exercices online, solve simultaneous equations with trig functions 89. Laplace transforms ti89, real life when you use Quadratic formulas, free worksheets 4th grade parallel perpendicular, dividing decimals practice work sheets, "ALGEBRA FORMULA". 
Free answers to prentice hall algebra 1 workbook, least common multiple solver, integer worksheets, complex factoring, solve system of linear equations ti-83, rational expression simplifier, 6th grade math sheets. Mathamatical, Completing the square+algebra, code cracker multiplication worksheet, prentice hall math answers, hypotenuse answer finder, general aptitude question & answer. Binomial expansion two variables, Math Trivia, FREE INTERMEDIATE ALGEBRA STEP BY STEP, convert meters on TI-84, Free saxon answers, online algebraic function solver. Visual basic quadratic equation, first grade vertex, "worksheets"+"solving inequalities", nys 6th grade math printables, Printable Ged Study Sheets, midpoint distance solver. 6th grade help with typing for free, KS3 algebra addition square, finding the lcm for algebraic equations. Using matlab for second order ODE, math practice 10th grade games, inequalities worksheets, fifth grade, middle school balancing chemical equations practice worksheet, radical exponents and expressions solver. Using matlab to solve cubic equation, why learn algebra 11, ti-89 ti-83 fractions, vertex form of a quadratic why subtraction. Pre algebra tips, maple integral calculator, rate of change formula, ti 84 plus calculator downloads, formula for interception, systems of equations worksheet, free printable graph coordinates. Algebra pdf, grade 7 algebra questions, How to solve algebra, Solving systems of equation using the calculator. Math trivia with answers, seven grade pre-algebra worksheet, how do you find a scale factor, simplifying radical quotients, math/what are all the factors of 98, first grade lesson plans with Yr 10 algebra, online algebraic solver, common denominator practice questions, Solving Math Factoring Problems, Mc-Graw Hill math 5th grade chapter 6 lesson 5 on how to add fractions with mixed numbers, mathematics for dummies. Answers to mcdougal algebra 1 books free, simultaneous fraction equations calculator, TI 83 silver solving quadratic, third grade equation solution. Improper fraction printable sheet, calculating inverse log using ti-85, 5th grade algebraic expressions, ti 84 plus emulator, glencoe pre-algebra worksheets answers. Combining like terms building number sense worksheet answers, answer key holt science and technology Texas edition grade 8, finding slope worksheets, factor calculator, When would we use quadratic equations in real life, How to convert a decimal into square root form, online graphic calculator for determinants. Automatic conversion of decimal to fraction, pearson prentice hall answers, algebra rules cheat sheet free. College algebra problems, differentiation -product rule, quotient rule, chain rule (PPT), maths aptitude question with answer, Prentice Hall Mathematics geometry tutorials pre-algebra, holes in graphs of algebraic equations. Solving rational equations calculator free, quadratic formula word problems, how to solve limit of function. Why is it important to simplify radical expressions before adding or subtracting, High School Math Test Generator McDougal Littell, multiply and divide fraction polynomials, online equation solver Algebra proofs about slope, diamond and box method of factoring, cubed root on a non scientific calculator, fractions square root font, compare algebra grade 8 work book, 10th grade math free worksheets, ti 83 split a fraction. 
Basic algrebra, solving systems of equations by graphing worksheet, "lineal metre", quadratic equation three variables, Glencoe Algebra 2 answer key, visual basic calculator adding two negative integers, free printable coordinate planes algebra. Solving subtraction problems that have fraction exponents, free grade seven math problems, 72812844904064, algebra expressions formula chart, calculator with pie sign, math problem solver absolute value, math ratio powerpoint free. Solving a sqaure root, Solving Caculator, function table worksheet for math, java code for linear discriminant analysis, simple steps to learn differential calculas. Online algebra 1 answer key saxon, free worksheets distributive property, Linear Extrapolation Worksheets, elementry worksheets, how to plug in absolute value in a TI-83. Proportions worksheet, contemporary abstract algebra solutions, TI-85 calculator algebra tips. Inequality free wkshts 8th grade, Ratio questions online (grade 7), free online math solvers, equation solvers, free basic chemistry questions, fundamental principle rational expression, rational numbers multiplying three signs alike, solving simultaneous equations using excel. Integration by parts claculator, scale factor computer activity, algebra 1 prentice hall, antiderivative solver. Inventor of the quadratic formula, GED math problems.com, finding the nth root on an 89, the algebra of functions solver, 1st grade printouts, finding common denominator calculator. Square root problems for 6th graders, Free Online Math for 6th through 9th grade, combined transformations worksheets, how to multiply integers, McDougal Littell algebra 2 answers 303. Helpful physics formulas cheat sheet, how to calculate GCD, print out sheets of algebra, download previous year 6 sats papers. 6th grade math word problems exponents, absolute value vertex formula, business application problems slope intercept line. Using points to find slope intercept form online calculator, 11 Plus math made easy free workbooks, Free Aptitude Test Tutorials. Glencoe algebra 1 answer key, key to algebra answers download, square root practice test, coordinate planes for third grade, Symbolic math solver. Polynomial factor calculator, systems of equations applications three variables, first steps to learn algebra, parabola equation calculators, least common denominator calc, algebra 1 poems, how to do cube root on TI-83 calculator. How to calculate rational exponents, polynomial in the power of decimal, GRADE 11 EXAMINATION PAPERS, 3 cube roots of 8, online games to help me with my maths exam for year 7, free math problem solver AND online, quadratic formula 3rd order. Subtract mixed numbers worksheet, Elementary and intermediate algebra/a unified approach, third 3rd edition, word problems "system of linear equations" sold at a price, lineal metre conversion. Program to convert a number in scientific notation+java, percent of change worksheets, factor quadratic equation program, printable ged math practice test, polynomial word problem worksheet, pictograph printable, online inequality solver free. Website search engine for 7th grade math vocabulary, general algebra review free online, quadratic augmented matrix online calculator. 
Language Standard Quiz- 5th graders-printables, ti 83 combinations rule, adding integers, worksheets, finding the minimum of a parabola quadratic formula, SOLVING EQUATIONS BY MULTIPLYing or Radical converter, lessons on exponents, grade 5, example of trivia, How to use fractions TI-83 Plus Calculator, calculater and math*. Divide polynomials by binomials, how to do quadratic radical expression, compound angle trigonometry problems and answers, order of operations easy worksheets 6th grade, simplify radicand fractions, basic college mathematics 5th edition worksheets, subtract polynomial calculator. Division problem solver, ti-84 plus manual - zeros polynomial, simplifying square root equations, "pythagorean theory" + "calculator". Convert linear equation, algebraic age number problem and solutions, ks3 maths test, math trivias. Grade 5 Algebra Solving Equations, free printable grade 5 exponent practice, Solving Algebra Equations. Adding polynominals free help, glencoe algebra 1/ mcgraw-hill free answers, foiling math problems. Single step algebra worksheet, Java Aptitude questions, program that solves simultaneous equations, ratio, proportion and percent practice worksheets, maclane birkhoff answer. S Ross Simulation 4th Edition solution answer forum, how to find roots in a vertex form equation, factoring equations calculator, 4th grade star testing sample, tips on solving algebraic maths problems in cat, examples of math trivia, math with pizzazz printable worksheets. Best algebra solutions and cheats, online equation solver with steps, free sample ks2 maths past paper, 6th grade pre-algebra worksheets, graph a problem online, intermediate algebra worksheets with Free help on algebra structure and method, LCM and GCF Worksheets Free, pprintable practice worksheets on integers, online function calculater. How to store programs on TI-89, writing the chemical forming, animated powerpoint presentations for 8th standard, Solutions to Intermediate Accounting 12e, free online, Debt Consolidation, glencoe algebra 1, algebra 2 by glencoe mathematics. 2x2 matric calculator algebra, pre algebra jokes, finding the slope in a table, Algebra 2 solving rational equations calculator free, texas fourth and fifth grade math and science programs, maple. Cubed root on ti-83 plus, intermediate algebra test, Algebra with pizzazz! by creative publications, google calculator permutation, algebra for grade 10. Cubed root algebra, ks3 algebraic expressions, "Step By Step Problem Solving", formula about percentage. What Is a Leading Digit in Decimals, expanding cubed brackets, trig equation solver, free linear calculators. Algebra calculator, prime factorization, 8th grade math trivia, find slope triangle worksheet. G c m math, check my algebra, marvin bittinger basic mathematics 6th ed, First grade lesson plans on graphing, equations calculator. Division elementary 3rd grade printable, nth term formulae worksheet, Challenging Problems in Algebra para baixar, how to solve reducing matrices. Ti-84 emulator download, printable samples of iowa tests for basic skills, algebra 1 cheats, factoring cubes roots, solve my aglebra, font TCI2 scientific notebook. 
Prentice hall answers, online chemistry equation solver, solving sideways parabola equations, 3rd order polynomials, graphing linear functions practice sheets, Prentice Hall algebra 1 book plug in Simplified square root, college algebra calculator, permutations questions online, texas ti calculator emulator, combination permutation free worksheet, how to do arcsin on a ti-84 plus. Exponents and square roots with variables, Cruises, printable sheets of math for third graders. Using matlab to solve nonlinear simultaneous equations, solving a system of equations with 3 variables, visual algebra, online vertex calculators, clep algebra syllabus. Online ti-84, square roots chart, practice math problems for non-linear equations. Free solving inequalities worksheets, algebra with powerpoint , +LEARN ALGEBRA, triginometry lesson. Fraction calculator least to greatest, square root ladder method, aptitude test paper samples, printable math papers worksheets. Ratios and proportions aptitude tutorials, clep pretest, rational number solvers, algebra sums, multiplying square routes. Algebra british method, polynominal exponent calculator, online calculator expand factor, 8th grade printable science worksheets, MATHMATICS FOR 3RD GRADE, quadratic equation "vertex form" solutions, Problem 27 Intermediate Algebra for College Students 4th. Invert matrix TI-84, balancing equations calculator, 8th grade math - combinations, combination problems on calculator, combining like terms activity, online boolean simplification, free ti 83 rom image download. Difference of square factoring flash, GCSE REVISION PRINT OUTS, matlab nonlinear equation solver, convert number base, sample question for variable and linear differential equation, free steps to solving algebra triangles, Surd solver. Factoring Cubed Polynomials, how to convert mixed numbers to a decimal, free yr 6 maths games, factor trinomials calculator. Changing radical expression to exponential expression, algebra work equations, aptitude test sample paper, ti 83 plus rom download, Solving Word Problems Involving Quadratic Equations, my son is having problems with algebra, Algebra Homework Helper. GCSE factoring, long division solver, +how to divide fractions for fifth graders, ti89 quadratic, solving rational exponents/algebra 2, kumon math worksheets, ti rom-image free download. Converting floating point to base, free algebra help substitution method with fractions, exponent multiplication worksheet. Math equation problem solver, algebra 2 notes for dummies, coordinate plane, printable, free, prentical hall mathematics: ALGEBRA 2, solve rational expression. Slope intercept made easy, fraction,percentage,decimal grid ks2 to print, 1st grade money pretest, holt modern chemistry section review answer keys, How to find slope on a TI-83 Plus, download kumon Algebra and trigonometry structure and method book 2 mcdougal littell answers, multiplying and dividing fractions practice sheets, worksheet on finding the mean 4th grade, a down words poem for Algebra solve, math understanding the liner system, write addition and subtraction expressions, yr 9 math test. How to turn fractions into decimals, nth expressions math, worksheets on adding subtracting multiplying and dividing integers. 
Equation solver fractions, learning algebra, multiplying with variables worksheet free, NC geometry help with glencoe geometry 2004 edition, english placement test for community college ca free Simplifying radicals calculator, relationship of exponential function and radical function, calculator solver, Real Estate Math: Explanations, Problems and Solutions ebook. Graph solve, up hill algebra problem, divide exponents+calculator, ti 83 calculator rom download. Permutation probability problems, solving simple second order differential equations in matlab, nth term calculator, factoring in fifth grade, prime.java + lewis and loftus, factoring polynomials Algerbra study, answers to algebra 1 7.3 worksheet, free college algebra calculator, help on prentice hall mathematics algebra I practice 7-4. Multiplying and dividing powers, how to find the inverse of a polynomial on a graph, ti-82 games, HOW TO USE FRACTION ON TI-83, worksheet for 6th graders, "online ti 83 graphing calculator", Formula For Square Root. Simplifying algebraic expressions with polynomials calculator, algeba college math software, advanced algebra tutor, "absolute value" "printable worksheets" free, how do you find the greatest common factor of 361 and 76, how to cheat using a ti-84. Answers to pre algebra with pizzazz, convert time to string in java, matlab "lewis structure". Algebra square root equations, calculate algebra problems, irrational and rational or nonreal complex. Answers on even numbers of algebra 1 prentice hall book, download previous question papers of state bank of india exam, algebra pie, fifth grade complex sentence worksheets. Studing tips for pre-algebra, BBC Math Tutorial-Pre Algebra, one unknown variable calculator, type in a problem and have tutors solve it. Variables worksheet, how to do arcsin on a ti-84, how to subtract 3 digit integers, algebra 1 tests for the prentice hall book, Help with Algebra 1, system of two equations graphing relations. Simplify radical on ti 84+silver edition, GRE pattern aptitude test paper, dummit, foote, solution, graph calculator x=5+y^2, free worksheets for 9th graders, teaching children about linear expressions, algebra calculator with explanations. Square roots and cube roots worksheets, Hard Algebra questions and answers, speed and distance lesson plan 6th grade science, math second grade working sheets. Extracting math from story problems algebra pre-calc, online free download of fluid dynamics books, How to Solve Parabola Equations, online trigonometry calculator, simplify square root, test in abstract algebra, solve trigonomic equations. Addition polymerization + chemical equations, radical signs free math sheets, absolute value button on ti89. Glencoe/McGraw-Hill + worksheet answers, how to solve third order equation, Worksheets for 6th Grade Algebra Negatives and Positives, factoring cubics calculator, online math problem solver, imperfect square roots, Geometry reflection tutorials for Pre-Algebra 8th grade. Hardest math problem, Grade 6 Revision papers maths, video college algebra tutor, multiply a fraction on the ti-83 plus, simplifying radical multiplication tutorials, parabola sign chart, printable finding common denominators worksheets. 
Holt algebra1 cheats, free math work sheet and answers for high school, answers for glencoe algebra 2, equations with fractional coefficients, kids algerbra, practice excell free, prealgebra cheat Texas math conversion sheet, learn to graph in algebra, free aptitude test papers, i need solutions to Topics in Algebra I.N. Herstein, square route solver, convert decimal to fraction calculator. Conversion between Dicimal into square foot, lesson plans on solving equations with matrices on a TI84, quadratic factorisation how do you do it, equation solver algebra. Quiz for subtracting integers, numbers, addition,subtration,multiplication and division of fraction in maths bridging course to download, factoring calculator, free basic accounting books, graphing algebra tiles, Chapter 3 chemistry workbook answers, mathematics trivia. Free math homework answers, reducing radicals with exponents tips, mathematics question and answers of class viii. Ks3 solving inequalities with the letter on both sides, 1st grade sat practice worksheets, how to convert decimal to square root fraction, download chinese radicals chart, solvig algebra in excell trig, simulink solve 2nd order differential. North carolina prentice hall mathematics geometry chapter 2 online, seventh grade trigonometry worksheets, lowest common donominator. Graphing calculator online vertex finder, least common factor, Pre Algebra book answers, table method graphing worksheets, holt algebra 2 texas homework and practice workbook. Trigonometry trivia, find 4th root ti-83, Pure Math 10 exponents interactive activity, how to solve fractions problem in year 4, dilations worksheets. Multiplying square root with x and y, college algebra clep exam, fraction common denominators calculator, algebra II solver, college algebra factoring tips, holt online alg 1 book, ratio formula. Multiplying integers, Solving simultaneous coupled ode with MATLAB, college pre algebra exam, solving binomial equations, solve by graphing, lesson plans forgeometry for third grade, common factors grade 5. Imaginary number worksheet, step by step directions for solving a three circle venn diagram, calculator for substitution method. GCSE PRATICE PAPERS, matlab solving second order differential equation, free online graphing TI-83 calculators. Holt textbook permutation powerpoints, Teacher book for Glencoe Geometry Integration Applications Connections, permutation problems for 1st grade. Trivia about fraction, Applying radical expressions, excel nonlinear equation, McDougal Littel answers to Algebra II questions, SAMPLE QUESTION PAPER OF CLASS VIII. Math lesson on adding and subtracting integers, math quize, graphing hyperbola. Basic integration rules ti-83 plus, algebra calculator steps free, free printable math grade 6. How do you do absolute value on a ti83, kumon worksheets, probability + problems +mathamatics, trigonometry charts, Heath Algebra exercise, simplifying cubed polynomials. Order of Operations with square roots+worksheets, mixed numbers and decimals for 4th grade, program to calculate the gcd, foil method algebra find the product worksheet. Comparing and scaling math book awnsers, ALGEBRA HELP AND SOLUTIONS, pre-algebra with pizzazz work sheets, steps in making a math poem, ged factoring inequalities. "difference quotient "simplification, key to algebra book 6 answers download, practise test papers for year 7. Reducing radicals with exponents + organizing, graphing calculator online, software ti89 boolean algebra, solve this polynomial equation. 
Math trivia free, rational expression online calculator, simplifying radical expression for addition and subtraction, glencoe algebra 1 book, free solving linear inequalities with two variables worksheets, simplifying complex rational expressions, printable worksheets for seventh graders. Answers for saxon algebra 1, mixed numbers as decimals, online solving adding subtracting integers. Basic mathematical skills with geometry 6th edition answer key, how to do long division on GED test, How many methods can you use to multiply?, online practice 8th grade algebra quiz and Algebra 2 for dummies, pre-algebra in first grade, Who invented highest common factors, 3 rd grade line of symmetry printable. Least common dinominator for 8 & 10, circle graphs percents worksheets, calculator with a fraction to a decimal key (free online), act trigonometry practice sheet. Use trig calculator online, free 5th grade lcm worksheets, how do algebra equations work, Scott Foresman Addison Wesley 6Math Workbook, free printable problem/solution worksheets for middle school. Math geometry trivia with answers, algebra revision papers, solve equation with distributive property worksheet, houghton mifflin homework cheating, answers for high school math books, positive and negative numbers- practice worksheet. Java time converter, solving quadratics on ti89, how do you know if an equation is liner, TI-83 graphing a function with a restricted domain, radicall expression solver. Riemann sum square root, how to factor quadratics calculator, sheets for basic algebra, know it notes for holt-pre algebra, differential equations solving, Sample Test on Parabolic Transformations, algebra with pizzazz answer key for page 200. Free of cost english movies download, free downloads for algebra calculators, exponents calculator, Glencoe/Mcgraw-Hill/mathematics/answer books. Beginning algebra 10th edition pratice test, pre-algebra lesson, free math worksheets one step equations, 4/5 ratios formulas. How to evaluate natural base expressions on a TI-89 calculator, download emulators for TI 84, Holt algebra 1 cheats, ALGEBRA WITH PIZZAZZ worksheets Creative Publications, greatest common factor finder, ALGEBRA "radical expressions" CAN BE USED IN daily life, Online Math Algebra 1 Generator. Fifth order polynomial root solver fortran program, Triangle Inequalities Worksheet, solve nonlinear homogeneous first order differential equation, adding integers games gr. 6, Multiply Radicals calculator, equations involving square roots worksheet. Math formalu for coverting circles to square feet, how I find y-intercept T-83, solving algebra, negative numbers ti 84, substitution method calculator. Examples of word problems with application of exponents, Fractions Least to Greatest, Business, Nelson Math Gr 6 What is a vertex, LEARN ALGEBRA. Math poems algebra 2, trig calculator download, Factoring Calculator, Expressions Worksheet, mixed number calculator, use a free calculator to add and subtract mixed numbers, algebra problem calculator with step-by-step explanations. Integers worksheets, order, how to write a mixed fraction percent as a fraction, factor fractional exponents "ti-89 titanium", ALGEBRA FOR KIDS. HOW TO SOLVE SQUARE ROOT PROBLEMS, 2nd order differential equation solver, Logical Reasoning Worksheets, solving equations by algebraic vs. graphical method, simplify cubed quantities, algebra with fraction slope. 
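A "greatest common factor finder" and the "least common dinominator for 8 & 10" phrase above both reduce to Euclid's algorithm: the GCF comes from repeated remainders, and the LCM (hence the lowest common denominator) is |a*b| / GCF(a, b). A small illustrative Java sketch, with names invented here and sample values taken from phrases in this section (361 and 76; 8 and 10):

```java
// Minimal illustrative sketch: greatest common factor via Euclid's algorithm and
// the least common multiple (lowest common denominator) built on top of it.
public class GcfLcm {
    static int gcf(int a, int b) {
        while (b != 0) {          // replace (a, b) with (b, a mod b) until b is 0
            int t = a % b;
            a = b;
            b = t;
        }
        return Math.abs(a);
    }

    static int lcm(int a, int b) {
        return Math.abs(a / gcf(a, b) * b);   // divide first to keep the intermediate product small
    }

    public static void main(String[] args) {
        System.out.println(gcf(361, 76)); // 19
        System.out.println(lcm(8, 10));   // 40 -- the lowest common denominator of 1/8 and 1/10
    }
}
```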
How to teach dividing integers, finding the roots of quadratic equation using matlab, real life scale factor examples, prentice hall mathematics pre-algebra, free steps and answers for math. Free math worksheets on solving equations-three step, print out test, ks3, how to subtract percent with decimals, adding & subtracting like and unlike surds, lesson on writing an algebra equation from a word problem. Write balanced equations and expressions for the dissolution of ionic compounds, multiplying decimals by 10 worksheet, "math test" 3rd grade Los Angeles, printable math sheets for grade 3 in ontario, solving nonlinear differential equation, logarithms ti-83. Balancing chemistry formulas with matrix algebra, factor polynomials online, compound interest free maths power point, free ebooks on accounting, Printable Practice Math Problems, exponents equations simplify, simplifying square roots in quadratic equations. Translation KS2 maths worksheets, solving a "first order linear nonhomogeneous" equation, addition fractions worksheet, free math book answers, transformation+Rotation + worksheet. T183 plus algebra calculator, solve quadratic equations using a table, sixth grade world book worksheets, maths is fun games taks, when was algebra invented, multipication third. Printable greatest common factor worksheets, completing and balancing chemical reactions, free worksheets test quizzes arithmetic geometric sequences questions and answers problems quizzes algebra 2, finding greatest common denominator worksheet, elementary statistics relation between permutation and combination. Common denominator worksheet, word problems in grade 5 exam paper, finding the LCD using a calculator, divisor calculator, free mathematics caculator. Homework help with beginning and intermediate algebra book, pre algebra made easy, two-step word problem printables, online form of scott foresman, diamond edition, sixth grade, faster way to learn algebra 1 for free, solving multiple equations in matlab, "texas taks test review". Algebra difference of square formula, Algebra 2 Graphing Calculator, practice hall mathematics course 2 workbook answers, free prinatble third grade algerbra, sample English Aptitude test papers, Precalculus Function Problems Solutions. Trigonometric poems, balancing chemical equations CHEATER, online science games for ninth standard. Circle graph worksheets, math practice chapter 11 section a review and prctice decimal and it 4th grade, automated math question answer generator, how to use algebra in excel, how to get slope of line on ti-84, fourth grade inequality worksheets, algebra trivia. Maths problem solver, pre algebra purple workbook, rational equation solver, answers to saxon algebra 1, decimal to fraction maple, McDougal Littell Algebra 2 note taking guide answers. TI-83 inverse log, free online math tutor 5th grade, hard maths question, factor equations online, solutions of second order differential equations with matlab, Books, equation for algebra for finding x and y. Solving equation squares worksheet, finding slope worksheet, free solving rational expressions, teaching permutations & combinations, problem solver for fractions, 3rd grade : operational mathematics, free worksheets for grades seven and eight. Ti 83 simulator download, exponents activity, how to multiply monomials in long form, rules for adding and subtracting integers, solve prealgebra equations, solving trigonomic equations. 
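Phrases such as "finding slope worksheet", "slope excel formula", and "how to get slope of line on ti-84" all come down to the two-point slope formula m = (y2 - y1)/(x2 - x1) with y-intercept b = y1 - m*x1. The short sketch below assumes the two x-values differ; the sample points are arbitrary.

```java
// Minimal illustrative sketch: slope and y-intercept of the line through two points,
// m = (y2 - y1) / (x2 - x1) and b = y1 - m * x1.
public class SlopeIntercept {
    public static void main(String[] args) {
        double x1 = 1, y1 = 3, x2 = 4, y2 = 9;  // assumes x1 != x2 (otherwise the slope is undefined)
        double m = (y2 - y1) / (x2 - x1);
        double b = y1 - m * x1;
        System.out.println("y = " + m + "x + " + b);  // y = 2.0x + 1.0
    }
}
```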
Online parabolic equation graph generator, mcdougal littell algebra 2, multiply and divide work. How to do grade 10 algebra, math quiz on paper, multiplying equations, factoring trinomial online calculator. How to do cube root on a calculator, grade 10 math algebra, adding and subtracting polynomials worksheet, factor polynomial with two variables, Order of operation math worksheet, adding and subtracting radicals with exponents. Multiplying mixed numbers with a TI-89 calculator, find solution to exercises from cost accounting, graphing quadratic in MatLab, Two Step Equations Math Worksheets, 7 grade algebra 1 glencoe worksheets, find the least common denominator algebra, adding subtracting multiplying and dividing integers worksheet. FRACTONS WORKSHEETS, calculator for adding negative , third grade math tutorial. Pre-algebra glencoe/mcgraw-hill workbook answers, standard form to vertex form, algebra least common multiple calculator, free math problems, solve helper, Multiply and Divide Rational Expressions, pattern worksheets decimal numbers. Hompack matlab, solutions to intermediate algebra textbook problems, complex numbers worksheet. Trick for solving cubed roots, algebra factoring diamond method, Simplifying Square Root Calculator, half life algebra problems, adding rational expressions calculator, free pre-algebra pretest. McDougal Littell Algebra 2 workbook answers, Algebra 1 book answers by McDougal Littell, fraction base number converter, cheat answers to algebra 1, free accounting worksheet, Domain Names. "Intermediate Algebra: An Applied Approach" 7th free download ebook, y7 maths free worksheet, math problems linear combination operations, "equation solver" inequality. Download teacher book of Holt, Rinehart and Winston Algebra 2, year 6 hard maths, polynomial equation calculator, eleven plus level 5 maths sample question, solving a nonlinear equation with one variable, learning basic algerbra, JAVA convert dec hex binary octal. Simplifying non perfect square roots, java simulation first grade, McDougal Littell Algebra 1 Concepts and Skills volume 1 answers, substitution calculator. Calculus test free printable, how to get simplified radical form, online scientific calculator with variables. Simple radical form solver, rationalizing the denominator, distributive property, sheet, latest math +trivias. Convertion of percent, convert decimals to square roots, CHEAT ON MATHS HOMEWORK CALCULATER, Free Algebra Help to Solve Problems. Activities and worksheets fractions ks2, simplify radical expressions before adding or subtracting, exponent variables worksheets. Middle School Math Printable Worksheets, annie david, radical simplifier calculator, usable online scientific calculator. Pacemaker prealgebra, mixed number to decimal table, alberta achievement testpapers, doing summation on ti-83, trigonometric graphic for dummies. How to find exponent variable, worlds hardest type of math, multiplication radical, polynomial problem solver. How to solve square roots, yr seven homework, powerpoint graph using slope intercept and t table. Graphing and simultaneous equations, how to write quadratic equation that has pair roots 3,-5, printable mental math worksheets, algebra grade 10 help, "integers" "division" worksheets, english aptitude test papers free download, gr9 fractions. Poem with math terms, equation factor solver, free online ti-83 graphing calculator. 
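"JAVA convert dec hex binary octal" is one of the few phrases here that names a language directly; the standard library already covers it with Integer.toString(value, radix) and Integer.parseInt(text, radix). A minimal sketch with an arbitrary demo value:

```java
// Minimal illustrative sketch: decimal <-> binary/octal/hex using the standard library.
public class BaseConversion {
    public static void main(String[] args) {
        int n = 2007;
        System.out.println(Integer.toString(n, 2));      // binary: 11111010111
        System.out.println(Integer.toString(n, 8));      // octal: 3727
        System.out.println(Integer.toString(n, 16));     // hexadecimal: 7d7
        System.out.println(Integer.parseInt("3727", 8)); // back to decimal: 2007
    }
}
```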
Summation proof for cubed, worksheets on integers, whole numbers and rational numbers, free mathproblemsolver, Sequences NTH Term, CHEATS FOR FIRST IN MATH, geometry how to rationalize a denominator, Adding and Subtracting Positive and Negative integers. Division of radical expressions, free printable practice scott foresman 4th grade science tests, printable solving two step equations, factor quadratics calculator, 6th grade math factoring LCM, pearson hall student access code advanced pre-algebra tools for a changing world, pre Algebra answers. Pre-algebra with pizazz, Changing radical expression to exponential, free online grade 8th math worksheets, accounting download, 7th grade math Probability homework help, hardest math problem in the Simple algebra worksheet, exponent calculator algebra, mathcad online helpprogramming, simplified radical form, writing algebraic expressions 7th grade math test, fraction to decimal formula. "balancing equations" math, physics homework solution ( thomson seventh edition ) homework, Solve the equation for x: 3x - 6y = 12, free Circles, Ellipses, Parabolas, and Hyperbolas softwares downloads, free general aptitude ebook, print a perimter worksheet for third grade. Free high school science worksheets, math/permutation and combination tips, java divisible by, rationalize denominator exercices, Dilations Worksheets Middle School. A maths test paper for grade9, free fraction worksheets, how to do algebra help, simplifying algebra calculator, lcm cheat sheet, quadratic equations +substitution. Gr.10 lesson factoring, factoring diamonds algebra, cheating ti-89, program to find Sum in JAVA, lesson plans on fractions for 1st graders. Gifts, powerpoint lectures basic college math, how to find slope with ti 83, what year was algebra invented?, alegra 2 polynomials, free math class for ninth grade. Solving single step equations power point, mathpower 8 worksheet, simplifying algebraic expressions worksheets, complex mathematics equations in visual basic 6. Maths lessons square roots, worksheets solving inequalities in two variables, common factors andfactoring by grouping, Hanna Orleans Algebra. Fraction formula, Learning Basic Algebra, mixed numbers word problems for 5th grade printouts, glencoe mathematics-algebra 1, solving equation for hyperbola, save formulas in ti-84 plus, Solving Equations Integers Addition and Subtraction. Green function adjoint operator nonhomogeneous, PROBABILITY OF PASSING CLEP TEST, online mathematical simplifier. Www.cliff notes algebraic fractions.com, free download aptitude solved question papers, rationalize denominator trigonometry, solve for multiple variables with multiple equations, worksheet for substituting value into expressions. Algebra help mult by common demoninator, high school home work sheet (wa), gcse examples, solve differential equation in matlab control systems, source code for java Fraction. How to solve algebra fraction problems free, multiplying 3 term polynomial cubed, saxon math Guided Practice worksheet, Math Trivias of Percentage, converting mixed numbers to decimals, free online Algebra ratios, permutation and combination lesson plans, free printable worksheets + 6-8 grade, civil engineering equations, Maths worksheet for class viii, linear algebra cheat, radicals decimals. 
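"Sequences NTH Term" above (and the "nth term calculator" and "nth term worksheet" phrases elsewhere in these lists) usually refer to the arithmetic-sequence formula a_n = a_1 + (n - 1)d. A tiny illustrative sketch; the sample sequence is made up.

```java
// Minimal illustrative sketch: nth term of an arithmetic sequence,
// a_n = a_1 + (n - 1) * d.
public class NthTerm {
    static int nthTerm(int firstTerm, int commonDifference, int n) {
        return firstTerm + (n - 1) * commonDifference;
    }

    public static void main(String[] args) {
        System.out.println(nthTerm(5, 3, 10)); // 10th term of 5, 8, 11, ... = 32
    }
}
```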
Math composition solver, write x^2 in radical form, rational expressions and equations CALCULATOR, write three quadratic equations with a, b and as rational numbers, an improper fraCTION THAT CAN TURN INTO A DECIMAL, mathmatical percentage. Online Mcdougal littell math course 3 teacher's edition, college english placement test free online tutor for california, printable KS2 symmetry sheets, where can I get free printable seven grade worksheets, calculate proportions. Polynomials and Polynomial Expressions calculator, subtract integers on a Number line worksheet, simultaneous equation nonlinear multiple regression, oracle function, graphing multivariable functions online calculator, casio fx-115ms factoring, multistep equation worksheet. "middle school pre-algebra textbook", "Linear quadratic equation" + "Roots", factoring third order polynomials. What is the 3rd and 4th programing language in computer, balancing chemical equations worksheet + answers, how to convert decimal to square roots, help with 5th grade integers. Help on expressions formulae and equations, coordinate plane functions worksheets, scientific notation solver, type in and solve +eguations, pre-algebra proportion calculations free help. Algebra pdf., Example Of Math Trivia Questions, solving equations worksheet. Glencoe algebra 1 worksheets, math vertex worksheet, converter square root to radical, 3D worksheets for grade 1, notes for permutations and combinations for middle school math, sample objective papers for aptitude with answers, visual instructions for finding square inch triangle equations. Square root java whole number, what type of calculator do i use to solve algebra equations?, ALgebrator, college algebra final to print, powerpoint for teaching equations ks3, mcdougal littell math how to solve problems. Line graphs worksheets, cost accounting homework answers, slope of quadratic, calculator for radical problems, quadratic formula calculator, bpo aptitude test papers with solutions, free online calculator for adding subtracting fractions. Trig equation solver, free printable worksheets ks3, binomial expansion applet, exercise in calculating LCM, cheat your math homework with this pi calculator, how to find lowest common denominator in calculators casio. TI 84 practice problems, solving linear and nonlinear equations simultaneously, maths for class VIII, how to write the inverse of a parabola, solve quadratic equations in matlab, online expression calculator, emulator ti84 silver. Rewriting distributive property, TI84 emulator, Pre-Algebra answers, lowest common denominator fraction tool, Cubic Root on a TI-83, online graphing calculator find axis of symmetry, algebra quadratic diamond box. Combining inequalities worksheet, algebra equation rules for powers, sample math thought problems with per cent and proportions, rudin analysis ch. 7 solution. Holt mathematics problem solving workbook, scale math problems, smiplify terms worksheets, math expand a function cubed, slope excel formula. Linear equations worksheets, how algebra was invented, local literature of algebra, fraction+first+grade+printables, algebra made easy, beginners algebra equations reciprocal, quadratic equation with complex coefficients. Online practice 8th grade algebra quiz, Free Math Problems, soft math, how to cheat with a TI84 plus calculator. 
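"Quadratic formula calculator" and "quadratic equation with complex coefficients" both appear above. The sketch below applies the quadratic formula to ax^2 + bx + c = 0 for real coefficients with a != 0, and reports complex roots when the discriminant is negative; the class name and sample equations are invented for illustration.

```java
// Minimal illustrative sketch: quadratic formula for ax^2 + bx + c = 0
// (real coefficients, a != 0), with a complex-root branch when b^2 - 4ac < 0.
public class QuadraticFormula {
    static void solve(double a, double b, double c) {
        double discriminant = b * b - 4 * a * c;
        if (discriminant >= 0) {
            double root = Math.sqrt(discriminant);
            System.out.println("x = " + (-b + root) / (2 * a)
                    + " or x = " + (-b - root) / (2 * a));
        } else {
            double real = -b / (2 * a);
            double imag = Math.sqrt(-discriminant) / (2 * a);
            System.out.println("x = " + real + " +/- " + imag + "i");
        }
    }

    public static void main(String[] args) {
        solve(1, -3, 2); // x = 2.0 or x = 1.0
        solve(1, 2, 5);  // x = -1.0 +/- 2.0i
    }
}
```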
Search Engine users found our website yesterday by entering these keywords: • example of radical expressions • system of equation ( substitution) cheating guides • solve binomial • cube roots on ti 83 • directions for TI-83 calculator cubed roots • how to do rational expressions and their simplifications • "prentice hall algebra 1 answer key online" • simultaneous equations cheat • equation factoring program • prime factorization for ks3 • square root to the third • factoring polynomials GUI • convert decimal to fraction in matlab • math problems ( mixture, work and money problems with solutions) • examples of trogonometry functions • algebra 1 volume 2 book answer key • HOmework Help Beginning Algebra Prentice Hall • exponent worksheet • Simplifying Radical Expressions calculator • math cubed ottawa • basic algebra "rewriting formulas" • matric math • percent to algebra • printable activities for teaching probability to 3 graders • www,multiplacation.com • pre algebra definitions • sample java programs for multiply 2 numbers • solving multiple equations in multiple variables • algebra 1 Prentice-Hall • solver for real exponents and exponential functions • promotion point calculater • percent diffrence formula • challenging maths for Year6 • free stats past papers • Credit Score • help with probability • free worksheets for kids math multiplying dividing fractions • factor cubed polynomials • base convert+java+code • excel solve multiple equations • grade eight algebra problems of ontario • "simplifying radicals", "lesson plan", radicals in "denominator of a fraction" • printable math worksheets finding slope • square root on ti-83 • geometric sequences revision gcse • partial fraction lessons • how to solve cubed roots • how to solve a hyperbola • lesson plans for adding/subtracting integers • solving for a given variable worksheets • boolean algebra calculator • Statistics for beginners lesson ppt • ti-83 graphing and finding domain program • difference between functions and linear equations • grade 2 math assigment • How to solve aptitude questions • number pattern worksheets • solve algebra • aptitude questions downloads • calculating statics question with method of substitution • biginteger to convert string to int • Ti83 program cubic equation • Printable 1st grade math practice • 6th grade sat test practice • java calculator linear combination • simultaneous equations in matlab • square root property calculator • the nuber or expresion inside a radical symbol • evaluating non exponential Expression math • algebra plot and whiskers • 9TH GRADE MATH SHEET • mcdougal littell practice workbook math course 3 answers • nth term worksheet • A/L past papers pure 'applied maths solved questions • TI-85 logarithmic • boolean algebra online calculator • free ratio and proportions printouts • SOLVING FOR X WITH MULTIPLE VARIABLES • Math proportions worksheets • online maths tests ks3 • ti89 chem • www.mathalgebra.com • simplify radicals online • square root properties • third root • free online math tutor in measurement • free problem solvers on logarithms • nature of roots- quadratic discriminant ppts • basic algebra practice test • ti-84 calculator emulator • usable online fraction calculators • solving second order differential equations • synthetic division worksheet • online algebra solver • free accounting books • factoring polynomial with cubed term • Multiplication of Monomials and Binomials worksheet • adding fractions with uneven • program quadratic ti-84 • grade nine printable 
polynomial worksheets • algebra help story • Student Solutions Manual For Winston's Introduction To Mathematical Programming • algebra with pizazz • chapter 9 resource book algebra a • math trivias from USA • indiana 6th grade math problems • HOMEWORK HELPER.COM • Examples of Quadratic Equations • factoring cubes • ti-89 simplify complex numbers • grade 6 worksheets on decimals • Who invented the highest common factor • ontario grade seven math worksheets • math worksheets/6th grade • how convert decimal to fraction by maple • intermediate algebra quizzes • homework solver polynomial equations • percent proportion lesson plan • ti89 determinants • online t83 • Decimal into a square root • MIDDLE SCHOOL MATH WITH PIZZAZZ! bOOK D • finding the common denominator • hyperbolas in everyday life • easy trinomial calculator • download 5th grade math text book • pythagoras calculator • algebra 1 worksheets • add subtract division decimals fractions maths sheets • fraction word problems for 6th grade • word problems factoring worksheet • mathematical poem • answers for algebra 2 high school book • algebra division calculator • PreCal solver • 2nd order homogeneous difference equation • Travel • how to put pdf on ti 89 • square root free worksheets • help sequences and formulae ks4 • ppt on fluid mechanics • formula of parabola • saxon activity master rectangular coordinates page 76 • free math solver mathematics • multiplying and dividing decimals • Cheat Cheat For Introduction To Algerbra • algebra step-by-step • algebra introduction elementary worksheets • two step equations • solving equations lab activity • non perfect square roots worksheets • programming a ti-84 • working with numbers algebra answers • Algebra with Pizzazz! answers • math gragh • converting decimal to Octal in java • how to write a java conversion calculator for phone • MATH TRIVIAS • step by step steps for learning algabra • cube root, TI-83 calculator • how to use solver on ti84 plus • free aptitude downloads • how to solve compund interest fast • powers of monomials solver • square roots with exponets • Math practice book for 1st grader free online • maple equation flow • free triangle worksheets • gcse maths question bank+free • calculator for adding polynomials • fraction caculator • algerbra comparisons • practice problems for solving by elimination • LCM lesson plan • how to do algebra math • free online math test of LCM • solve the differential equation xy''' - (y')4y = 0 • find the slope calculator • mcdougal littell algebra answers • math pattern functions worksheet • cube roots ti-83 • fun ways to teach simultaneous equations • college algebra homework helper • factoring cubed numbers • maths KS2 scale questions • log in ti different base • Bankruptcy • online statistic graphing calculator • how to find the heat usage math equation • solve for least common denominator • Canada High School Algebra solving radicals • worksheet for multiplying and dividing integers • Radical form calculator • math trivia • fraction least to greatest • how to pass test for electrician exam in chicago • fourier graphing calculator java • how to subtract integers calculator • trigonometry special values • solve the equations by factorizations • rate of change (slope) on a house • ONLINE CALCULATOR WITH SQUARE UNIT • sample math problems for 7th grade • aptitude questions and answers download • online math expression simplifier • factoring quadratic equations a isn't 1 • equation solver 4 unknowns • surds on ti-84 • how do you simplify 
squares • algorithm worksheets • mathematics/polynomial,rational,root functions • math, hyperbola online • factoring cubed quadratics • aptitude ebook free download • "nonlinear" "simultaneous equations" "ti 89" • free grade 2 math skill sheets • how to solve aptitude questions • example problem of algebra • 16.3 Colligative Properties of Solution wkst "answers" • kumon answer books • solving integers worksheets • trig graphing, applications, free • how to do radical expressions on a calculator • how to convert a mixed radical into a number • solve for least common denominator algebra • solve a simultaneous system of equations in excel and regression • radical expression solver • learning algebra online • ration formula • graphing equations with square roots • how to sum numbers in java • algebra 2 glencoe answers • saxon algebra 1 answers • mathmatical equation • mixed review using scientific calculator math worksheet • graphing linear equations worksheet • Math Problem Answers to Algebra 2 • NEED HELP IN SIMPLIFY DIVISIN IN MY MATH • free download advanced accounting • aptitude+ebook+free+download • math question solvers • free woksheets graphing pictures • answers to even questions in algebra 2 textbook • rules for adding positive and negative numbers • finding the scale factor • trapezoid volume backwords calculations • graphing calculator with "multiple variables" • standing line worksheet for pre primary kids • prealgebra order of operations free assignments • solve math problems showing work cheat • gr.9 math practice • mathematical equation joke • square root test kids • College Algebra For Dummies • YR 8 mATHS • convert mixed number • difficult mathmatical equation • matlab code simpson's one third • hard math equations • usable online calculator • algebra 1 texas addition book online answers • math terms poem • help with repeating permutations & combinations • solvig algebra in excell • calculate lcm • glencoe algebra one • free homework solver for calculating y-intercept and slope • Prentice Hall Math Book Answers • multiplying rational expressions step by step • maths powerpoint compound interest free • lesson plan: rational expression and long division • graphing quadratic functions in vertex form • permutation and combination problems • elementary math trivia examples • interpolation program for ti 83 • THE EASIEST WAY TO FIND THE LEAST COMMON MULTIPLE • simplifying radical expressions calculator • w.w.w mathematic.com • free worksheets on adding subtracting multiplying and dividing integers • 8th grade algebra elimination method • Mifflin Vocabulary cheat • maple product into sum conversion • Free Online Math Calculator • basic geometry solver • edhelper dilation problems • using TI 83 to solve system of linear equations • math combination software free • t-tables+algebra • permutations +vba • chart examples inverse property kids • hardest algebra equation • unit rate math worksheets for middle school • online cube root calculator • Graphing calculator T183 instructions • answers to fractions turning into decimals • converting a decimal to a mixed number • adding integers formula • math ratio puzzle worksheet • math MCQS • equation solver exponents • Dividing Quadratic • solving complex number equations polar • solving quadratic equation in Matlab • Life Insurance • Simplifying root expressions • download sample aptitude questions • solve a system of equations using substitution worksheet • how do I multipy fractions • geometry teachers edition to McDougal Littell even • 
finding slope on Ti-83 • online square root calculator • Algebra and Trigonometry: Structure and Method, Book 2 • free printable math papers for college students • maths homework factoring and multiplying out linear • Exercises-Algebra • college algebra exercises • multiplication of radicals • second grade math/Printable Pictograph Worksheet • Exponent Rules Worksheets • solving integers multiplying,dividing • vertex algebra • GCF and LCM finder • "coefficient of variation" equation ti83 • 7th grade algebra practice test • multi-step inequality calculator • free english papers ks3 • free multi-step word problems for fourth grade • add/subtract/ fractions worksheet • adding integers worksheets • solver "3rd order polynomials" "mathematica" • houghton mifflin 3rd grade math worksheets • algebra tiles software for math • phoenix math 208 aleks quiz answers • how to create algebra fonts, exponents and angles • holt geometry cheat • product and quotient properties of logarithms lesson plans • numeracy sats questions scales • graphing linear inequalities using online graphing calculator • MATH worksheet for ks3 • synthetic division - Free printables • simple fraction worksheets • high school math aptitude free online • solving proportion worksheets printable • non homogenous differential particular solution • add, subtract, multiply, divide, whole numbers worksheets • trigonometric calculation free software • second grade math multiplication square root • solving equasions • math worksheets turning decimals fractions • free sample math test questions for the Iowa test of Basic Skills • quadratic functions of binomals • abstract algebra notes study guide online • Glencoe Math Books • how to convert percentages into reduced fractions • free algebra worksheets with solutions • india grade 8 maths problems • chicago advanced algebra • math eqautions • how to solve a cubic function ti-89 • vertex form (algebra 2) • software programs that help teach algebra • numbers that are 1 more than a multiple of 5 • decimal placement worksheets elementary • worksheet rotation maths • fraction equation • check your algebra problems free • online polynomial calculator • online calculator radical • mathmatical fractions • inequality calculator online • math solving software • fractions finding least common denominator calculator • how to convert square root units • solve simultaneous equations with exponential function • coordinate plane worksheet printable • ti-89 graphing calculator online • permutations made simple for kids • graphing worksheets for 6th grade • Addison-wesley.Polynomials • binomials problems • "percents to fractions" pre-algebra worksheet • algebraic fraction division variable fraction calculator • college algeba calculator online • math multipication problem for 4th graders • 4th grade fraction work sheets • math test and integers • TI 83+ Polynomial Long Division Program • easy way to reduce a fraction • equations fractions • evaluation of expression calculator online • cpm classwork • practice square root story problem • printable easy worksheets on quadratic equations • pre algebra calculator • turn decimals into fractions calculators • ti-89 and the impulse function • online balancing equation • order of operation assignment prealgebra • math caculators • worksheets on greatest common factors • free software to factor polynomials • cheating on long division need answers fast • iowa testing worksheets or samples • apititude question and answer • examples for dividing of integers • algebra 
tiles free software • free GED mathematic worksheets • examples of kumon • second hand graphic calculator • two biggest factors calculator • hands on linear equations/slope • math test year6 • indefinite integral drill worksheet • transforming formulas algebra • multiplication abstract reasoning worksheets • "systems of equations in three variables" • c aptitude question • variable worksheets • MATLAB calculate wronskian • Equations with fractional coefficeints worksheets • algebra software on my phone • ontario math workbook sheets • math b regents prep: cube roots • algebra radical problems solver online • simple fraction worksheets • help with lowest common denominator worksheets • harcourt math cheats • printable angles sheet for grade 6 • free lattice multiplication worksheet • example of math poems • cheat on your algebra homework • add polynomial calculator online • polynomial solve calculator • math worksheets on solving equations wtih inequalities • free down load of cost accounting • solving quadratics with ti-89 • Solve the equation using fractions • 6th grade adding, subtracting, comparing, and ordering decimals • pre algebra order of operation properties • texas instruments & changing decimal to a fraction • Glencoe Physics Answers • downloadable yr 1 maths for free • how to simplify radicals + tips tricks • Free online Math sums for grade 9 students • simplifying cubed radicals • Proportion worksheets • rotation worksheets math • probability middle school math with pizzazz book e • online ti 83 calculator download • solving linear systems using elimination calculator • math activity gcf lcd • math games simplifying rational expressions • how to cube root on a calculator • integers fractions subtracting • help solve equationto mixed exercises • College Algebra Tutors • second order differential equATION SOLVER • algebra equation writers • easy fun ways to learn algebra • solving high order equations • algebraic equations in real life • multivariable precalculus problems • calculator for working with integers • Algebra Solver • math tutor logarithm free • ordinary decimal notation • online math worksheets • Free Printable Aptitude Tests • mcdougal littell chapter 8 geometry worksheet answers • write a mixed fraction as a percent • Percentage formula • ti84 plus statistic • 4th degree polynomial graphing calculator • 5th grade math word problems • simplifying algebra problems with square roots • mix numbers • solving difference equations 2nd order • completing the square powerpoint • free adding and subtracting integers worksheet • T1-83 free download programs • Bittinger. Calculus and Its Applications. 9th edition Exponential functions notes • nonhomogeneous first order nonlinear differential equations • differential equation solver ti89 • how do i cheat on my math homework • G.E.D. 
algerbra training • quadratic formula step by step • grade 11 physics (algebra)online help • Heath Algebra 1 Syllabus • factoring fraction exponents • Aptitude question • scott foresman online free quiz • trig solver online • linear equations conversions elementary statistics • "CUBE ROOT" CALCULATOR EXCEL • free 5th grade worksheets • grade 6 math +trivias • worksheets on irregular figures • algebra structure and method (answers) • least common multiple of 46 and 27 • algebra 3 and trig problem answers • simplifying algebraic expressions with polynomials calculator online • glencoe pre-algebra answers • what does regular mean in maths ks2 • the definition of adding,subtracting,multiplying, and dividing integers • 4th grade fractions worksheets • parallelogram rule wave equation • steps to finding the line of regression using a graphing caluculator • parallel and perpendicular slopes worksheets • how to take a fraction root • Elementary Math Worksheets year 8 • Help with Algebra 1 • matlab ode45 second order first order • gnuplot + polar plot examples • algebra 2 practice 2-4 answers workbook • least common denominators with algebra • 4th grade lcm and gcf practice • chinese math poem • High Speed Internet • math worksheet year 9 • identify base of a statement algebra • evaluate algebraic expression worksheets • converting decimals to mixed number in simplest form • printable algebra problems • Free Algebra Homework Solver • solve radical expression with a 5th index • Sample test about Division of Polynomials • free download aptitude test papers • variables and expressions printables • pictograph printables • how to convert a percentage of amount • simplifying radicals with exponents • slope formulas • glencoe mcgraw-hill algebra 1 answers • 4th root solver • Second order differential equations in MatLab • free math problem solver • otto linear algebra • linear equalities • free downloaded Pre-Algebra tutorial in Flash • Free Calculators for Dividing Monomials • answers for algebra 1 book • "poem mathematics" • 3x3 for ti84 program • do a maths test online (year 8 work) • prentice hall math conversions • (solve for multiple variables) calculator • radical expressions used in your daily life • sample problems permutation • adding double negative integers worksheets • chemistry answer from prentice hall • answer to basic math tutor exercise 7 • Glencoe mcgraw-hill Algebra 2 answer book • Algebra Poems • free algebra review • first in math cheat • simple acceleration math problems worksheets • Math Problem Solver • "radical form calculator" • free math test • solving third order equations • TI program to factor quadratic equations • solving systems of equations in excel • ORDER OF OPERATIONS WORKSHEET, 5TH GRADE • free downloads of eigth grade algebra • T1-83 plus guidebook • accountancy books free • foiling' calculator • printable free two step equation worksheets • graphing linear inequalities online calculator • solve a system of two equations excel • basic algrabra • free online math problem solvers • Iowa Algebra Aptitude Test • glencoe pre-algebra worksheets • printable geometry pages for 3rd grade • sums worksheets • conversion table of square root fraction • mathematics combinations • answers for © Creative Publications • TI-89 online • solving linear fractions on the TI-89 titanium • adding integers for 7th grade worksheets with answer key • graphing linear equations on the TI-83 plus • volume of a cube model second grade homework • holt algebra 1 Interactive Answers and Solutions • 
worksheet for addingand subtracting mixed numbers • node/21 • fluid mechanics lesson plans • how to multiply radicals with whole numbers • worksheets + trignometry+gcse • Multiplying & dividing decimals game • free kumon • simultaneous equations calculator • RUNGE-KUTTA FOR SYSTEMS OF DIFFERENTIAL EQUATIONS matlab • gmat aptitude questions • alegrabra 1b • MATH TRIVIA QUESTIONS • grade 9 algebra adding and subtracting exercises • Prentice Hall Chemistry Answer Key • rational number exponent solver • subtracting quadratic equations • lp-problem constraint • percent equations • free math proportions worksheets • simultaneous equations games • examples of fractions in order from least to greatest • parabola printable • learning algebra poems • Exponent and square root problems • boolean simplifier program • find roots polynomial ti 83 • variable roots ti-83 • percent to ratios worksheet • polynomial tests in algebra • worksheets for adding, subtrating, multiplying, and dividing integers • pre-alg book • mcdougal littel algebra 1 chapter 3 game • multiple equation solving • rules subtracting two integers • convert decimal to hexadecimal using ti-89 • ladder method of prime factorization • aptitude papers with answers • radical expressions in real life • Answers to Algebra 2 Practice Workbook • equations 3 unknown factors • ti-86 emulator download • ti-84 programming functions • lowest common denominator app • matric calculator • lowest common denominator, exercises • how to simplify fraction equations • mixed fractions to decimal points • fourth order equation solver • printable blank polar coordinate graph • Free Algebra 2 Answers • algebra flippers • fractions and percents in order from least to greatest • balancing equation online calculator • adding and subtracting game • quadratic equasions • multipication problems math for 4th graders • simplify + algebra • finding LCM in rational expressions • ti84 programs simultaneous equations • 6th grade algebra worksheets • radical expression to exponential expression • change fractions into decimals calculator • TI-89 FREE DOWNLOADS • simplify square roots • Algebra 2 vocabulary answers • log() ti-89 • adding integers big numbers • 'transformation free worksheets' • free manual on introduction to cost accounting • online math answer books • Electrician’s Math and Basic Electrical Formulas pdf • math help for cubic relationship • free online tutorial algebra 9th -12th • Contact Lenses • Insurance • addition to 18 worksheet • scientific notationmath problems • apptitude question &answer • video information about radical expressions adding,subtracting,multiplying,divide • algebra 2 answer key glencoe • simplify to standard form polynomials • inverse laplace ti89 • factoring by cubes • radical of negative one equals i • geometry for third grade • ti-89 programs for civil engineering • simplify algebra • algebra 1 work problems • free math cheats • Education • how to solve algebra fraction problems • how do you solve algebra equasions • McDougal Littell algebra math books • Gcse vector questions to print off • lowest common denominator+calculator • first order partial differential equation • algebra parabola • solve boolean algebra online • 9th life free game • LCM answers • Algebra 1 Book answers • code for grading program in Java • free algebra problem solving worksheets • gcse maths sheets on basic algebra • 5-7 glencoe/ mcGraw rational exponents • adding and dividing integers • Complex math equations • how to solve simple quadratic equations using 
the square root property • algebra excel lessons • introductory and intermediate algebra free help • free synthetic and long division solver • quadratic system calculator • prentice hall mathematics: ALGEBRA 2 cheats • teachers solutions manual for Contemporary Abstract algebra by gallian • printable worksheets for area • free printable algebra1 practice sheets • factor polynomial with two variables algebra 2 • math directions • radical expressions solver • Prentice Hall algebra 1 interactive book plug in download • how to solve associative multiplication problems • online fraction calculator with exponents • addition and subtraction of fraction worksheet • LCM, GCF exponents and factoring 5th grade math • how to solve first order partial differential • why is it important to simplify radical expressions before adding or subtracting • prentice hall algebra ebook • Greatest Common Factors of Monomials calculator • free excel basketball stats • quadratic formula decomposition trinomial • trigonometric graph worksheets • adding algebraic fractions worksheets • online factorer • free algebra homework solver download • saxon algebra 1 book answers • find the answers to algebra 2 square root problems • define parabola • online calculator for common factors • harcourt math cheat sheets (4th grade multiplication) • math homework answers to square products • ti 83 quad • value logarithmic form calculator • DVD Rental • pdf ti-89 • t-86 calculator run regression • algebra word equation cheat • printable square root tables • decimal computation problems for fifth grade • free notes cost accounting • finding scale factor calculator • Casio fx-300w - Calculator Help • how to work factorial button on TI83Plus calculator • use a online calculator to convert decimals into fraction • simplifying radical calculator • TI-89 solve ODE • how to do radical expressions on a scientific calculator • Free Online Algebra Solver • sample paper for class 7 • variable exponent • cognitive tutor cheats • free printable math worksheets angles triangles • powerpoint presentation on simplifying rational expressions • application of area parabola hyperbola • adding subtracting negative numbers • factoring monomials calculator • Fundamentals of Physics 7th edition answer key • worksheets for adding and subtracting fractions • programming quadratic equation into calculator • quadratic equation generator • convert fractions to negative exponents • algebra book answers • algebra games for kids • "grade 11" "physics examples" • chemistry equation balancer ti-83 • adding positive and negative numbers worksheets • free printable real estate math formulas • free printable basic chemistry and algebra problems with answers • free worksheets over dividing decimals • ti84 silver emulator download • dividing algebraic • how to multiply and simplify square roots • adding positive and negative integers worksheet • algebra for year 8 canada • mathamatics solutions • Simple Math trivia • history of mathamatics • www.math.glencoe.com teachers book • adding and subtracting polynomials worksheets • what is the formula for figuring out square foot of a right hand triangle • aptitude question and answer • 5th grade worksheets on integers • adding, subtracting, multiplying, and dividing negatives • problem solver math • trig cheat for ti 83 plus • algebra with pizzazz creative publications • geometry help/dilation • solve boolean algebra • mathmatics formulas • grade 11 algebra+fraction review • online texas graphing calculator • how to substract 
fractions • TRANSFORMATION+ ROTATION + wORK SHEET • trivias in math • Free Math Problem Solver • why is it important to simplify radical expressions before adding? • how to multiply a percent by a decimal number • enter graph points to find an equation calculator • ADVANCED ALGEBRA HOLT • mathematica second order linear equations • WRITE IMPROPER FRACTION AS DECIMAL • multiplying expressions calculator • finding square root of variables • hard maths equation • online scientific calculator & progression • worksheet on slope • solving systems using substitution printable worksheet NO FRACTIONS • free math answer • quadratic calculator • multiplying trinomials cubed • order of operations solvers • Aptitude questions based on ratios and proportions • online usable graphing calculator • factorising hard algebra equations • factor equation calculator • solver differential equation SYMBOLIC first order • log on TI89 • ti calculators free online • application of Radical expression • explain step by step completing the square - quadratic equations GCSE • how to add subtract multiply divide teaching resources • what does a small number represents before the symbol of a square root • how to slove radical expression with the 5th root • quadratic equation roots finder • solve for the limit of a function with a radical in the denominator • square and cube root charts • iowa algebra test • free maths test year 8 • radical expressions for dummies • solving inequalities algebraically • grade 8 Spelling worksheets • fraction equation • multiplying,dividing,and combining integers worksheets • ti-84 programs trig functions • physics problem solver cheat • grouping equivalent fractions • partial differential equations neumann uniqueness • heat transfer worksheet for kids • scale factor games • Algebra: Structure and Method : Solution Key • how to multiply square roots calculator • Fitness • calculater games • ebook laplace transformation.pdf • MATH WORKSHEETS FOR THIRD GRADERS • convert parabola parametric linear equation • rudin solutions chapter 8 • dividing fractional exponents ti-89 • factoring binomials worksheet • McGraw-hill Mathematics answer key online for free • dividing polynomials show steps • multiplying fractions worksheet KS3 • free calculus problem solver • cost accounting for dummies • ti 83 plus rom download • sample modules in english for grade six • excel third order polynomial • algebraic equations of quadratic type with factions in exponent • printable GED math exercises • CALUCLATOR, ALGEBRA 1, SIMPLIFY RADICAL EXPRESSIONS, DIVIDE • how to factor equations with a TI 84 • online calculator with square root • balancing equations worksheets • Why is it important to simplify radical expressions before adding or subtracting? How is adding radical expressions similar to adding polynomial expressions? How is it different? 
• practice for subtracting integers for fifth grade • dividing radicals calculator • factoring practise sheet • Algebra Solver softmath free • online scientific calculator for algebra • TI-83 PLUS log instructions • aglebra for dummies • grade 5 math - add/subtract positive and negative integers • prealgebra scale • ontario textbooks • MAth Trivias • ks2 maths graph worksheets • variables worksheets • combining like terms worksheets • 7th grade algebra1 books that are used in maryland • bbc test paper math • algebra with pizzazz worksheet answers • year 6 worksheets • kumon download • key to algebra answers • solving square roots with exponents • algebra textbooks for louisiana state • adding and subtracting fractions • polynomial factoring with square root in equation • cost accounting free study materials • how to find the gcf l method • TI-83 complex math programs • algebra factoring game • grouping calculator • factorization calculator • Basic Math Study Worksheets with Solutions • coordinate plane worksheets • rationalize denominator worksheets • free online algebra calculations • Discrete Mathematics - Absolute Value Function (.ppt) • balance equations solver • beginner fractions

Yahoo visitors came to this page today by using these keyword phrases:

[multi-column table of additional search phrases of the same kind as the lists above; the column layout was garbled in extraction and the individual phrases are not reliably recoverable]
equations on │finding the slope questions │algebra with pizzazz.com │free algebra calculator for mac │ │exponents │calc │and answers │ │os x │ │percentages formulas │how to get the square root │integration by parts solver │algebra homework help problem solvers │algebra substitution method │ │ │using a ti-83 calculator │ │ │ │ │matlab, calculate grade point │free online mcdougal algebra│sample questions on the iowa │proportions worksheets │printable worksheet on how to │ │ │2 textbook │algebra aptitude test │ │find slope │ │scale fators │parabolas applications real │ti84 basic thermochemistry │RSA demo java applet │aptitude online tests questions │ │ │life │program │ │free download │ │ │rules for adding, │finding slope of quadratic │ │ │ │radicals problem solver │subtracting and multiplying │graph │worksheet on finding a common denominator to add fractions │CPM math homework answers │ │ │negatives │ │ │ │ │Liner relationships algebra and │ │Adding and Subtracting │ │ │ │tutoring │math practice scale factor │fractions with dif │glencoe mathematics algebra 1 teachers addition │Florida OnLine Algebra II │ │ │ │denominators │ │ │ │binomial theory │malaysia algebra 1 │Area worksheets for kids │write the simplest polynomial with the given roots │solving two-step inequalities │ │ │ │ │ │hands-on-activity │ │rate and ratio worksheet │radical calculators │ti89 laplace │simplifying complex numbers │free balancing equations program│ │math percentage formula │simple subtraction worksheet│McDougal littell free online │Exponent and square root word problems 9th grade │free common denominator fraction│ │ │borrowing │algebra 2 textbook questions │ │subtraction worksheets │ │third grade worksheet │holt algebra 1 book │what is the reciprocal of │ │foiling fractions │ │ │ │0.375 in fraction? 
│ │ │ │solving equation in terms of │Ti-83 graphing pictures │matlab hexadecimal octal │Greatest Common Factors for 5th grade │online polynomial factoring │ │another variable │ │binary │ │ │ │free printable worksheets for 7th │algebra for beginners │free 9th grade worksheets │coordinate points worksheet grade 4 │least common factor fraction │ │grade │ │ │ │calculator │ │shade in the decimal worksheet │Algabra toutering │ti-84 plus emulator │dividing monomials calculator │Programming quadratic equation │ │ │ │ │ │into TI89 │ │mcdougal littell algebra 1 math │"thinking │exponents and alegbra │Easy Algebra Questions │sample aptitude test papers │ │ │mathematically"+ebook │ │ │ │ │root word for tutor tut │mathematical aptitude │solving a binomial problem │heath algebra I chapter │square root and cube roots │ │ │questions and answers │ │ │worksheet │ │synthetic +divison calculator │symmetry activities for 6th │how to learn algebra fast │algebra solver │grammer 60 common usage │ │ │grade math--free worksheets │ │ │worksheet │ │free sats ks3 papers downloads │1st grade printables │free taks science worksheets │printable math property worksheets free │free download for fluid │ │ │ │ │ │mechanics book │ │ │math help limits │calculator for turning │ │ │ │college algebra trinomials │rationalizing numerators to │fraction into decimals │4 SIMULTANEOUS equation solver │prime factoring ti-84 │ │ │obtain equivalent expression│ │ │ │ │ │solving simultaneous system │graphing calculators ti 83 │ │ │ │Calculators for finding slope │of equations with excel │instructions trace y │KS3 Maths Worksheets Printable │algebra 1 answers │ │ │solver │ │ │ │ │calculators for factoring equations│biology online exam │free algebra tutor programs │rom image emulator ti │maths workwsheet for grade 7 │ │ │ │download │ │ │ │McDougal Littell Pre- Algebra │least common denominator + │keystroke guide for scatter │''Will learning Kumon style of doing Maths help with British GCSE ? │Pre-Algebra lesson │ │Answer Sheets │calculator │plot graphing for T1-89 │'' │ │
{"url":"http://softmath.com/math-com-calculator/adding-matrices/algebra-high-school-freshman.html","timestamp":"2014-04-19T01:53:59Z","content_type":null,"content_length":"174771","record_id":"<urn:uuid:c1715887-7297-4c9e-a651-7914f161b62d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you spell minimum? The correct spelling is: minimum. A common misspelling of the word is "minumum". n., pl. -mums or -ma (-mə). 1. a. The least possible quantity or degree. b. The lowest degree or amount reached or recorded; the lower limit of variation. 2. A lower limit permitted by law or other authority. 3. A sum of money set by a nightclub or restaurant as the least amount each patron must spend on food and drink. 4. Mathematics. a. The smallest number in a finite set of numbers. b. A value of a function that is less than any other value of the function over a specific interval. adj. Of, consisting of, or representing the lowest possible amount or degree permissible or attainable. [Latin, from neuter of minimus, least.]
{"url":"http://www.how-do-you-spell.com/minimum","timestamp":"2014-04-18T05:34:10Z","content_type":null,"content_length":"6431","record_id":"<urn:uuid:e13531ed-bfc0-4c81-a79c-a55305f26d89>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalised Quaternion Algebra over K - Dauns Section 1-5 no 19
August 15th 2013, 04:56 PM

In Dauns' book "Modules and Rings", Exercise 19 in Section 1-5 reads as follows: (see attachment)

Let K be any ring with 1∈K whose center is a field, and let $0 \neq x, 0 \neq y \in$ center K be any elements. Let I, J, and IJ be symbols not in K. Form the set K[I, J] = K + KI + KJ + KIJ of all K-linear combinations of {1, I, J, IJ}. The following multiplication rules apply (these also apply in my post re Ex 18!):

$I^2 = x, J^2 = y, IJ = -JI, cI = Ic, cIJ = JIc$ for all $c \in K$

Prove that the ring K[I, J] is isomorphic to a ring of $2 \times 2$ matrices as follows:

$a + bJ \rightarrow \begin{pmatrix} a & by \\ \overline{b} & \overline{a} \end{pmatrix}$ for all $a,b \in K[I]$

I am not sure how to go about this ... indeed I am confused by the statement of the problem. My issue is the following: Elements of K[I, J] are of the form r = a + bI + cJ + dIJ, so we would expect an isomorphism of K[I, J] to specify how elements of this form are mapped into another ring, but we are only told how elements of the form s = a + bJ are mapped. ???

Can someone please clarify this issue and help me to get started on this exercise?
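Not part of the exercise or the thread, but perhaps a useful sanity check: every element r = a + bI + cJ + dIJ can be rewritten as (a + bI) + (c + dI)J, i.e. in the form a' + b'J with a', b' in K[I], so specifying the map on elements of that form does determine it on all of K[I, J]. In the classical special case K = R, x = y = -1 (Hamilton's quaternions), K[I] is just C and the bar is ordinary complex conjugation, so the stated map can be tested numerically. A minimal Python sketch of that check (the special case and the helper names are mine, not Dauns'):

import numpy as np

# Special case K = R, x = y = -1 (Hamilton quaternions): K[I] = C, bar = complex conjugation.
y = -1.0

def phi(a, b):
    # Represent a + bJ (with a, b in K[I] = C) as the 2x2 complex matrix from the exercise.
    return np.array([[a, b * y],
                     [np.conj(b), np.conj(a)]])

def mult(q1, q2):
    # (a1 + b1 J)(a2 + b2 J), using J a = conj(a) J and J^2 = y.
    a1, b1 = q1
    a2, b2 = q2
    return (a1 * a2 + y * b1 * np.conj(b2), a1 * b2 + b1 * np.conj(a2))

rng = np.random.default_rng(0)
q1 = tuple(rng.normal(size=2) + 1j * rng.normal(size=2))
q2 = tuple(rng.normal(size=2) + 1j * rng.normal(size=2))

lhs = phi(*mult(q1, q2))       # image of the product
rhs = phi(*q1) @ phi(*q2)      # product of the images
print(np.allclose(lhs, rhs))   # True: the map respects multiplication in this case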
{"url":"http://mathhelpforum.com/advanced-algebra/221214-generalised-quaternion-algebra-over-k-dauns-section-1-5-no-19-a-print.html","timestamp":"2014-04-19T20:38:30Z","content_type":null,"content_length":"5897","record_id":"<urn:uuid:2572f2b6-66bd-4e16-b14f-f9cbc5102e9e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
A Higher Stacky Perspective On Chern-Simons Theory Posted by Urs Schreiber We are finalizing a contribution for a book on mathematical aspects of quantum field theory: Domenico Fiorenza, Hisham Sati, Urs Schreiber, A higher stacky perspective on Chern-Simons theory Abstract. This text is a gentle exposition of some basic constructions and results in the extended prequantum theory of Chern-Simons-type gauge field theories. We explain in some detail how the action functional of ordinary 3d Chern-Simons theory is naturally localized (“extended”, “mutli-tiered”) to a map on the universal moduli stack of principal connections, a map that itself modulates a circle-principal 3-connection on that moduli stack, and how the iterated transgressions of this extended Lagrangian unify the action functional with its prequantum bundle and with the WZW-functional. At the end we provide a brief review and outlook of the higher prequantum field theory of which this is a first example. This includes a higher geometric description of supersymmetric Chern-Simons theory, generalized geometry, higher Spin-structures, anomaly cancellation and various other aspects of quantum field theory. Comments are welcome! (See the pdf at the above link.) Posted at December 28, 2012 3:19 PM UTC Re: A Higher Stacky Perspective On Chern-Simons Theory Some typos: p. 35 “arbitrary codimenion” p. 38 “Its differntial refinement” p. 39 “see also the exposition is in [79].” p. 39 “is constrained to be by the restriction” p. 40 “for ease discussion” p. 43 “the mdouli stack” p. 43 “background gaule field” What’s motivating this comment we will attempt to dissipate the false belief that higher toposes are an esoteric discipline whose secret rites are reserved to initiates? Do you feel some prejudice is blocking its reception from both the mathematics and physics wings of mathematical physics? I guess, as ever, the solution is to resolve some problem that people who don’t know about higher toposes would like to see resolved. Posted by: David Corfield on January 2, 2013 12:05 PM | Permalink | Reply to this
{"url":"https://golem.ph.utexas.edu/category/2012/12/a_higher_stacky_perspective_on.html","timestamp":"2014-04-16T14:33:21Z","content_type":null,"content_length":"12730","record_id":"<urn:uuid:a4abf2bc-2aa1-4ed3-9e8c-b5cafbeff920>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
News 2012 27th November 2012 Martin Hairer wins Leverhulme Research Leadership Award Professor Martin Hairer has won a 5-year Leverhulme Research Leadership Award for his proposal: Singular Stochastic Partial Differential Equations. Most of the funding (approx £1M) will be for the support of postdocs. The proposed research objective is to develop a theory of modelled distributions and to explore its applications to several long-standing problems in mathematical physics, mathematical biology, and stochastic analysis. Upon successful completion, a range of idealised physical, biological and mathematical systems will be amenable to rigorous mathematical analysis for the very first time. This will provide a rock-solid foundation for a more detailed understanding of these models and their approximations, which will be used to explore their fine properties. Martin has also recently accepted an invitation to join the Scientific Committee for one of the world's leading research centres, the Mathematisches Forschungsinstitut Oberwolfach. 26th November 2012 Ian Melbourne wins an ERC Advanced Grant The European Research Council (ERC) has awarded a 5-year Advanced Grant to Professor Ian Melbourne to study Stochasticity in Spatially Extended Deterministic Systems and via Homogenization of Deterministic Fast-Slow Systems. Ergodic theory is the analysis of probabilistic or statistical aspects of deterministic systems. Roughly speaking, deterministic systems are those that evolve without any randomness. Nevertheless, the probabilistic approach is appropriate since specific trajectories are unpredictable in "chaotic" systems. At the other extreme, stochastic systems evolve in a random manner by assumption. One of the main topics of this proposal is to investigate how separation of time scales can cause a fast-slow deterministic system to converge to a stochastic differential equation (SDE). This is called homogenization; the fast variables are averaged out and the limiting SDE is generally of much lower dimension than the original system. The focus is mainly on situations where the SDE limit is driven by Brownian motion, but SDEs driven by stable Lévy processes are also of interest. Homogenization is reasonably well-understood when the underlying fast-slow system is itself stochastic. However there are very few results for deterministic fast-slow systems. The aim is to make homogenization rigorous in a very general setting, and as a byproduct to determine how the stochastic integrals in the SDE are to be interpreted. A second main topic is to explore the idea that anomalous diffusion in the form of a superdiffusive Lévy process arises naturally in odd dimensions but not in even dimensions. The context is pattern formation in spatially extended systems with Euclidean symmetry, and this dichotomy can be seen as an extension of the classical Huygens principle that sound waves propagate in odd but not even dimensions. For anisotropic systems (where there are translation symmetries only), the situation is simpler: chaotic dynamics leads to Brownian motion and weakly chaotic dynamics (of intermittent type) leads to a Lévy process. However in the isotropic case (rotations and translations), anomalous diffusion is suppressed in even dimensions in favour of Brownian motion. 
22nd November 2012 Christoph Ortner wins Philip Leverhulme Prize Philip Leverhulme Prizes are awarded to outstanding scholars who have made a substantial and recognised contribution to their particular field of study, recognised at an international level, and where the expectation is that their greatest achievement is yet to come. Christoph Ortner's work during his fellowship will include the analysis of atomistic models for crystalline defects and their numerical simulation. This is an exciting new area of research for applied mathematics and numerical analysis. Given a crystal lattice with a localised defect (dislocation, vacancy, interstitial, . . .), one would like to quantify its influence on the crystalline environment, by computing the defect geometry and the defect core energy (as efficiently as possible). Already formulating a well-posed mathematical description for defects is challenging for realistic atomistic models. Furthermore, these challenges lead into new questions of regularity and approximation theory for discrete deformation fields. List of 2012 Philip Leverhulme Prize winners (PDF 21st November 2012 Peter Topping wins EPSRC award for Singularities of Geometric PDEs Professor Peter Topping has been awarded an EPSRC Programme Grant of £1.5M for the study of Singularities of Geometric Partial Differential Equations (PDEs) for the period 2013–2018. The award will be largely used to fund post-doctoral positions and is a joint investigation with Professor M Dafermos (Cambridge) and Dr A Neves (Imperial). Partners include Igor Rodnianski (MIT), Fernando Coda Marques (IMPA), Sigurd Angenent (University of Wisconsin-Madison), and Camillo De Lellis (Universität Zürich). The project combines the expertise at Warwick, Imperial and Cambridge to tackle a suite of inter-related problems in Geometric Flows, Minimal Surfaces, Mathematical Relativity and neighbouring areas. A key goal is to understand the singularities that arise in the geometric PDE lying behind these topics. Singularities of Geometric PDEs Website EPSRC grant summary 4th October 2012 David Loeffler awarded Royal Society University Research Fellowship David Loeffler has won a prestigious 5 year Royal Society URF (University Research Fellowship) starting October 2012, for his proposal L-functions and Iwasawa Theory. “My research is in number theory, an area of pure mathematics which explores the properties of whole numbers (integers): this is one of the oldest branches of mathematics, with roots going back to the ancient Greeks. My current project focusses on the study of so-called L-functions, analytic functions whose values are conjectured to encode many deep properties of arithmetical objects (such as elliptic curves). The aim of the project is to study L-functions and their special values using a mathematical tool called an ‘Euler system’. Euler systems are very important objects but very hard to construct – only a handful of examples are known – and constructing a new Euler system will have many important applications in number theory and beyond.” Royal Society announcement Warwick press release 21st September 2012 The Beautiful Zeeman Building According to the Daily Telegraph, the Zeeman Building is one of the reasons that the University of Warwick qualifies for its place in the list of Britain's most beautiful universities. See frame 11 of their slideshow. 
2nd August 2012 Samuel Brand and Mikolaj Sierzega win EPSRC Doctoral Prizes Two Warwick PhD students, Samuel Brand (Complexity Centre) and Mikolaj Sierzega (Mathematics) have won EPSRC Doctoral Prize Fellowships for 2012–13. Samuel Brand, whose thesis is titled “Spatial and Stochastic Epidemics: Theory, Simulation and Control” will work on “New Mathematical Methods for Optimal Control of Epidemics” with Michael Mikolaj Sierzega, whose thesis is titled “Topics in the theory of semilinear heat equations” will work on “Classical solutions to semilinear parabolic equations” with Jose Rodrigo. 2nd August 2012 Dr Alex Bartel awarded three year Research Fellowship Royal Commission for the Exhibition of 1851 to study “Cohen–Lenstra heuristics for Galois modules and rational representations of finite groups”. The project will consist of two parts. Firstly, to determine which rational representations of finite groups are virtual permutation representations, thereby settling a 60 year old representation theoretic problem. Secondly, to explain distributions of Galois modules in families by adopting the Cohen–Lenstra heuristic to this setting. Dr Bartel writes: One of the fundamental problems in Pure Mathematics is to understand and measure symmetries. Classically, the word “symmetry” was applied to geometric shapes, e.g. referring to rotations and reflections of regular polygons or solids. However, after the ground breaking contributions of Évariste Galois in the 19th century, we have learned to understand symmetries in a much wider sense, and the notion of symmetry has been put on a powerful rigorous footing by group theory, and later by representation theory. One aim of the proposed project is to answer an important and long standing question in representation theory, which, vaguely speaking, asks how to compare symmetries of finite sets with symmetries of vector spaces. Concretely, I want to determine, in joint work with Tim Dokchitser, which rational representations of a finite group are virtual permutation representations. The oldest branch of mathematics is the area called number theory, the biggest open problems today going back to the ancient Greeks. The second part of the proposed project lives at the intersection of representation theory and number theory. The aim is to study symmetry groups of certain number theoretic objects called Galois modules. In my past work, I have established links between the structure of Galois modules and other important number theoretic invariants. In this project, I plan to understand Galois modules from a statistical perspective, determining, roughly speaking, how often a given Galois module occurs in nature. To this end, I am planning to adapt the so-called Cohen–Lenstra heuristic, which has been incredibly successful at explaining distributions of finite modules, to the infinite case. I am going to apply my findings to different Galois modules, ranging from the arithmetic of number fields to elliptic curves. Those are some of the most fascinating and mysterious objects in number theory. 26th July 2012 Dr Andras Mathe awarded the 4th Banach Prize and Leverhulme Fellowship Dr Andras Mathe received the International Stefan Banach Prize on the 5th of July in Kraków. 
The official award presentation took place during a special session of 6th European Congress of The award-winning dissertation entitled “The isomorphism problem of Hausdorff measures and Hoelder restrictions of functions” was written under the supervision of Professor Miklós Laczkovich of the Institute of Mathematics at the Eövös Loránd University in Budapest, Hungary. In his thesis Mathe solved an open problem in geometric measure theory about a fundamental property of Hausdorff measures, showing that these measures of various dimensions are essentially different. Dr Andras Mathe has also won a Leverhulme Trust Early Career Fellowship at Warwick for three years for his research in “Combinatorial aspects of geometric measure theory”. The proposed research focuses on the interplay between geometric measure theory and other fields of mathematics including combinatorics and ergodic theory. It also aims to develop new techniques and constructions in geometric measure theory by investigating and transferring existing theories in combinatorics. 25th July 2012 Oleg Pikhurko awarded 5 year ERC Fellowship The European Research Council has awarded a 5-year Starting Grant to Oleg Pikhurko for his proposal “Extremal Combinatorics”. A typical problem of extremal combinatorics is to maximise or minimise a certain parameter given some combinatorial restrictions. The project will concentrate on problems of this type, with the main directions being the Turan function (maximising the size of a hypergraph without some fixed forbidden subgraphs), the Rademacher–Turan problem (minimising the density of F-subgraphs given the edge density), and Ramsey numbers (quantitative bounds on the maximum size of a monochromatic substructure that exists for every colouring). These are fundamental and general questions that go back at least as far as the 1940s, many of which remain wide open despite decades of active attempts. 18th June 2012 Robert MacKay wins Royal Society Wolfson Research Merit Award Robert MacKay has received a 5-year Royal Society Wolfson Research Merit Award to develop mathematics to understand, predict, control and design complex systems, with a particular emphasis on their statistical behaviour. Royal Society Wolfson Research Merit Awards scheme provides universities with additional support to enable them to recruit or retain respected scientists of outstanding achievement and potential to the UK. The scheme is jointly funded by the Wolfson Foundation and the Royal Society. 14th June 2012 Top 5 placings in league tables The UK press's universities league table season is upon us once again. Whilst variations in methodology can lead to some surprises, it always seems preferable to be nearer the top than the bottom. Several recently updated tables place Warwick in the top five: The Guardian table ranks departments based on factors related to the choice of degree course for incoming students such as student satisfaction, student/staff ratios and employability after a successful completion. The Times Good Universities Guide (requires a subscription to access) ranks us third. The Complete University Guide also includes performance in the periodic research assessments, of importance when choosing a place to study for an advanced degree. 19th March 2012 David Preiss awarded Ostrowski Prize The Ostrowski Foundation has announced that Professor David Preiss FRS has been awarded the prestigious Ostrowski Prize for 2011 which he shares with Ib Madsen and Kannan Soundararajan. 
The Ostrowski Prize is an award for outstanding achievements in pure mathematics and the foundations of numerical mathematics given every other year by the Ostrowski Foundation. Recipients are selected by an international jury from the universities of Basel, Jerusalem, Waterloo and the academies of Denmark and the Netherlands. Alexander Ostrowski, a longtime professor at the University of Basel, left his estate to the foundation in order to establish a prize. Previous winners include Ben Green FRS, Richard Taylor FRS and Sir Andrew Wiles FRS. 16th February 2012 Robert MacKay interviewed as President of the IMA Professor Robert MacKay FRS FInstP FIMA took up his two-year Presidency of the Institute of Mathematics and its Applications (IMA) in January 2012. The IMA is the UK's learned and professional society for mathematics and its applications. “I hope to contribute to resolving the controversial issues of impact and resource allocation for UK mathematics and to the UK's involvement in the Mathematics for Planet Earth 2013 initiative and to contribute to increasing skills in mathematics and appreciation for mathematics at school.” He was interviewed for the February issue of Mathematics Today, the membership magazine of the IMA, and the text is available as a PDF from the IMA website. 10th January 2012 David Preiss awarded ERC Advanced Fellowship The European Research Council has awarded a 5-year Advanced Fellowship to Professor David Preiss to study “Local Structure of Sets, Measures and Currents”. The objective of the research is to develop new methods to answer a number of fundamental questions generated by the recent development of modern analysis. The questions we are interested in are specifically related to the study of local structure of sets and functions in the classical Euclidean setting, in infinite dimensional Banach spaces and in the modern setting of analysis on metric spaces. The main areas of study will be: (a) Structure of null sets and representation of (singular) measures, one of the key motivations being the differentiability of Lipschitz functions in finite dimensional spaces. (b) Nonlinear geometric functional analysis, with particular attention to the differentiability of Lipschitz functions in infinite dimensional Hilbert spaces and Banach spaces with separable (c) Foundations of analysis on metric spaces, the key problems here being representation results for Lipschitz differentiability spaces and spaces satisfying the Poincaré inequality. (d) Uniqueness of tangent structure in various settings, where the ultimate goal is to contribute to the fundamental problem whether minimal surfaces (in their geometric measure theoretic model as area minimizing integral currents) have a unique behaviour close to any point. University News and Events:
{"url":"http://www2.warwick.ac.uk/fac/sci/maths/general/news/2012/","timestamp":"2014-04-18T13:59:43Z","content_type":null,"content_length":"49487","record_id":"<urn:uuid:0713b1e7-688f-44ea-b2f5-55f17ffbab4d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Fort Lauderdale SAT Math Tutor Find a Fort Lauderdale SAT Math Tutor ...As a high-school student, I qualified for and participated in the National Math Olympics in Romania and other Mathematics contests as well. I currently teach Mathematics at Broward College with 9 years teaching experience. Also, I work as a Lab Assistant providing support and tutoring college students. 22 Subjects: including SAT math, calculus, physics, statistics ...Zeros become a key focus for much of the topic, as well as learning to solve real world problems using algebraic equations as a basis for the solution. Advanced functions such as Ln and Exponential functions are also explained in the subject. The focus on differences become crucial when dealing with advanced mathematics. 23 Subjects: including SAT math, English, chemistry, physics I have been tutoring for over 20 years at the high school level (mainly private tutoring and 6 years at the University level). I am extremely passionate about my students success and will go the extra mile to ensure their learning. My greatest reward in teaching is not the salary, but the success.... 11 Subjects: including SAT math, statistics, geometry, algebra 2 ...I can assist students in writing school papers and studying for exams in the sciences especially, integrating techniques for speakers of other languages. As a career high school science teacher I teach Biology/Anatomy/Zoology on a daily basis. My undergraduate studies were in Biology with a Masters Degree in Science Education. 27 Subjects: including SAT math, reading, physics, ESL/ESOL ...I work with each child as an individual finding the teaching techniques that work best with each student, and then capitalize on them. Always working with positive feedback. I have the ability to switch from one subject to another in a scheduled format for the benefit of the student. 19 Subjects: including SAT math, English, reading, writing Nearby Cities With SAT math Tutor Cooper City, FL SAT math Tutors Dania SAT math Tutors Dania Beach, FL SAT math Tutors Davie, FL SAT math Tutors Hollywood, FL SAT math Tutors Lauderdale Lakes, FL SAT math Tutors Lauderhill, FL SAT math Tutors Lazy Lake, FL SAT math Tutors North Lauderdale, FL SAT math Tutors Oakland Park, FL SAT math Tutors Plantation, FL SAT math Tutors Pompano Beach SAT math Tutors Sunrise, FL SAT math Tutors Tamarac, FL SAT math Tutors Wilton Manors, FL SAT math Tutors
{"url":"http://www.purplemath.com/fort_lauderdale_fl_sat_math_tutors.php","timestamp":"2014-04-16T07:29:33Z","content_type":null,"content_length":"24527","record_id":"<urn:uuid:39c74b21-bd04-4b25-90bd-95e55a4c7dc3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about chemistry on Azimuth Programming with Chemical Reaction Networks 23 March, 2014 There will be a 5-day workshop on Programming with Chemical Reaction Networks: Mathematical Foundation at BIRS from Sunday, June 8 to Friday June 13, 2014 It’s being organized by • Anne Condon (University of British Columbia) • David Doty (California Institute of Technology) • Chris Thachuk (University of Oxford). BIRS is the Banff International Research Station, in the mountains west of Calgary, in Alberta, Canada. Here’s the workshop proposal on the BIRS website. It’s a pretty interesting proposal, especially if you’ve already read Luca Cardelli’s description of computing with chemical reaction networks, at the end of our series of posts on chemical reaction networks. The references include a lot of cool papers, so I’ve created links to those to help you get ahold of them. This workshop will explore three of the most important research themes concerning stochastic chemical reaction networks (CRNs). Below we motivate each theme and highlight key questions that the workshop will address. Our main objective is to bring together distinct research communities in order to consider new problems that could not be fully appreciated in isolation. It is also our aim to determine commonalities between different disciplines and bodies of research. For example, research into population protocols, vector addition systems, and Petri networks provide a rich body of theoretical results that may already address contemporary problems arising in the study of CRNs. Computational power of CRNs Before designing robust and practical systems, it is useful to know the limits to computing with a chemical soup. Some interesting theoretical results are already known for stochastic chemical reaction networks. The computational power of CRNs depend upon a number of factors, including: (i) is the computation deterministic, or probabilistic, and (ii) does the CRN have an initial context — certain species, independent of the input, that are initially present in some exact, constant count. In general, CRNs with a constant number of species (independent of the input length) are capable of Turing universal computation [17], if the input is represented by the exact (unary) count of one molecular species, some small probability of error is permitted and an initial context in the form of a single-copy leader molecule is used. Could the same result hold in the absence of an initial context? In a surprising result based on the distributed computing model of population protocols, it has been shown that if a computation must be error-free, then deterministic computation with CRNs having an initial context is limited to computing semilinear predicates [1], later extended to functions outputting natural numbers encoded by molecular counts [5]. Furthermore, any semilinear predicate or function can be computed by that class of CRNs in expected time polylogarithmic in the input length. Building on this result, it was recently shown that by incurring an expected time linear in the input length, the same result holds for “leaderless” CRNs [8] — CRNs with no initial context. Can this result be improved to sub-linear expected time? Which class of functions can be computed deterministically by a CRN without an initial context in expected time polylogarithmic in the input length? While (restricted) CRNs are Turing-universal, current results use space proportional to the computation time. 
Using a non-uniform construction, where the number of species is proportional to the input length and each initial species is present in some constant count, it is known that any S(n) space-bounded computation can be computed by a logically-reversible tagged CRN, within a reaction volume of size poly(S(n)) [18]. Tagged CRNs were introduced to model explicitly the fuel molecules in physical realizations of CRNs such as DNA strand displacement systems [6] that are necessary to supply matter and energy for implementing reactions such as X → X + Y that violate conservation of mass and/or energy. Thus, for space-bounded computation, there exist CRNs that are time-efficient or are space-efficient. Does there exist time- and space-efficient CRNs to compute any space-bounded function? Designing and verifying robust CRNs While CRNs provide a concise model of chemistry, their physical realizations are often more complicated and more granular. How can one be sure they accurately implement the intended network behaviour? Probabilistic model checking has already been employed to find and correct inconsistencies between CRNs and their DNA strand displacement system (DSD) implementations [9]. However, at present, model checking of arbitrary CRNs is only capable of verifying the correctness of very small systems. Indeed, verification of these types of systems is a difficult problem: probabilistic state reachability is undecidable [17, 20] and general state reachability is EXPSPACE-hard [4]. How can larger systems be verified? A deeper understanding of CRN behaviour may simplify the process of model checking. As a motivating example, there has been recent progress towards verifying that certain DSD implementations correctly simulate underlying CRNs [16, 7, 10]. This is an important step to ensuring correctness, prior to experiments. However, DSDs can also suffer from other errors when implementing CRNs, such as spurious hybridization or strand displacement. Can DSDs and more generally CRNs be designed to be robust to such predictable errors? Can error correcting codes and redundant circuit designs used in traditional computing be leveraged in these chemical computers? Many other problems arise when implementing CRNs. Currently, unique types of fuel molecules must be designed for every reaction type. This complicates the engineering process significantly. Can a universal type of fuel be designed to smartly implement any reaction? Energy efficient computing with CRNs Rolf Landauer showed that logically irreversible computation — computation as modeled by a standard Turing machine — dissipates an amount of energy proportional to the number of bits of information lost, such as previous state information, and therefore cannot be energy efficient [11]. However, Charles Bennett showed that, in principle, energy efficient computation is possible, by proposing a universal Turing machine to perform logically-reversible computation and identified nucleic acids (RNA/DNA) as a potential medium to realize logically-reversible computation in a physical system [2]. There have been examples of logically-reversible DNA strand displacement systems — a physical realization of CRNs — that are, in theory, capable of complex computation [12, 19]. Are these systems energy efficient in a physical sense? How can this argument be made formally to satisfy both the computer science and the physics communities? Is a physical experiment feasible, or are these results merely theoretical footnotes? [1] D. Angluin, J. Aspnes, and D. 
Eisenstat. Stably computable predicates are semilinear. In PODC, pages 292–299, 2006. [2] C. H. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17 (6):525–532, 1973. [3] L. Cardelli and A. Csikasz-Nagy. The cell cycle switch computes approximate majority. Scientific Reports, 2, 2012. [4] E. Cardoza, R. Lipton, A. R. Meyer. Exponential space complete problems for Petri nets and commutative semigroups (Preliminary Report). Proceedings of the Eighth Annual ACM Symposium on Theory of Computing, pages 507–54, 1976. [5] H. L. Chen, D. Doty, and D. Soloveichik. Deterministic function computation with chemical reaction networks. DNA Computing and Molecular Programming, pages 25–42, 2012. [6] A. Condon, A. J. Hu, J. Manuch, and C. Thachuk. Less haste, less waste: on recycling and its limits in strand displacement systems. Journal of the Royal Society: Interface Focus, 2 (4):512–521, 2012. [7] Q. Dong. A bisimulation approach to verification of molecular implementations of formal chemical reaction network. Master’s thesis. SUNY Stony Brook, 2012. [8] D. Doty and M. Hajiaghayi. Leaderless deterministic chemical reaction networks. In Proceedings of the 19th International Meeting on DNA Computing and Molecular Programming, 2013. [9] M. R. Lakin, D. Parker, L. Cardelli, M. Kwiatkowska, and A. Phillips. Design and analysis of DNA strand displacement devices using probabilistic model checking. Journal of The Royal Society Interface, 2012. [10] M. R. Lakin, D. Stefanovic and A. Phillips. Modular Verification of Two-domain DNA Strand Displacement Networks via Serializability Analysis. In Proceedings of the 19th Annual conference on DNA computing, 2013. [11] R. Landauer. Irreversibility and heat generation in the computing process. IBM Journal of research and development, 5 (3):183–191, 1961. [12] L. Qian, D. Soloveichik, and E. Winfree. Efficient Turing-universal computation with DNA polymers (extended abstract) . In Proceedings of the 16th Annual conference on DNA computing, pages 123–140, 2010. [13] L. Qian and E. Winfree. Scaling up digital circuit computation with DNA strand displacement cascades. Science, 332 (6034):1196–1201, 2011. [14] L. Qian, E. Winfree, and J. Bruck. Neural network computation with DNA strand displacement cascades. Nature, 475 (7356):368–372, 2011. [15] G. Seelig, D. Soloveichik, D.Y. Zhang, and E. Winfree. Enzyme-free nucleic acid logic circuits. Science, 314 (5805):1585–1588, 2006. [16] S. W. Shin. Compiling and verifying DNA-based chemical reaction network implementations. Master’s thesis. California Insitute of Technology, 2011. [17] D. Soloveichik, M. Cook, E. Winfree, and J. Bruck. Computation with finite stochastic chemical reaction networks. Natural Computing, 7 (4):615–633, 2008. [18] C. Thachuk. Space and energy efficient molecular programming. PhD thesis, University of British Columbia, 2012. [19] C. Thachuk and A. Condon. Space and energy efficient computation with DNA strand displacement systems. In Proceedings of the 18th Annual International Conference on DNA computing and Molecular Programming, 2012. [20] G. Zavattaro and L. Cardelli. Termination Problems in Chemical Kinetics. In Proceedings of the 2008 Conference on Concurrency Theory, pages 477–491, 2008. Network Theory II 12 March, 2014 Chemists are secretly doing applied category theory! When chemists list a bunch of chemical reactions like C + O₂ → CO₂ they are secretly describing a ‘category’. That shouldn’t be surprising. 
A category is simply a collection of things called objects together with things called morphisms going from one object to another, often written f: x → y The rules of a category say: 1) we can compose a morphism f: x → y and another morphism g: y → z to get an arrow gf: x → z, 2) (hg)f = h(gf), so we don’t need to bother with parentheses when composing arrows, 3) every object x has an identity morphism 1ₓ: x → x that obeys 1ₓ f = f and f 1ₓ = f. Whenever we have a bunch of things (objects) and processes (arrows) that take one thing to another, we’re likely to have a category. In chemistry, the objects are bunches of molecules and the arrows are chemical reactions. But we can ‘add’ bunches of molecules and also add reactions, so we have something more than a mere category: we have something called a symmetric monoidal category. My talk here, part of a series, is an explanation of this viewpoint and how we can use it to take ideas from elementary particle physics and apply them to chemistry! For more details try this free • John Baez and Jacob Biamonte, A Course on Quantum Techniques for Stochastic Mechanics. as well as this paper on the Anderson–Craciun–Kurtz theorem (discussed in my talk): • John Baez and Brendan Fong, Quantum techniques for studying equilibrium in reaction networks. You can also see the slides of this talk. Click on any picture in the slides, or any text in blue, and get more information! Lyapunov Functions for Complex-Balanced Systems 7 January, 2014 guest post by Manoj Gopalkrishnan A few weeks back, I promised to tell you more about a long-standing open problem in reaction networks, the ‘global attractor conjecture’. I am not going to quite get there today, but we shall take one step in that direction. Today’s plan is to help you make friends with a very useful function we will call the ‘free energy’ which comes up all the time in the study of chemical reaction networks. We will see that for complex-balanced systems, the free energy function decreases along trajectories of the rate equation. I’m going to explain this statement, and give you most of the proof! The point of doing all this work is that we will then be able to invoke Lyapunov’s theorem which implies stability of the dynamics. In Greek mythology, Sisyphus was cursed to roll a boulder up a hill only to have it roll down again, so that he had to keep repeating the task for all eternity. When I think of an unstable equilibrium, I imagine a boulder delicately balanced on top of a hill, which will fall off if given the slightest push: or, more abstractly: On the other hand, I picture a stable equilibrium as a pebble at the very bottom of a hill. Whichever way a perturbation takes it is up, so it will roll down again to the bottom: Lyapunov’s theorem guarantees stability provided we can exhibit a nice enough function $V$ that decreases along trajectories. ‘Nice enough’ means that, viewing $V$ as a height function for the hill, the equilibrium configuration should be at the bottom, and every direction from there should be up. If Sisyphus had dug a pit at the top of the hill for the boulder to rest in, Lyapunov’s theorem would have applied, and he could have gone home to rest. The moral of the story is that it pays to learn dynamical systems theory! Because of the connection to Lyapunov’s theorem, such functions that decrease along trajectories are also called Lyapunov functions. A similar situation is seen in Boltzmann’s H-theorem, and hence such functions are sometimes called H-functions by physicists. 
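The post illustrates the Lyapunov idea with hills and boulders; here is a tiny numerical stand-in (my own toy example, not from the post). For the one-dimensional ODE dx/dt = -x^3 the origin is a stable equilibrium, and V(x) = x^2 is a Lyapunov function, since dV/dt = 2x(-x^3) = -2x^4 <= 0; a crude Euler integration confirms that V never increases along a trajectory.

# Toy check of the Lyapunov idea for dx/dt = -x**3 with V(x) = x**2.
def lyapunov_values(x0=1.5, dt=1e-3, steps=5000):
    x, values = x0, []
    for _ in range(steps):
        values.append(x * x)     # V(x(t))
        x += dt * (-x ** 3)      # explicit Euler step
    return values

vals = lyapunov_values()
assert all(b <= a for a, b in zip(vals, vals[1:]))   # V is non-increasing along the trajectory
print(vals[0], vals[-1])                             # 2.25 shrinking towards 0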
Another reason for me to talk about these ideas now is that I have posted a new article on the arXiv: • Manoj Gopalkrishnan, On the Lyapunov function for complex-balanced mass-action systems. The free energy function in chemical reaction networks goes back at least to 1972, to this paper: • Friedrich Horn and Roy Jackson, General mass action kinetics, Arch. Rational Mech. Analysis 49 (1972), 81–116. Many of us credit Horn and Jackson’s paper with starting the mathematical study of reaction networks. My paper is an exposition of the main result of Horn and Jackson, with a shorter and simpler proof. The gain comes because Horn and Jackson proved all their results from scratch, whereas I’m using some easy results from graph theory, and the log-sum inequality. We shall be talking about reaction networks. Remember the idea from the network theory series. We have a set $S$ whose elements are called species, for example $S = \{ \mathrm{H}_2\mathrm{O}, \mathrm{H}^+, \mathrm{OH}^- \}$ A complex is a vector of natural numbers saying how many items of each species we have. For example, we could have a complex $(2,3,1).$ But chemists would usually write this as $2 \mathrm{H}_2\mathrm{O} + 3 \mathrm{H}^+ + \mathrm{OH}^-$ A reaction network is a set $S$ of species and a set $T$ of transitions or reactions, where each transition $\tau \in T$ goes from some complex $m(\tau)$ to some complex $n(\tau).$ For example, we could have a transition $\tau$ with $m(\tau) = \mathrm{H}_2\mathrm{O}$ $n(\tau) = \mathrm{H}^+ + \mathrm{OH}^-$ In this situation chemists usually write $\mathrm{H}_2\mathrm{O} \to \mathrm{H}^+ + \mathrm{OH}^-$ but we want names like $\tau$ for our transitions, so we might write $\tau : \mathrm{H}_2\mathrm{O} \to \mathrm{H}^+ + \mathrm{OH}^-$ $\mathrm{H}_2\mathrm{O} \stackrel{\tau}{\longrightarrow} \mathrm{H}^+ + \mathrm{OH}^-$ As John explained in Part 3 of the network theory series, chemists like to work with a vector of nonnegative real numbers $x(t)$ saying the concentration of each species at time $t.$ If we know a rate constant $r(\tau) > 0$ for each transition $\tau,$ we can write down an equation saying how these concentrations change with time: $\displaystyle{ \frac{d x}{d t} = \sum_{\tau \in T} r(\tau) (n(\tau) - m(\tau)) x^{m(\tau)} }$ This is called the rate equation. It’s really a system of ODEs describing how the concentration of each species change with time. Here an expression like $x^m$ is shorthand for the monomial ${x_1}^ {m_1} \cdots {x_k}^{m_k}.$ John and Brendan talked about complex balance in Part 9. I’m going to recall this definition, from a slightly different point of view that will be helpful for the result we are trying to prove. We can draw a reaction network as a graph! The vertices of this graph are all the complexes $m(\tau), n(\tau)$ where $\tau \in T.$ The edges are all the transitions $\tau\in T.$ We think of each edge $\tau$ as directed, going from $m(\tau)$ to $n(\tau).$ We will call the map that sends each transition $\tau$ to the positive real number $r(\tau) x^{m(\tau)}$ the flow $f_x(\tau)$ on this graph. The rate equation can be rewritten very simply in terms of this flow as: $\displaystyle{ \frac{d x}{d t} = \sum_{\tau \in T}(n(\tau) - m(\tau)) \, f_x(\tau) }$ where the right-hand side is now a linear expression in the flow $f_x.$ Flows of water, or electric current, obey a version of Kirchhoff’s current law. Such flows are called conservative flows. The following two lemmas from graph theory are immediate for conservative Lemma 1. 
If f is a conservative flow then the net flow across every cut is zero. A cut is a way of chopping the graph in two, like this: It’s easy to prove Lemma 1 by induction, moving one vertex across the cut at a time. Lemma 2. If a conservative flow exists then every edge $\tau\in T$ is part of a directed cycle. Why is Lemma 2 true? Suppose there exists an edge $\tau : m \to n$ that is not part of any directed cycle. We will exhibit a cut with non-zero net flow. By Lemma 1, this will imply that the flow is not conservative. One side of the cut will consist of all vertices from which $m$ is reachable by a directed path in the reaction network. The other side of the cut contains at least $n,$ since $m$ is not reachable from $n,$ by the assumption that $\tau$ is not part of a directed cycle. There is flow going from left to right of the cut, across the transition $\tau.$ Since there can be no flow coming back, this cut has nonzero net flow, and we’re done. ▮ Now, back to the rate equation! We can ask if the flow $f_x$ is conservative. That is, we can ask if, for every complex $n$: $\displaystyle{ \sum_{\tau : m \to n} f_x(m,n) = \sum_{\tau : n \to p} f_x(n,p). }$ In words, we are asking if the sum of the flow through all transitions coming in to $n$ equals the sum of the flow through all transitions going out of $n.$ If this condition is satisfied at a vector of concentrations $x = \alpha,$ so that the flow $f_\alpha$ is conservative, then we call $\alpha$ a point of complex balance. If in addition, every component of $\alpha$ is strictly positive, then we say that the system is complex balanced. Clearly if $\alpha$ is a point of complex balance, it’s an equilibrium solution of the rate equation. In other words, $x(t) = \alpha$ is a solution of the rate equation, where $x(t)$ never changes. I’m using ‘equilibrium’ the way mathematicians do. But I should warn you that chemists use ‘equilibrium’ to mean something more than merely a solution that doesn’t change with time. They often also mean it’s a point of complex balance, or even more. People actually get into arguments about this at conferences. Complex balance implies more than mere equilibrium. For starters, if a reaction network is such that every edge belongs to a directed cycle, then one says that the reaction network is weakly reversible. So Lemmas 1 and 2 establish that complex-balanced systems must be weakly reversible! From here on, we fix a complex-balanced system, with $\alpha$ a strictly positive point of complex balance. Definition. The free energy function is the function $g_\alpha(x) = \sum_{s\in S} x_s \log x_s - x_s - x_s \log \alpha_s$ where the sum is over all species in $S.$ The whole point of defining the function this way is because it is the unique function, up to an additive constant, whose partial derivative with respect to $x_s$ is $\log x_s/\alpha_s.$ This is important enough that we write it as a lemma. To state it in a pithy way, it is helpful to introduce vector notation for division and logarithms. If $x$ and $y$ are two vectors, we will understand $x /y$ to mean the vector $z$ such that $z_s = x_s/ y_s$ coordinate-wise. Similarly $\log x$ is defined in a coordinate-wise sense as the vector with coordinates $(\log x)_s = \log x_s.$ Lemma 3. The gradient $abla g_\alpha(x)$ of $g_\alpha(x)$ equals $\log(x/\alpha).$ We’re ready to state our main theorem! Theorem. Fix a trajectory $x(t)$ of the rate equation. 
Then $g_\alpha(x(t))$ is a decreasing function of time $t.$ Further, it is strictly decreasing unless $x(t)$ is an equilibrium solution of the rate equation. I find precise mathematical statements reassuring. You can often make up your mind about the truth value from a few examples. Very often, though not always, a few well-chosen examples are all you need to get the general idea for the proof. Such is the case for the above theorem. There are three key examples: the two-cycle, the three-cycle, and the figure-eight. The two-cycle. The two-cycle is this reaction network: It has two complexes $m$ and $n$ and two transitions $\tau_1 = m\to n$ and $\tau_2 = n\to m,$ with rates $r_1 = r(\tau_1)$ and $r_2 = r(\tau_2)$ respectively. Fix a solution $x(t)$ of the rate equation. Then the flow from $m$ to $n$ equals $r_1 x^m$ and the backward flow equals $r_2 x^n.$ The condition for $f_\alpha$ to be a conservative flow requires that $f_\alpha = r_1 \alpha^m = r_2 \alpha^n.$ This is one binomial equation in at least one variable, and clearly has a solution in the positive reals. We have just shown that every two-cycle is complex balanced. The derivative $d g_\alpha(x(t))/d t$ can now be computed by the chain rule, using Lemma 3. It works out to $f_\alpha$ times $\displaystyle{ \left((x/\alpha)^m - (x/\alpha)^n\right) \, \log\frac{(x/\alpha)^n}{(x/\alpha)^m} }$ This is never positive, and it's zero if and only if $(x/\alpha)^m = (x/\alpha)^n$ Why is this? Simply because the logarithm of something greater than 1 is positive, while the log of something less than 1 is negative, so that the sign of $(x/\alpha)^m - (x/\alpha)^n$ is always opposite the sign of $\log \frac{(x/\alpha)^n}{(x/\alpha)^m}.$ We have verified our theorem for this example. (Note that $(x/\alpha)^m = (x/\alpha)^n$ occurs when $x = \alpha,$ but also at other points: in this example, there is a whole hypersurface consisting of points of complex balance.) In fact, this simple calculation achieves much more. Definition. A reaction network is reversible if for every transition $\tau : m \to n$ there is a transition $\tau' : n \to m$ going back, called the reverse of $\tau.$ Suppose we have a reversible reaction network and a vector of concentrations $\alpha$ such that the flow along each edge equals that along the edge going back: $f_\alpha(\tau) = f_\alpha(\tau')$ whenever $\tau'$ is the reverse of $\tau.$ Then we say the reaction network is detailed balanced, and $\alpha$ is a point of detailed balance. For a detailed-balanced system, the time derivative of $g_\alpha$ is a sum over the contributions of pairs consisting of an edge and its reverse. Hence, the two-cycle calculation shows that the theorem holds for all detailed balanced systems! This linearity trick is going to prove very valuable. It will allow us to treat the general case of complex balanced systems one cycle at a time. The proof for a single cycle is essentially contained in the example of a three-cycle, which we treat next: The three-cycle.
The three-cycle is this reaction network: We assume that the system is complex balanced, so that $f_\alpha(m_1\to m_2) = f_\alpha(m_2\to m_3) = f_\alpha(m_3\to m_1)$ Let us call this nonnegative number $f_\alpha.$ A small calculation employing the chain rule shows that $d g_\alpha(x(t))/d t$ equals $f_\alpha$ times $\displaystyle{ (x/\alpha)^{m_1}\, \log\frac{(x/\alpha)^{m_2}}{(x/\alpha)^{m_1}} \; + }$ $\displaystyle{ (x/\alpha)^{m_2} \, \log\frac{(x/\alpha)^{m_3}}{(x/\alpha)^{m_2}} \; + }$ $\displaystyle{ (x/\alpha)^{m_3}\, \log\frac{(x/\alpha)^{m_1}}{(x/\alpha)^{m_3}} }$ We need to think about the sign of this quantity: Lemma 4. Let $a,b,c$ be positive numbers. Then $a \log(b/a) + b\log(c/b) + c\log(a/c)$ is less than or equal to zero, with equality precisely when $a=b=c.$ The proof is a direct application of the log sum inequality. In fact, this holds not just for three numbers, but for any finite list of numbers. Indeed, that is precisely how one obtains the proof for cycles of arbitrary length. Even the two-cycle proof is a special case! If you are wondering how the log sum inequality is proved, it is an application of Jensen’s inequality, that workhorse of convex analysis. The three-cycle calculation extends to a proof for the theorem so long as there is no directed edge that is shared between two directed cycles. When there are such edges, we need to argue that the flows $f_\alpha$ and $f_x$ can be split between the cycles sharing that edge in a consistent manner, so that the cycles can be analyzed independently. We will need the following simple lemma about conservative flows from graph theory. We will apply this lemma to the flow $f_\alpha.$ Lemma 5. Let $f$ be a conservative flow on a graph $G.$ Then there exist directed cycles $C_1, C_2,\dots, C_k$ in $G,$ and nonnegative real ‘flows’ $f_1,f_2,\dots,f_k \in [0,\infty)$ such that for each directed edge $e$ in $G,$ the flow $f(e)$ equals the sum of $f_i$ over $i$ such that the cycle $C_i$ contains the edge $e.$ Intuitively, this lemma says that conservative flows come from constant flows on the directed cycles of the graph. How does one show this lemma? I’m sure there are several proofs, and I hope some of you can share some of the really neat ones with me. The one I employed was algorithmic. The idea is to pick a cycle, any cycle, and subtract the maximum constant flow that this cycle allows, and repeat. This is most easily understood by looking at the example of the figure-eight: The figure-eight. This reaction network consists of two three-cycles sharing an edge: Here’s the proof of Lemma 5. Let $f$ be a conservative flow on this graph. We want to exhibit cycles and flows on this graph according to Lemma 5. We arbitrarily pick any cycle in the graph. For example, in the figure-eight, suppose we pick the cycle $u_0\to u_1\to u_2\to u_0.$ We pick an edge in this cycle on which the flow is minimum. In this case, $f(u_0\to u_1) = f(u_2\to u_0)$ is the minimum. We define a remainder flow by subtracting from $f$ this constant flow which was restricted to one cycle. So the remainder flow is the same as $f$ on edges that don’t belong to the picked cycle. For edges that belong to the cycle, the remainder flow is $f$ minus the minimum of $f$ on this cycle. We observe that this remainder flow is again a conservative flow, now on a graph with strictly fewer edges. Continuing in this way, since the lemma is trivially true for the empty graph, we are done by infinite descent.
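If you like to see such arguments in executable form, here is a minimal Python sketch of the greedy procedure behind the cycle-splitting lemma above. It is my own illustration, not code from the post or the paper; the function names and the encoding of a flow as a dictionary of edges are just convenient choices.

```python
# A minimal sketch (not from the original post) of the greedy cycle-splitting
# procedure: repeatedly find a directed cycle among edges that still carry flow,
# subtract the minimum flow on that cycle, and record the cycle and the amount.
# A conservative flow is a dict {(u, v): value} on the edges of a directed graph.

def find_cycle(flow, tol=1e-12):
    """Return a list of edges forming a directed cycle among edges with flow > tol."""
    succ = {}
    for (u, v), f in flow.items():
        if f > tol:
            succ.setdefault(u, []).append(v)
    for start in succ:
        path, seen = [start], {start}
        node = start
        while node in succ:
            nxt = succ[node][0]
            path.append(nxt)
            if nxt in seen:
                i = path.index(nxt)
                cycle_nodes = path[i:]          # closes on itself
                return list(zip(cycle_nodes, cycle_nodes[1:]))
            seen.add(nxt)
            node = nxt
    return None

def split_into_cycles(flow):
    """Decompose a conservative flow into (cycle, constant_flow) pairs."""
    flow = dict(flow)
    pieces = []
    while True:
        cycle = find_cycle(flow)
        if cycle is None:
            break
        c = min(flow[e] for e in cycle)
        for e in cycle:
            flow[e] -= c
        pieces.append((cycle, c))
    return pieces

# The figure-eight example: two triangles sharing the edge u1 -> u2.
figure_eight = {('u0', 'u1'): 1.0, ('u1', 'u2'): 3.0, ('u2', 'u0'): 1.0,
                ('u3', 'u1'): 2.0, ('u2', 'u3'): 2.0}
print(split_into_cycles(figure_eight))
```

Running it on this sample flow prints one triangle carrying constant flow 1 and the other carrying constant flow 2, which is exactly the kind of splitting used below.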
Now that we know how to split the flow $f_\alpha$ across cycles, we can figure out how to split the rates across the different cycles. This will tell us how to split the flow $f_x$ across cycles. Again, this is best illustrated by an example. The figure-eight. Again, this reaction network looks like this: Suppose, as in Lemma 5, we obtain the cycles $C_1 = u_0\to u_1\to u_2\to u_0$ with constant flow $f_\alpha^1$ $C_2 = u_3\to u_1\to u_2\to u_3$ with constant flow $f_\alpha^2$ such that $f_\alpha^1 + f_\alpha^2 = f_\alpha(u_1\to u_2)$ Here’s the picture: Then we obtain rates $r^1(u_1\to u_2)$ and $r^2(u_1\to u_2)$ by solving the equations $f^1_\alpha = r^1(u_1\to u_2) \alpha^{u_1}$ $f^2_\alpha = r^2(u_1\to u_2) \alpha^{u_1}$ Using these rates, we can define non-constant flows $f^1_x$ on $C_1$ and $f^2_x$ on $C_2$ by the usual formulas: $f^1_x(u_1\to u_2) = r^1(u_1\to u_2) x^{u_1}$ and similarly for $f^2_x.$ In particular, this gives us $f^1_x(u_1\to u_2)/f^1_\alpha = (x/\alpha)^{u_1}$ and similarly for $f^2_x.$ Using this, we obtain the proof of the Theorem! The time derivative of $g_\alpha$ along a trajectory has a contribution from each cycle $C$ as in Lemma 5, where each cycle is treated as a separate system with the new rates $r^C,$ and the new flows $f^C_\alpha$ and $f^C_x.$ So, we’ve reduced the problem to the case of a cycle, which we’ve already done. Let’s review what happened. The time derivative of the function $g_\alpha$ has a very nice form, which is linear in the flow $f_x.$ The reaction network can be broken up into cycles. The conservative flow $f_\alpha$ for a complex balanced system can be split into conservative flows on cycles by Lemma 5. This informs us how to split the non-conservative flow $f_x$ across cycles. By linearity of the time derivative, we can separately treat the case for every cycle. For each cycle, we get an expression to which the log sum inequality applies, giving us the final result that $g_\alpha$ decreases along trajectories of the rate equation. Now that we have a Lyapunov function, we will put it to use to obtain some nice theorems about the dynamics, and finally state the global attractor conjecture. All that and more, in the next blog post! 29 November, 2013 Over a year ago, I wrote here about ice. It has 16 known forms with different crystal geometries. The most common form on Earth, hexagonal ice I, is a surprisingly subtle blend of order and randomness. Liquid water is even more complicated. It’s mainly a bunch of molecules like this jostling around: The two hydrogens are tightly attached to the oxygen. But accidents do happen. On average, for every 555 million molecules of water, one is split into a negatively charged OH⁻ and a positively charged H⁺. And this actually matters a lot, in chemistry. It’s the reason we say water has pH 7. Why? By definition, pH 7 means that for every liter of water, there’s 10^-7 moles of H⁺. That’s where the 7 comes from. But there’s 55.5 moles of water in every liter, at least when the water is cold so its density is almost 1 kilogram/liter. So, do the math and you see that for every 555 million molecules of water, there’s only one H⁺. Acids have a lot more. For example, lemon juice has one H⁺ per 8800 water molecules. But let’s think about this H⁺ thing. What is it, really? It’s a hydrogen atom missing its electron: a proton, all by itself! But what happens when you’ve got a lone proton in water? It doesn’t just sit there. It quickly attaches to a water molecule, forming H₃O⁺.
This is called a hydronium ion, and it looks like this: But hydronium is still positively charged, so it will attract electrons in other water molecules! Different things can happen. Here you see a hydronium ion surrounded by three water molecules in a symmetrical way: This is called an Eigen cation, with chemical formula H₉O₄⁺. I believe it’s named after the Nobel-prize-winning chemist Manfred Eigen—not his grandfather Günther, the mathematician of ‘eigenvector’. And here you see a hydronium ion at lower right, attracted to a water molecule at left: This is a Zundel cation, with chemical formula H₅O₂⁺. It’s named after Georg Zundel, the German expert on hydrogen bonds. The H⁺ in the middle looks more tightly connected to the water at right than the water at left. But it should be completely symmetrical—at least, that’s the theory of how a Zundel cation works. But the Eigen and Zundel cations are still positively charged, so they attract more water molecules, making bigger and bigger structures. Nowadays chemists are studying these using computer simulations, and comparing the results to experiments. In 2010, Evgenii Stoyanov, Irina Stoyanova and Christopher Reed used infrared spectroscopy to argue that a lone proton often attaches itself to 6 water molecules, forming H⁺(H₂O)₆, or H₁₃O₆⁺, like this: As you can see, this forms when each hydrogen in a Zundel cation attracts an extra water molecule. Even this larger structure attracts more water molecules: But the positive charge, they claim, stays roughly within the dotted line. Wait. Didn’t I say the lone proton was right in the middle? Isn’t that what the picture shows—the H in the middle? Well, the picture is a bit misleading! First, everything is wiggling around a lot. And second, quantum mechanics says we don’t know the position of that proton precisely! Instead, it’s a ‘probability cloud’ smeared over a large region, ending roughly at the dashed line. (You can’t say precisely where a cloud ends.) It seems that something about these subtleties makes the distance between the two oxygen nuclei at the center surprisingly large. In an ordinary water molecule, the distance between the hydrogen and oxygen is a bit less than 100 pm—that’s 100 picometers, or 100 × 10^-12 meters, or one angstrom (Å) in chemists’ units: In ordinary ice, there are also weaker bonds called hydrogen bonds that attach neighboring water molecules. These bonds are a bit longer, as shown in this picture by Stephen Lower, who also drew that great picture of ice: But the distance between the two central oxygens in H₁₃O₆⁺ is about 2.57 angstroms, or twice 1.28: Stoyanov, Stoyanova and Reed put the exclamation mark here. I guess the big distance came as a big surprise! I should emphasize that this work is new and still controversial. There’s some evidence, which I don’t understand, that 20 is a ‘magic number’: a lone proton is happiest when accompanied by 20 water molecules, forming H⁺(H₂O)₂₀. One possibility is that the proton is surrounded by a symmetrical cage of 20 water molecules shaped like a dodecahedron! But in 2005, a team of scientists did computer simulations and arrived at a different geometry, like this: This is not symmetrical: there’s a Zundel cation highlighted at right, together with 20 water molecules. Of course, in reality a number of different structures may predominate, in a rapidly changing and random way. Computer simulations should eventually let us figure this out. We’ve known the relevant laws of nature for over 80 years.
But running them on a computer is not easy! Kieron Taylor did his PhD work on simulating water, and he wrote: It’s a most vexatious substance to simulate in useful time scales. Including the proton exchange or even flexible multipoles requires immense computation. It would be very interesting if the computational complexity of water were higher, in some precise sense, than many other liquids. It’s weird in other ways. It takes a lot of energy to heat water, it expands when it freezes, and its molecules have a large ‘dipole moment’—meaning the electric charge is distributed in a very lopsided way, thanks to the ‘Mickey Mouse’ way the two H’s are attached to the O. I’ve been talking about the fate of the H⁺ when a water molecule splits into H⁺ and OH⁻. I should add that in heavy water, H⁺ could be something other than a lone proton. It could be a deuteron: a proton and a neutron stuck together. Or it could be a triton: a proton and two neutrons. For this reason, while most chemists call H⁺ simply a ‘proton’, the pedantically precise ones call it a hydron , which covers all the possibilities! But what about the OH⁻? This is called a hydroxide ion: But this, too, attracts other water molecules. First it grabs one and forms a bihydroxide ion, which is a chain like this: with chemical formula H₃O₂⁻. And then the bihydroxide ion attracts other water molecules, perhaps like this: Again, this is a guess—and certainly a simplified picture of a dynamic, quantum-mechanical system. References and digressions For more, see: • Evgenii S. Stoyanov, Irina V. Stoyanova, Christopher A. Reed, The unique nature of H⁺ in water, Chemical Science 2 (2011), 462–472. Abstract: The H⁺(aq) ion in ionized strong aqueous acids is an unexpectedly unique H₁₃O₆⁺ entity, unlike those in gas phase H⁺(H₂O)n clusters or typical crystalline acid hydrates. IR spectroscopy indicates that the core structure has neither H₉O₄⁺ Eigen-like nor typical H₅O₂⁺ Zundel-like character. Rather, extensive delocalization of the positive charge leads to a H₁₃O₆⁺ ion having an unexpectedly long central OO separation of 2.57 Å and four conjugated OO separations of 2.7 Å. These dimensions are in conflict with the shorter OO separations found in structures calculated by theory. Ultrafast dynamic properties of the five H atoms involved in these H-bonds lead to a substantial collapse of normal IR vibrations and the appearance of a continuous broad absorption (cba) across the entire IR spectrum. This cba is distinguishable from the broad IR bands associated with typical low-barrier H-bonds. The solvation shell outside of the H₁₃O₆⁺ ion defines the boundary of positive charge delocalization. At low acid concentrations, the H₁₃O₆⁺ ion is a constituent part of an ion pair that has contact with the first hydration shell of the conjugate base anion. At higher concentrations, or with weaker acids, one or two H₂O molecules of H₁₃O₆⁺ cation are shared with the hydration shell of the anion. Even the strongest acids show evidence of ion pairing. Unfortunately this paper is not free, and my university doesn’t even subscribe to this journal. But I just discovered that Evgenii Stoyanov and Irina Stoyanova are here at U. C. Riverside! So, I may ask them some questions. This picture: came from here: • Srinivasan S. Iyengar, Matt K. Petersen, Tyler J. F. Day, Christian J. Burnham, Virginia E. Teige and Gregory A. Voth, The properties of ion-water clusters. I. The protonated 21-water cluster, J. Chem. Phys. 123 (2005), 084309. Abstract. 
The ab initio atom-centered density-matrix propagation approach and the multistate empirical valence bond method have been employed to study the structure, dynamics, and rovibrational spectrum of a hydrated proton in the “magic” 21 water cluster. In addition to the conclusion that the hydrated proton tends to reside on the surface of the cluster, with the lone pair on the protonated oxygen pointing “outwards,” it is also found that dynamical effects play an important role in determining the vibrational properties of such clusters. This result is used to analyze and complement recent experimental and theoretical studies. This paper is free online! We live in a semi-barbaric age where science is probing the finest details of matter, space and time—but many of the discoveries, paid for by taxes levied on the hard-working poor, are snatched, hidden, and sold by profiteers. Luckily, a revolution is afoot… There are other things in ‘pure water’ beside what I’ve mentioned. For example, there are some lone electrons! Since these are light, quantum mechanics says their probability cloud spreads out to be quite big. This picture by Michael Tauber shows what you should imagine: He says: Schematic representation of molecules in the first and second coordination shells around the solvated electron. First shell molecules are shown hydrogen bonded to the electron. Hydrogen bonds between molecules of 1st and 2nd shells are disrupted. Autocatalysis in Reaction Networks 11 October, 2013 guest post by Manoj Gopalkrishnan Since this is my first time writing a blog post here, let me start with a word of introduction. I am a computer scientist at the Tata Institute of Fundamental Research, broadly interested in connections between Biology and Computer Science, with a particular interest in reaction networks. I first started thinking about them during my Ph.D. at the Laboratory for Molecular Science. My fascination with them has been predominantly mathematical. As a graduate student, I encountered an area with rich connections between combinatorics and dynamics, and surprisingly easy-to-state and compelling unsolved conjectures, and got hooked. There is a story about Richard Feynman that he used to take bets with mathematicians. If any mathematician could make Feynman understand a mathematical statement, then Feynman would guess whether or not the statement was true. Of course, Feynman was in a habit of winning these bets, which allowed him to make the boast that mathematics, especially in its obsession for proof, was essentially irrelevant, since a relative novice like himself could after a moment’s thought guess at the truth of these mathematical statements. I have always felt Feynman’s claim to be unjust, but have often wondered what mathematical statement I would put to him so that his chances of winning were no better than random. Today I want to tell you of a result about reaction networks that I have recently discovered with Abhishek Deshpande. The statement seems like a fine candidate to throw at Feynman because until we proved it, I would not have bet either way about its truth. Even after we obtained a short and elementary proof, I do not completely ‘see’ why it must be true. I am hoping some of you will be able to demystify it for me. So, I’m just going to introduce enough terms to be able to make the statement of our result, and let you think about how to prove it. John and his colleagues have been talking about reaction networks as Petri nets in the network theory series on this blog. 
As discussed in part 2 of that series, a Petri net is a diagram like this: Following John’s terminology, I will call the aqua squares ‘transitions’ and the yellow circles ‘species’. If we have some number #rabbit of rabbits and some number #wolf of wolves, we draw #rabbit many black dots called ‘tokens’ inside the yellow circle for rabbit, and #wolf tokens inside the yellow circle for wolf, like this: Here #rabbit = 4 and #wolf = 3. The predation transition consumes one ‘rabbit’ token and one ‘wolf’ token, and produces two ‘wolf’ tokens, taking us here: John explained in parts 2 and 3 how one can put rates on different transitions. For today I am only going to be concerned with ‘reachability:’ what token states are reachable from what other token states. John talked about this idea in part 25. By a complex I will mean a population vector: a snapshot of the number of tokens in each species. In the example above, (#rabbit, #wolf) is a complex. If $y, y'$ are two complexes, then we write $y \to y'$ if we can get from $y$ to $y'$ by a single transition in our Petri net. For example, we just saw that $(4,3)\to (3,4)$ via the predation transition. Reachability, denoted $\to^*$, is the transitive closure of the relation $\to$. So $y\to^* y'$ (read $y'$is reachable from $y$) iff there are complexes $y=y_0,y_1,y_2,\dots,y_k =y'$ such that $y_0\to y_1\to\cdots\to y_{k-1}\to y_k.$ For example, here $(5,1) \to^* (1, 5)$ by repeated predation. I am very interested in switches. After all, a computer is essentially a box of switches! You can build computers by connecting switches together. In fact, that’s how early computers like the Z3 were built. The CMOS gates at the heart of modern computers are essentially switches. By analogy, the study of switches in reaction networks may help us understand biochemical circuits. A siphon is a set of species that is ‘switch-offable’. That is, if there are no tokens in the siphon states, then they will remain absent in future. Equivalently, the only reactions that can produce tokens in the siphon states are those that require tokens from the siphon states before they can fire. Note that no matter how many rabbits there are, if there are no wolves, there will continue to be no wolves. So {wolf} is a siphon. Similarly, {rabbit} is a siphon, as is the union {rabbit, wolf}. However, when Hydrogen and Oxygen form Water, {Water} is not a siphon. For another example, consider this Petri net: The set {HCl, NaCl} is a siphon. However, there is a conservation law: whenever an HCl token is destroyed, an NaCl token is created, so that #HCl + #NaCl is invariant. If both HCl and NaCl were present to begin with, the complexes where both are absent are not reachable. In this sense, this siphon is not ‘really’ switch-offable. As a first pass at capturing this idea, we will introduce the notion of ‘critical set’. A conservation law is a linear expression involving numbers of tokens that is invariant under every transition in the Petri net. A conservation law is positive if all the coefficients are non-negative. A critical set of states is a set that does not contain the support of a positive conservation law. For example, the support of the positive conservation law #HCl + #NaCl is {HCl, NaCl}, and hence no set containing this set is critical. Thus {HCl, NaCl} is a siphon, but not critical. On the other hand, the set {NaCl} is critical but not a siphon. {HCl} is a critical siphon. And in our other example, {Wolf, Rabbit} is a critical siphon. 
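Since the siphon condition is so simple, it is easy to check by machine. Here is a minimal Python sketch of my own (not from the post or the paper): reactions are given as pairs of input and output dictionaries, and a set of species is a siphon exactly when no reaction produces a species in the set without also consuming one. The neutralization reaction is only my guess at what the second Petri net above depicts.

```python
# A minimal sketch (mine, not from the post) of the siphon test described above:
# a set Z of species is a siphon if every transition that produces a species in Z
# also consumes some species in Z. Reactions are (inputs, outputs) dicts of counts.

def is_siphon(Z, reactions):
    Z = set(Z)
    for inputs, outputs in reactions:
        produces_Z = any(s in Z and n > 0 for s, n in outputs.items())
        consumes_Z = any(s in Z and n > 0 for s, n in inputs.items())
        if produces_Z and not consumes_Z:
            return False
    return True

# Predation: rabbit + wolf -> 2 wolf.
predation = ({'rabbit': 1, 'wolf': 1}, {'wolf': 2})
print(is_siphon({'wolf'}, [predation]))            # True
print(is_siphon({'rabbit'}, [predation]))          # True (nothing produces rabbits)

# Assuming the other example is the single reaction HCl + NaOH -> NaCl + H2O:
neutralization = ({'HCl': 1, 'NaOH': 1}, {'NaCl': 1, 'H2O': 1})
print(is_siphon({'HCl', 'NaCl'}, [neutralization]))   # True
print(is_siphon({'NaCl'}, [neutralization]))          # False: produced but never consumed
```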
Of particular interest to us will be minimal critical siphons, the minimal sets among critical siphons. Consider this example: Here we have two transitions: $X \to 2Y$ $2X \to Y$ The set $\{X,Y\}$ is a critical siphon. But so is the smaller set $\{X\}.$ So, $\{X,Y\}$ is not minimal. We define a self-replicable set to be a set $A$ of species such that there exist complexes $y$ and $y'$ with $y\to^* y'$ such that for all $i \in A$ we have $y'_i > y_i$ So, there are transitions that accomplish the job of creating more tokens for all the species in $A.$ In other words: these species can ‘replicate themselves’. We define a drainable set by changing the $>$ to a $<$. So, there are transitions that accomplish the job of reducing the number of tokens for all the species in $A.$ These species can ‘drain away’. Now here comes the statement: Every minimal critical siphon is either drainable or self-replicable! We prove it in this paper: • Abhishek Deshpande and Manoj Gopalkrishnan, Autocatalysis in reaction networks. But first note that the statement becomes false if the critical siphon is not minimal. Look at this example again: The set $\{X,Y\}$ is a critical siphon. However $\{X,Y\}$ is neither self-replicable (since every reaction destroys $X$) nor drainable (since every reaction produces $Y$). But we’ve already seen that $\{X,Y\}$ is not minimal. It has a critical subsiphon, namely $\{X\}.$ This one is minimal—and it obeys our theorem, because it is drainable. Checking these statements is a good way to make sure you understand the concepts! I know I’ve introduced a lot of terminology here, and it takes a while to absorb. Anyway: our proof that every minimal critical siphon is either drainable or self-replicable makes use of a fun result about matrices. Consider a real square matrix with a sign pattern like this: $\left( \begin{array}{cccc} <0 & >0 & \cdots & > 0 \\ >0 & <0 & \cdots &> 0 \\ \vdots & \vdots & <0 &> 0 \\ >0 & >0 & \cdots & <0 \end{array} \right)$ If the matrix is full-rank then there is a positive linear combination of the rows of the matrix so that all the entries are nonzero and have the same sign. In fact, we prove something stronger in Theorem 5.9 of our paper. At first, we thought this statement about matrices should be equivalent to one of the many well-known alternative statements of Farkas’ lemma, like Gordan’s theorem. However, we could not find a way to make this work, so we ended up proving it by a different technique. Later, my colleague Jaikumar Radhakrishnan came up with a clever proof that uses Farkas’ lemma twice. However, so far we have not obtained the stronger result in Theorem 5.9 with this proof technique. My interest in the result that every minimal critical siphon is either drainable or self-replicable is not purely aesthetic (though aesthetics is a big part of it). There is a research community of folks who are thinking of reaction networks as a programming language, and synthesizing molecular systems that exhibit sophisticated dynamical behavior as per specification: • International Conference on DNA Computing and Molecular Programming. • Foundations of Nanoscience: Self-Assembled Architectures and Devices. • Molecular Programming Architectures, Abstractions, Algorithms and Applications. Networks that exhibit some kind of catalytic behavior are a recurring theme among such systems, and even more so in biochemical circuits. 
Here is an example of catalytic behavior: $A + C \to B + C$ The ‘catalyst’ $C$ helps transform $A$ to $B.$ In the absence of $C,$ the reaction is turned off. Hence, catalysts are switches in chemical circuits! From this point of view, it is hardly surprising that they are required for the synthesis of complex behaviors. In information processing, one needs amplification to make sure that a signal can propagate through a circuit without being overwhelmed by errors. Here is a chemical counterpart to such amplification: $A + C \to 2C$ Here the catalyst $C$ catalyzes its own production: it is an ‘autocatalyst’, or a self-replicating species. By analogy, autocatalysis is key for scaling synthetic molecular systems. Our work deals with these notions on a network level. We generalize the notion of catalysis in two ways. First, we allow a catalyst to be a set of species instead of a single species; second, its absence can turn off a reaction pathway instead of a single reaction. We propose the notion of self-replicable siphons as a generalization of the notion of autocatalysis. In particular, ‘weakly reversible’ networks have critical siphons precisely when they exhibit autocatalytic behavior. I was led to this work when I noticed the manifestation of this last statement in many examples. Another hope I have is that perhaps one can study the dynamics of each minimal critical siphon of a reaction network separately, and then somehow be able to answer interesting questions about the dynamics of the entire network, by stitching together what we know for each minimal critical siphon. On the synthesis side, perhaps this could lead to a programming language to synthesize a reaction network that will achieve a specified dynamics. If any of this works out, it would be really cool! I think of how abelian group theory (and more broadly, the theory of abelian categories, which includes categories of vector bundles) benefits from a fundamental theorem that lets you break a finite abelian group into parts that are easy to study—or how number theory benefits from a special case, the fundamental theorem of arithmetic. John has also pointed out that reaction networks are really presentations of symmetric monoidal categories, so perhaps this could point the way to a Fundamental Theorem for Symmetric Monoidal Categories. And then there is the Global Attractor Conjecture, a long-standing open problem concerning the long-term behavior of solutions to the rate equations. Now that is a whole story by itself, and will have to wait for another day. Coherence for Solutions of the Master Equation 10 July, 2013 guest post by Arjun Jain I am a master’s student in the physics department of the Indian Institute of Technology Roorkee. I’m originally from Delhi. For some time now, I’ve been wanting to go into Mathematical Physics. I hope to do a PhD in that. Apart from maths and physics, I am also quite passionate about art and music. Right now I am visiting John Baez at the Centre for Quantum Technologies, and we’re working on chemical reaction networks. This post can be considered an annotation to the last paragraph of John’s paper, Quantum Techniques for Reaction Networks, where he raises the question of when a solution to the master equation that starts as a coherent state will remain coherent for all times.
Remember, the ‘master equation’ describes the random evolution of collections of classical particles, and a ‘coherent state’ is one where the probability distribution of particles of each type is a Poisson distribution. If you’ve been following the network theory series on this blog, you’ll know these concepts, and you’ll know the Anderson-Craciun-Kurtz theorem gives many examples of coherent states that remain coherent. However, all these are equilibrium solutions of the master equation: they don’t change with time. Moreover they are complex balanced equilibria: the rate at which any complex is produced equals the rate at which it is consumed. There are also non-equilibrium examples where coherent states remain coherent. But they seem rather rare, and I would like to explain why. So, I will give a necessary condition for it to happen. I’ll give the proof first, and then discuss some simple examples. We will see that while the condition is necessary, it is not sufficient. First, recall the setup. If you’ve been following the network theory series, you can skip the next section. Reaction networks Definition. A reaction network consists of: • a finite set $S$ of species, • a finite set $K$ of complexes, where a complex is a finite sum of species, or in other words, an element of $\mathbb{N}^S,$ • a graph with $K$ as its set of vertices and some set $T$ of edges. You should have in mind something like this: where our set of species is $S = \{A,B,C,D,E\},$ the complexes are things like $A + E,$ and the arrows are the elements of $T,$ called transitions or reactions. So, we have functions $s, t : T \to K$ saying the source and target of each transition. Definition. A stochastic reaction network is a reaction network together with a function $r: T \to (0,\infty)$ assigning a rate constant to each reaction. From this we can write down the master equation, which describes how a stochastic state evolves in time: $\displaystyle{ \frac{d}{dt} \Psi(t) = H \Psi(t) }$ Here $\Psi(t)$ is a vector in the stochastic Fock space, which is the space of formal power series in a bunch of variables, one for each species, and $H$ is an operator on this space, called the Hamiltonian. From now on I’ll number the species with numbers from $1$ to $k,$ so $S = \{1, \dots, k\}$ Then the stochastic Fock space consists of real formal power series in variables that I’ll call $z_1, \dots, z_k.$ We can write any of these power series as $\displaystyle{\Psi = \sum_{\ell \in \mathbb{N}^k} \psi_\ell z^\ell }$ where $z^\ell = z_1^{\ell_1} \cdots z_k^{\ell_k}$ We have annihilation and creation operators on the stochastic Fock space: $\displaystyle{ a_i \Psi = \frac{\partial}{\partial z_i} \Psi }$ $\displaystyle{ a_i^\dagger \Psi = z_i \Psi }$ and the Hamiltonian is built from these as follows: $\displaystyle{ H = \sum_{\tau \in T} r(\tau) \, ({a^\dagger}^{t(\tau)} - {a^\dagger}^{s(\tau)}) \, a^{s(\tau)} }$ John explained this here (using slightly different notation), so I won’t go into much detail now, but I’ll say what all the symbols mean.
Remember that the source of a transition $\tau$ is a complex, or list of natural numbers: $s(\tau) = (s_1(\tau), \dots, s_k(\tau))$ So, the power $a^{s(\tau)}$ is really an abbreviation for a big product of annihilation operators, like this: $\displaystyle{ a^{s(\tau)} = a_1^{s_1(\tau)} \cdots a_k^{s_k(\tau)} }$ This describes the annihilation of all the inputs to the transition $\tau.$ Similarly, we define $\displaystyle{ {a^\dagger}^{s(\tau)} = {a_1^\dagger}^{s_1(\tau)} \cdots {a_k^\dagger}^{s_k(\tau)} }$ $\displaystyle{ {a^\dagger}^{t(\tau)} = {a_1^\dagger}^{t_1(\tau)} \cdots {a_k^\dagger}^{t_k(\tau)} }$ The result Here’s the result: Theorem. If a solution $\Psi(t)$ of the master equation is a coherent state for all times $t \ge 0,$ then $\Psi(0)$ must be complex balanced except for complexes of degree 0 or 1. This requires some explanation. First, saying that $\Psi(t)$ is a coherent state means that it is an eigenvector of all the annihilation operators. Concretely this means $\Psi (t) = \displaystyle{\frac{e^{c(t) \cdot z}}{e^{c_1(t) + \cdots + c_k(t)}}}$ where $c(t) = (c_1(t), \dots, c_k(t)) \in [0,\infty)^k$ and $z = (z_1, \dots, z_k)$ It will be helpful to write $\mathbf{1}= (1,1,\dots,1)$ so we can write $\Psi (t) = \displaystyle{ e^{c(t) \cdot (z - \mathbf{1})} }$ Second, we say that a complex has degree $d$ if it is a sum of exactly $d$ species. For example, in this reaction network: the complexes $A + C$ and $B + E$ have degree 2, while the rest have degree 1. We use the word ‘degree’ because each complex $\ell$ gives a monomial $z^\ell = z_1^{\ell_1} \cdots z_k^{\ell_k}$ and the degree of the complex is the degree of this monomial, namely $\ell_1 + \cdots + \ell_k$ Third and finally, we say a solution $\Psi(t)$ of the master equation is complex balanced for a specific complex $\ell$ if the total rate at which that complex is produced equals the total rate at which it’s destroyed. Now we are ready to prove the theorem: Proof. Consider the master equation $\displaystyle { \frac{d \Psi (t)}{d t} = H \Psi (t) }$ Assume that $\Psi(t)$ is a coherent state for all $t \ge 0.$ This means $\Psi (t) = \displaystyle{ e^{c(t) \cdot (z - \mathbf{1})} }$ For convenience, we write $c(t)$ simply as $c,$ and similarly for the components $c_i$. Then we have $\displaystyle{ \frac{d\Psi(t)}{dt} = (\dot{c} \cdot (z - \mathbf{1})) \, e^{c \cdot (z - \mathbf{1})} }$ On the other hand, the master equation gives $\begin{array}{ccl} \displaystyle {\frac{d\Psi(t)}{dt}} &=& \displaystyle{ \sum_{\tau \in T} r(\tau) \, ({a^\dagger}^{t(\tau)} - {a^\dagger}^{s(\tau)}) \, a^{s(\tau)} e^{c \cdot (z - \mathbf{1})} } \\ &=& \displaystyle{\sum_{\tau \in T} c^{s(\tau)} r(\tau) \, ({z}^{t(\tau)} - {z}^{s(\tau)}) e^{c \cdot (z - \mathbf{1})} } \end{array}$ so $\displaystyle{ (\dot{c} \cdot (z - \mathbf{1})) \, e^{c \cdot (z - \mathbf{1})} =\sum_{\tau \in T} c^{s(\tau)} r(\tau) \, ({z}^{t(\tau)} - {z}^{s(\tau)}) e^{c \cdot (z - \mathbf{1})} }$ As a result, we get $\displaystyle{ \dot{c}\cdot z -\dot{c}\cdot\mathbf{1} = \sum_{\tau \in T} c^{s(\tau)} r(\tau) \, ({z}^{t(\tau)} - {z}^{s(\tau)}) }.$ Comparing the coefficients of all $z^\ell,$ we obtain the following.
For $\ell = 0,$ which is the only complex of degree zero, we get $\displaystyle { \sum_{\tau\;:\; t(\tau)=0} r(\tau) c^{s(\tau)} - \sum_{\tau\;:\; s(\tau)= 0} r(\tau) c^{s(\tau)} = -\dot{c}\cdot\mathbf{1} }$ For the complexes $\ell$ of degree one, we get these equations: $\displaystyle { \sum_{\tau\;:\; t(\tau)=(1,0,0,\dots)} r(\tau) c^{s(\tau)} - \sum_{\tau \;:\;s(\tau)=(1,0,0,\dots)} r(\tau) c^{s(\tau)}= \dot{c}_1 }$ $\displaystyle { \sum_{\tau\; :\; t(\tau)=(0,1,0,\dots)} r(\tau) c^{s(\tau)} - \sum_{\tau\;:\; s(\tau)=(0,1,0,\dots)} r(\tau) c^{s(\tau)} = \dot{c}_2 }$ and so on. For all the remaining complexes $\ell$ we have $\displaystyle { \sum_{\tau\;:\; t(\tau)=\ell} r(\tau) c^{s(\tau)} = \sum_{\tau \;:\; s(\tau)=\ell} r(\tau) c^{s(\tau)} }.$ This says that the total rate at which this complex is produced equals the total rate at which it’s destroyed. So, our solution of the master equation is complex balanced for all complexes $\ell$ of degree greater than one. This is our necessary condition. █ To illustrate the theorem, I’ll consider three simple examples. The third example shows that the condition in the theorem, though necessary, is not sufficient. Note that our proof also gives a necessary and sufficient condition for a coherent state to remain coherent: namely, that all the equations we listed hold, not just initially but for all times. But this condition seems a bit complicated. Introducing amoebae into a Petri dish Suppose that there is an inexhaustible supply of amoebae, randomly floating around in a huge pond. Each time an amoeba comes into our collection area, we catch it and add it to the population of amoebae in the Petri dish. Suppose that the rate constant for this process is 3. So, the Hamiltonian is $3(a^\dagger -1).$ If we start with a coherent state, say $\displaystyle { \Psi(0)=\frac{e^{cz}}{e^c} }$ then $\displaystyle { \Psi(t) = e^{3(a^\dagger -1)t} \; \frac{e^{cz}}{e^c} = \frac{e^{(c+3t)z}}{e^{c+3t}} }$ which is coherent at all times. We can see that the condition of the theorem is satisfied, as all the complexes in the reaction network have degree 0 or 1. Amoebae reproducing and competing This example shows a Petri dish with one species, amoebae, and two transitions: fission and competition. We suppose that the rate constant for fission is 2, while that for competition is 1. The Hamiltonian is then $H= 2({a^\dagger}^2-a^\dagger)a + (a^\dagger-{a^\dagger}^2)a^2$ If we start off with the coherent state $\displaystyle{\Psi(0) = \frac{e^{2z}}{e^2}}$ we find that $\displaystyle {H\Psi(0)=\left(2(z^2-z)\cdot 2+(z-z^2)\cdot 4\right)\Psi(0)}=0$ so $\Psi(t) = \Psi(0)$ for all times, which is coherent. It should be noted that the chosen initial state $\displaystyle{ \frac{e^{2z}}{e^2}}$ was a complex balanced equilibrium solution. So, the Anderson–Craciun–Kurtz Theorem applies to this case. Amoebae reproducing, competing, and being introduced This is a combination of the previous two examples, where apart from ongoing reproduction and competition, amoebae are being introduced into the dish with a rate constant 3. As in the above examples, we might think that coherent states could remain coherent forever here too. Let’s check that. Assuming that this was true, if $\displaystyle{\Psi(t) = \frac{e^{c(t)z}}{e^{c(t)}} }$ then $c(t)$ would have to satisfy the following two equations: $\dot{c}(t) = c(t)^2 + 3 -2c(t)$ $2c(t) - c(t)^2 = 0$ Using the second equation, we get $\dot{c}(t) = 3 \Rightarrow c = 3t+ c_0$ But this is certainly not a solution of the second equation. So, here we find that initially coherent states do not remain coherent for all times. (The short numerical sketch below makes this concrete.)
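Here is the promised numerical sketch. It is my own illustration, not part of the original post: it just integrates the degree-one condition with a crude Euler step, starting from the value $c = 2$ where the degree-two condition holds, and watches the degree-two condition fail immediately.

```python
# A small numerical sketch (mine, not from the post) of the obstruction above.
# For the coherent ansatz to persist we would need both
#   dc/dt = c**2 + 3 - 2*c     (from the degree-one coefficients)
#   2*c - c**2 = 0             (from the degree-two coefficients)
# to hold at all times. Start at c = 2, where the second condition holds,
# and integrate the first with a crude Euler step: the second condition fails.

c, dt = 2.0, 1e-3
for step in range(5):
    print(f"t = {step*dt:.3f}  c = {c:.6f}  2c - c^2 = {2*c - c*c:+.6f}")
    c += dt * (c*c + 3 - 2*c)
```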
However, if we choose $\displaystyle{\Psi(0) = \frac{e^{2z}}{e^2}}$ then this coherent state is complex balanced except for complexes of degree 0 or 1, since it was complex balanced in the previous example, and the only new feature of this example is that single amoebae are being introduced, and the complexes involved in that transition have degree 0 and 1. So, the condition of the theorem does hold. So, the condition in the theorem is necessary but not sufficient. However, it is easy to check, and we can use it to show that in many cases, coherent states must cease to be coherent. The Large-Number Limit for Reaction Networks (Part 1) 1 July, 2013 Waiting for the other shoe to drop. This is a figure of speech that means ‘waiting for the inevitable consequence of what’s come so far’. Do you know where it comes from? You have to imagine yourself in an apartment on the floor below someone who is taking off their shoes. When you hear one, you know the next is coming. There’s even an old comedy routine about this: A guest who checked into an inn one night was warned to be quiet because the guest in the room next to his was a light sleeper. As he undressed for bed, he dropped one shoe, which, sure enough, awakened the other guest. He managed to get the other shoe off in silence, and got into bed. An hour later, he heard a pounding on the wall and a shout: “When are you going to drop the other shoe?” When we were working on math together, James Dolan liked to say “the other shoe has dropped” whenever an inevitable consequence of some previous realization became clear. There’s also the mostly British phrase the penny has dropped. You say this when someone finally realizes the situation they’re in. But sometimes one realization comes after another, in a long sequence. Then it feels like it’s raining shoes! I guess that’s a rather strained metaphor. Perhaps falling like dominoes is better for these long chains of realizations. This is how I’ve felt in my recent research on the interplay between quantum mechanics, stochastic mechanics, statistical mechanics and extremal principles like the principle of least action. The basics of these subjects should be completely figured out by now, but they aren’t—and a lot of what’s known, nobody bothered to tell most of us. So, I was surprised to rediscover that the Maxwell relations in thermodynamics are formally identical to Hamilton’s equations in classical mechanics… though in retrospect it’s obvious. Thermodynamics obeys the principle of maximum entropy, while classical mechanics obeys the principle of least action. Wherever there’s an extremal principle, symplectic geometry and equations like Hamilton’s equations are sure to follow. I was surprised to discover (or maybe rediscover, I’m not sure yet) that just as statistical mechanics is governed by the principle of maximum entropy, quantum mechanics is governed by a principle of maximum ‘quantropy’. The analogy between statistical mechanics and quantum mechanics has been known at least since Feynman and Schwinger. But this basic aspect was never explained to me! I was also surprised to rediscover that simply by replacing amplitudes by probabilities in the formalism of quantum field theory, we get a nice formalism for studying stochastic many-body systems. This formalism happens to perfectly match the ‘stochastic Petri nets’ and ‘reaction networks’ already used in subjects from population biology to epidemiology to chemistry. But now we can systematically borrow tools from quantum field theory!
All the tricks that particle physicists like—annihilation and creation operators, coherent states and so on—can be applied to problems like the battle between the AIDS virus and human white blood cells. And, perhaps because I’m a bit slow on the uptake, I was surprised when yet another shoe came crashing to the floor the other day. Because quantum field theory has, at least formally, a nice limit where Planck’s constant goes to zero, the same is true for stochastic Petri nets and reaction networks! In quantum field theory, we call this the ‘classical limit’. For example, if you have a really huge number of photons all in the same state, quantum effects sometimes become negligible, and we can describe them using the classical equations describing electromagnetism: the classical Maxwell equations. In stochastic situations, it makes more sense to call this limit the ‘large-number limit’: the main point is that there are lots of particles in each state. In quantum mechanics, different observables don’t commute, so the so-called commutator matters a lot: $[A,B] = AB - BA$ These commutators tend to be proportional to Planck’s constant. So in the limit where Planck’s constant $\hbar$ goes to zero, observables commute… but commutators continue to have a ghostly existence, in the form of Poisson brackets: $\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }$ Poisson brackets are a key part of symplectic geometry—the geometry of classical mechanics. So, this sort of geometry naturally shows up in the study of stochastic Petri nets! Let me sketch how it works. I’ll start with a section reviewing stuff you should already know if you’ve been following the network theory series. The stochastic Fock space Suppose we have some finite set $S$. We call its elements species, since we think of them as different kinds of things—e.g., kinds of chemicals, or kinds of organisms. To describe the probability of having any number of things of each kind, we need the stochastic Fock space. This is the space of real formal power series in a bunch of variables, one for each element of $S.$ It won’t hurt to simply say $S = \{1, \dots, k \}$ Then the stochastic Fock space is $\mathbb{R}[[z_1, \dots, z_k ]]$ this being math jargon for the space of formal power series with real coefficients in some variables $z_1, \dots, z_k,$ one for each element of $S.$ We write $n = (n_1, \dots, n_k) \in \mathbb{N}^S$ and use this abbreviation: $z^n = z_1^{n_1} \cdots z_k^{n_k}$ We use $z^n$ to describe a state where we have $n_1$ things of the first species, $n_2$ of the second species, and so on. More generally, a stochastic state is an element $\Psi$ of the stochastic Fock space with $\displaystyle{ \Psi = \sum_{n \in \mathbb{N}^k} \psi_n \, z^n }$ $\psi_n \ge 0$ $\displaystyle{ \sum_{n \in \mathbb{N}^k} \psi_n = 1 }$ We use $\Psi$ to describe a state where $\psi_n$ is the probability of having $n_1$ things of the first species, $n_2$ of the second species, and so on. The stochastic Fock space has some important operators on it: the annihilation operators given by $\displaystyle{ a_i \Psi = \frac{\partial}{\partial z_i} \Psi }$ and the creation operators given by $\displaystyle{ a_i^\dagger \Psi = z_i \Psi }$ From these we can define the number operators: $N_i = a_i^\dagger a_i$ Part of the point is that $N_i z^n = n_i z^n$ This says the stochastic state $z^n$ is an eigenstate of all the number operators, with eigenvalues saying how many things there are of each species.
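If you like to experiment, here is a minimal Python sketch of the stochastic Fock space (my own toy encoding, not anything from the series): a state is just a dictionary mapping occupation-number tuples to coefficients, and the annihilation, creation and number operators act exactly as defined above.

```python
# A minimal sketch (mine, not from the post) of the stochastic Fock space for
# k species: a state is a dict mapping tuples n = (n_1, ..., n_k) to real
# coefficients psi_n, i.e. the formal power series sum_n psi_n z^n.

def annihilate(i, psi):
    """a_i = d/dz_i acting on a formal power series."""
    out = {}
    for n, c in psi.items():
        if n[i] > 0:
            m = n[:i] + (n[i] - 1,) + n[i+1:]
            out[m] = out.get(m, 0.0) + c * n[i]
    return out

def create(i, psi):
    """a_i^dagger = multiplication by z_i."""
    out = {}
    for n, c in psi.items():
        m = n[:i] + (n[i] + 1,) + n[i+1:]
        out[m] = out.get(m, 0.0) + c
    return out

def number(i, psi):
    """N_i = a_i^dagger a_i, so N_i z^n = n_i z^n."""
    return create(i, annihilate(i, psi))

# One species, pure state z^3: N_0 should return 3 * z^3.
psi = {(3,): 1.0}
print(number(0, psi))                      # {(3,): 3.0}

# Check [a_0, a_0^dagger] = 1 on this state:
lhs = annihilate(0, create(0, psi))
rhs = create(0, annihilate(0, psi))
print({n: lhs[n] - rhs.get(n, 0.0) for n in lhs})   # {(3,): 1.0}
```

The last check verifies $[a_i, a_i^\dagger] = 1$ on a sample state, which is the first of the commutation relations discussed next.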
The annihilation, creation, and number operators obey some famous commutation relations, which are easy to check for yourself: $[a_i, a_j] = 0$ $[a_i^\dagger, a_j^\dagger] = 0$ $[a_i, a_j^\dagger] = \delta_{i j}$ $[N_i, N_j ] = 0$ $[N_i , a_j^\dagger] = \delta_{i j} a_j^\dagger$ $[N_i , a_j] = - \delta_{i j} a_j$ The last two have easy interpretations. The first of these two implies $N_i a_i^\dagger \Psi = a_i^\dagger (N_i + 1) \Psi$ This says that if we start in some state $\Psi,$ create a thing of type $i,$ and then count the things of that type, we get one more than if we counted the number of things before creating one. $N_i a_i \Psi = a_i (N_i - 1) \Psi$ says that if we annihilate a thing of type $i$ and then count the things of that type, we get one less than if we counted the number of things before annihilating one. Introducing Planck’s constant Now let’s introduce an extra parameter into this setup. To indicate the connection to quantum physics, I’ll call it $\hbar,$ which is the usual symbol for Planck’s constant. However, I want to emphasize that we’re not doing quantum physics here! We’ll see that the limit where $\hbar \to 0$ is very interesting, but it will correspond to a limit where there are many things of each kind. We’ll start by defining $A_i = \hbar \, a_i$ $C_i = a_i^\dagger$ Here $A$ stands for ‘annihilate’ and $C$ stands for ‘create’. Think of $A$ as a rescaled annihilation operator. Using this we can define a rescaled number operator: $\widetilde{N}_i = C_i A_i$ So, we have $\widetilde{N}_i = \hbar N_i$ and this explains the meaning of the parameter $\hbar.$ The idea is that instead of counting things one at a time, we count them in bunches of size $1/\hbar.$ For example, suppose $\hbar = 1/12.$ Then we’re counting things in dozens! If we have a state $\Psi$ with $N_i \Psi = 36 \Psi$ then there are 36 things of the ith kind. But this implies $\widetilde{N}_i \Psi = 3 \Psi$ so there are 3 dozen things of the ith kind. Chemists don’t count in dozens; they count things in big bunches called moles. A mole is approximately the number of carbon atoms in 12 grams of carbon: Avogadro’s number, 6.02 × 10^23. When you count things by moles, you’re taking $\hbar$ to be 1.66 × 10^-24, the reciprocal of Avogadro’s number. So, while in quantum mechanics Planck’s constant is ‘the quantum of action’, a unit of action, here it’s ‘the quantum of quantity’: the amount that corresponds to one thing. We can easily work out the commutation relations of our new rescaled operators: $[A_i, A_j] = 0$ $[C_i, C_j] = 0$ $[A_i, C_j] = \hbar \, \delta_{i j}$ $[\widetilde{N}_i, \widetilde{N}_j ] = 0$ $[\widetilde{N}_i , C_j] = \hbar \, \delta_{i j} C_j$ $[\widetilde{N}_i , A_j] = - \hbar \, \delta_{i j} A_j$ These are just what you see in quantum mechanics! The commutators are all proportional to $\hbar.$ Again, we can understand what these relations mean if we think a bit. For example, the commutation relation for $\widetilde{N}_i$ and $C_i$ says $\widetilde{N}_i C_i \Psi = C_i (\widetilde{N}_i + \hbar) \Psi$ This says that if we start in some state $\Psi,$ create a thing of type $i,$ and then count the things of that type, we get $\hbar$ more than if we counted the number of things before creating one. This is because we are counting things not one at a time, but in bunches of size $1/\hbar.$ You may be wondering why I defined the rescaled annihilation operator to be $\hbar$ times the original annihilation operator: $A_i = \hbar \, a_i$ but left the creation operator unchanged: $C_i = a_i^\dagger$ I’m wondering that too!
I’m not sure I’m doing things the best way yet. I’ve also tried another more symmetrical scheme, taking $A_i = \sqrt{\hbar} \, a_i$ and $C_i = \sqrt{\hbar} \, a_i^\dagger.$ This gives the same commutation relations, but certain other formulas become more unpleasant. I’ll explain that some other day. Next, we can take the limit as $\hbar \to 0$ and define Poisson brackets of operators by $\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }$ To make this rigorous it’s best to proceed algebraically. For this we treat $\hbar$ as a formal variable rather than a specific number. So, our number system becomes $\mathbb{R}[\hbar],$ the algebra of polynomials in $\hbar$. We define the Weyl algebra to be the algebra over $\mathbb{R}[\hbar]$ generated by elements $A_i$ and $C_i$ obeying $[A_i, A_j] = 0$ $[C_i, C_j] = 0$ $[A_i, C_j] = \hbar \, \delta_{i j}$ We can set $\hbar = 0$ in this formalism; then the Weyl algebra reduces to the algebra of polynomials in the variables $A_i$ and $C_i.$ This algebra is commutative! But we can define a Poisson bracket on this algebra by $\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }$ It takes a bit of work to explain to algebraists exactly what’s going on in this formula, because it involves an interplay between the algebra of polynomials in $A_i$ and $C_i,$ which is commutative, and the Weyl algebra, which is not. I’ll be glad to explain the details if you want. But if you’re a physicist, you can just follow your nose and figure out what the formula gives. For example: $\begin{array}{ccl} \{A_i, C_j\} &=& \displaystyle{ \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A_i, C_j] } \\ \\ &=& \displaystyle{ \lim_{\hbar \to 0} \; \frac{1}{\hbar} \, \hbar \, \delta_{i j} } \\ \\ &=& \delta_{i j} \end{array}$ Similarly, we have: $\{ A_i, A_j \} = 0$ $\{ C_i, C_j \} = 0$ $\{ A_i, C_j \} = \delta_{i j}$ $\{ \widetilde{N}_i, \widetilde{N}_j \} = 0$ $\{ \widetilde{N}_i , C_j \} = \delta_{i j} C_j$ $\{ \widetilde{N}_i , A_j \} = - \delta_{i j} A_j$ I should probably use different symbols for $A_i, C_i$ and $\widetilde{N}_i$ after we’ve set $\hbar = 0,$ since they’re really different now, but I don’t have the patience to make up more names for them. Now, we can think of $A_i$ and $C_i$ as coordinate functions on a 2k-dimensional vector space, and all the polynomials in $A_i$ and $C_i$ as functions on this space. This space is what physicists would call a ‘phase space’: they use this kind of space to describe the position and momentum of a particle, though here we are using it in a different way. Mathematicians would call it a ‘symplectic vector space’, because it’s equipped with a special structure, called a symplectic structure, that lets us define Poisson brackets of smooth functions on this space. We won’t need to get into that now, but it’s important—and it makes me happy to see it here. There’s a lot more to do, but not today. My main goal is to understand, in a really elegant way, how the master equation for a stochastic Petri net reduces to the rate equation in the large-number limit. What we’ve done so far is start thinking of this as a $\hbar \to 0$ limit. This should let us borrow ideas about classical limits in quantum mechanics, and apply them to stochastic mechanics. Stay tuned! Quantum Techniques for Reaction Networks 11 June, 2013 Fans of the network theory series might like to look at this paper: • John Baez, Quantum techniques for reaction networks. and I would certainly appreciate comments and corrections.
This paper tackles a basic question we never got around to discussing: how the probabilistic description of a system where bunches of things randomly interact and turn into other bunches of things can reduce to a deterministic description in the limit where there are lots of things! Mathematically, such systems are given by ‘stochastic Petri nets’, or if you prefer, ‘stochastic reaction networks’. These are just two equivalent pictures of the same thing. For example, we could describe some chemical reactions using this Petri net: but chemists would use this reaction network:

C + O₂ → CO₂
CO₂ + NaOH → NaHCO₃
NaHCO₃ + HCl → H₂O + NaCl + CO₂

Making either of them ‘stochastic’ merely means that we specify a ‘rate constant’ for each reaction, saying how probable it is. For any such system we get a ‘master equation’ describing how the probability of having any number of things of each kind changes with time. In the class I taught on this last quarter, the students and I figured out how to derive from this an equation saying how the expected number of things of each kind changes with time. Later I figured out a much slicker argument… but either way, we get this: Theorem. For any stochastic reaction network and any stochastic state $\Psi(t)$ evolving in time according to the master equation, we have $\displaystyle{ \frac{d}{dt} \langle N \Psi(t) \rangle } = \displaystyle{\sum_{\tau \in T}} \, r(\tau) \, (t(\tau) - s(\tau)) \; \left\langle N^{\underline{s(\tau)}}\, \Psi(t) \right\rangle$ assuming the derivative exists. Of course this will make no sense yet if you haven’t been following the network theory series! But I explain all the notation in the paper, so don’t be scared. The main point is that $\langle N \Psi (t) \rangle$ is a vector listing the expected number of things of each kind at time $t.$ The equation above says how this changes with time… but it closely resembles the ‘rate equation’, which describes the evolution of chemical systems in a deterministic way. And indeed, the next big theorem says that the master equation actually implies the rate equation when the probability of having various numbers of things of each kind is given by a product of independent Poisson distributions. In this case $\Psi(t)$ is what people in quantum physics call a ‘coherent state’. So: Theorem. Given any stochastic reaction network, let $\Psi(t)$ be a mixed state evolving in time according to the master equation. If $\Psi(t)$ is a coherent state when $t = t_0,$ then $\langle N \Psi(t) \rangle$ obeys the rate equation when $t = t_0.$ In most cases, this only applies exactly at one moment of time: later $\Psi(t)$ will cease to be a coherent state. Then we must resort to the previous theorem to see how the expected number of things of each kind changes with time. But sometimes our state $\Psi(t)$ will stay coherent forever! For one case where this happens, see the companion paper, which I blogged about a little while ago: • John Baez and Brendan Fong, Quantum techniques for studying equilibrium in reaction networks. We wrote this first, but logically it comes after the one I just finished now! All this material will get folded into the book I’m writing with Jacob Biamonte. There are just a few remaining loose ends that need to be tied up. Quantum Techniques for Studying Equilibrium in Reaction Networks 16 May, 2013 The summer before last, I invited Brendan Fong to Singapore to work with me on my new ‘network theory’ project.
He quickly came up with a nice new proof of a result about mathematical chemistry. We blogged about it, and I added it to my book, but then he became a grad student at Oxford and got distracted by other kinds of networks—namely, Bayesian networks. So, we’ve just now finally written up this result as a self-contained paper: • John Baez and Brendan Fong, Quantum techniques for studying equilibrium in reaction networks. Check it out and let us know if you spot mistakes or stuff that’s not clear! The idea, in brief, is to use math from quantum field theory to give a somewhat new proof of the Anderson–Craciun–Kurtz theorem. This remarkable result says that in many cases, we can start with an equilibrium solution of the ‘rate equation’ which describes the behavior of chemical reactions in a deterministic way in the limit of a large number of molecules, and get an equilibrium solution of the ‘master equation’ which describes chemical reactions probabilistically for any number of molecules. The trick, in our approach, is to start with a chemical reaction network, which is something like this: and use it to write down a Hamiltonian describing the time evolution of the probability that you have various numbers of each kind of molecule: A, B, C, D, E, … Using ideas from quantum mechanics, we can write this Hamiltonian in terms of annihilation and creation operators—even though our problem involves probability theory, not quantum mechanics! Then we can write down the equilibrium solution as a ‘coherent state’. In quantum mechanics, that’s a quantum state that approximates a classical one as well as possible. All this is part of a larger plan to take tricks from quantum mechanics and apply them to ‘stochastic mechanics’, simply by working with real numbers representing probabilities instead of complex numbers representing amplitudes! I should add that Brendan’s work on Bayesian networks is also very cool, and I plan to talk about it here and even work it into the grand network theory project I have in mind. But this may take quite a long time, so for now you should read his paper: • Brendan Fong, Causal theories: a categorical perspective on Bayesian networks. Network Theory (Part 29) 23 April, 2013 I’m talking about electrical circuits, but I’m interested in them as models of more general physical systems. Last time we started seeing how this works. We developed an analogy between electrical circuits and physical systems made of masses and springs, with friction:

│ Electronics                  │ Mechanics                 │
│ charge: $Q$                  │ position: $q$             │
│ current: $I = \dot{Q}$       │ velocity: $v = \dot{q}$   │
│ flux linkage: $\lambda$      │ momentum: $p$             │
│ voltage: $V = \dot{\lambda}$ │ force: $F = \dot{p}$      │
│ inductance: $L$              │ mass: $m$                 │
│ resistance: $R$              │ damping coefficient: $r$  │
│ inverse capacitance: $1/C$   │ spring constant: $k$      │

But this is just the first of a large set of analogies. Let me list some, so you can see how wide-ranging they are! More analogies People in system dynamics often use effort as a term to stand for anything analogous to force or voltage, and flow as a general term to stand for anything analogous to velocity or electric current. They call these variables $e$ and $f.$ To me it’s important that force is the time derivative of momentum, and velocity is the time derivative of position. Following physicists, I write momentum as $p$ and position as $q.$ So, I’ll usually write effort as $\dot{p}$ and flow as $\dot{q}$.
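To get a concrete feel for the first chart above, here is a small numerical sketch of my own (not from the post): with the dictionary charge ↔ position, inductance ↔ mass, resistance ↔ damping, inverse capacitance ↔ spring constant, a source-free series circuit and a damped mass on a spring obey literally the same equation, so one integrator serves for both. The parameter values are arbitrary.

```python
# A minimal sketch (mine, not from the post) of the chart above in action:
# with L <-> m, R <-> r, 1/C <-> k, and charge <-> position, the source-free
# series-circuit equation  L q'' + R q' + q/C = 0  and the damped spring
# m q'' + r q' + k q = 0  are the same ODE, integrated here with a crude
# semi-implicit Euler step.

def trajectory(inertia, damping, stiffness, q0=1.0, v0=0.0, dt=1e-3, steps=5000):
    q, v = q0, v0
    out = []
    for _ in range(steps):
        a = -(damping * v + stiffness * q) / inertia
        v += a * dt
        q += v * dt
        out.append(q)
    return out

# A circuit loop with L = 2, R = 0.5, 1/C = 3 ...
circuit = trajectory(inertia=2.0, damping=0.5, stiffness=3.0)
# ... and a mass on a spring with m = 2, r = 0.5, k = 3 (the same call on purpose):
spring = trajectory(inertia=2.0, damping=0.5, stiffness=3.0)

print(max(abs(a - b) for a, b in zip(circuit, spring)))   # 0.0: identical trajectories
print(circuit[::1000])                                    # a damped oscillation
```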
Of course, ‘position’ is a term special to mechanics; it’s nice to have a general term for the thing whose time derivative is flow, that applies to any context. People in systems dynamics seem to use displacement as that general term. It would also be nice to have a general term for the thing whose time derivative is effort… but I don’t know one. So, I’ll use the word momentum. Now let’s see the analogies! Let’s see how displacement $q$, flow $\dot{q},$ momentum $p$ and effort $\dot{p}$ show up in several subjects: │ │displacement: $q$│flow: $\dot q$ │momentum: $p$ │effort: $\dot p$ │ │Mechanics: translation │position │velocity │momentum │force │ │Mechanics: rotation │angle │angular velocity│angular momentum │torque │ │Electronics │charge │current │flux linkage │voltage │ │Hydraulics │volume │flow │pressure momentum │pressure │ │Thermal Physics │entropy │entropy flow │temperature momentum│temperature │ │Chemistry │moles │molar flow │chemical momentum │chemical potential│ We’d been considering mechanics of systems that move along a line, via translation, but we can also consider mechanics for systems that turn round and round, via rotation. So, there are two rows for mechanics here. There’s a row for electronics, and then a row for hydraulics, which is closely analogous. In this analogy, a pipe is like a wire. The flow of water plays the role of current. Water pressure plays the role of electrostatic potential. The difference in water pressure between two ends of a pipe is like the voltage across a wire. When water flows through a pipe, the power equals the flow times this pressure difference—just as in an electrical circuit the power is the current times the voltage across the wire. A resistor is like a narrowed pipe: An inductor is like a heavy turbine placed inside a pipe: this makes the water tend to keep flowing at the same rate it’s already flowing! In other words, it provides a kind of ‘inertia’ analogous to mass. A capacitor is like a tank with pipes coming in from both ends, and a rubber sheet dividing it in two lengthwise: When studying electrical circuits as a kid, I was shocked when I first learned that capacitors don’t let the electrons through: it didn’t seem likely you could do anything useful with something like that! But of course you can. Similarly, this gizmo doesn’t let the water through. A voltage source is like a compressor set up to maintain a specified pressure difference between the input and output: Similarly, a current source is like a pump set up to maintain a specified flow. Finally, just as voltage is the time derivative of a fairly obscure quantity called ‘flux linkage’, pressure is the time derivative of an even more obscure quantity which has no standard name. I’m calling it ‘pressure momentum’, thanks to the analogy momentum: force :: pressure momentum: pressure Just as pressure has units of force per area, pressure momentum has units of momentum per area! People invented this analogy back when they were first struggling to understand electricity, before electrons had been observed: • Hydraulic analogy, Wikipedia. The famous electrical engineer Oliver Heaviside pooh-poohed this analogy, calling it the “drain-pipe theory”. I think he was making fun of William Henry Preece. Preece was another electrical engineer, who liked the hydraulic analogy and disliked Heaviside’s fancy math. 
In his inaugural speech as president of the Institution of Electrical Engineers in 1893, Preece proclaimed: True theory does not require the abstruse language of mathematics to make it clear and to render it acceptable. All that is solid and substantial in science and usefully applied in practice, have been made clear by relegating mathematic symbols to their proper store place—the study. According to the judgement of history, Heaviside made more progress in understanding electromagnetism than Preece. But there’s still a nice analogy between electronics and hydraulics. And I’ll eventually use the abstruse language of mathematics to make it very precise! But now let’s move on to the row called ‘thermal physics’. We could also call this ‘thermodynamics’. It works like this. Say you have a physical system in thermal equilibrium and all you can do is heat it up or cool it down ‘reversibly’—that is, while keeping it in thermal equilibrium all along. For example, imagine a box of gas that you can heat up or cool down. If you put a tiny amount $dE$ of energy into the system in the form of heat, then its entropy increases by a tiny amount $dS.$ And they’re related by this equation: $dE = TdS$ where $T$ is the temperature. Another way to say this is $\displaystyle{ \frac{dE}{dt} = T \frac{dS}{dt} }$ where $t$ is time. On the left we have the power put into the system in the form of heat. But since power should be ‘effort’ times ‘flow’, on the right we should have ‘effort’ times ‘flow’. It makes some sense to call $dS/dt$ the ‘entropy flow’. So temperature, $T,$ must play the role of ‘effort’. This is a bit weird. I don’t usually think of temperature as a form of ‘effort’ analogous to force or torque. Stranger still, our analogy says that ‘effort’ should be the time derivative of some kind of ‘momentum’, So, we need to introduce temperature momentum: the integral of temperature over time. I’ve never seen people talk about this concept, so it makes me a bit nervous. But when we have a more complicated physical system like a piston full of gas in thermal equilibrium, we can see the analogy working. Now we have $dE = TdS - PdV$ The change in energy $dE$ of our gas now has two parts. There’s the change in heat energy $TdS$, which we saw already. But now there’s also the change in energy due to compressing the piston! When we change the volume of the gas by a tiny amount $dV,$ we put in energy $-PdV.$ Now look back at the first chart I drew! It says that pressure is a form of ‘effort’, while volume is a form of ‘displacement’. If you believe that, the equation above should help convince you that temperature is also a form of effort, while entropy is a form of displacement. But what about the minus sign? That’s no big deal: it’s the result of some arbitrary conventions. $P$ is defined to be the outward pressure of the gas on our piston. If this is positive, reducing the volume of the gas takes a positive amount of energy, so we need to stick in a minus sign. I could eliminate this minus sign by changing some conventions—but if I did, the chemistry professors at UCR would haul me away and increase my heat energy by burning me at the stake. Speaking of chemistry: here’s how the chemistry row in the analogy chart works. Suppose we have a piston full of gas made of different kinds of molecules, and there can be chemical reactions that change one kind into another. 
Now our equation gets fancier:

$\displaystyle{ dE = TdS - PdV + \sum_i \mu_i dN_i }$

Here $N_i$ is the number of molecules of the ith kind, while $\mu_i$ is a quantity called a chemical potential. The chemical potential simply says how much energy it takes to increase the number of molecules of a given kind. So, we see that chemical potential is another form of effort, while number of molecules is another form of displacement.

But chemists are too busy to count molecules one at a time, so they count them in big bunches called ‘moles’. A mole is the number of atoms in 12 grams of carbon-12. That’s roughly $6.022 \times 10^{23}$ atoms. This is called Avogadro’s constant. If we used 1 gram of hydrogen, we’d get a very close number called ‘Avogadro’s number’, which leads to lots of jokes: (He must be desperate because he looks so weird… sort of like a mole!)

So, instead of saying that the displacement in chemistry is called ‘number of molecules’, you’ll sound more like an expert if you say ‘moles’. And the corresponding flow is called molar flow. The truly obscure quantity in this row of the chart is the one whose time derivative is chemical potential! I’m calling it chemical momentum simply because I don’t know another name.

Why are linear and angular momentum so famous compared to pressure momentum, temperature momentum and chemical momentum? I suspect it’s because the laws of physics are symmetrical under translations and rotations. When the assumptions of Noether’s theorem hold, this guarantees that the total momentum and angular momentum of a closed system are conserved. Apparently the laws of physics lack the symmetries that would make the other kinds of momentum be conserved.

This suggests that we should dig deeper and try to understand more deeply how this chart is connected to ideas in classical mechanics, like Noether’s theorem or symplectic geometry. I will try to do that sometime later in this series.

More generally, we should try to understand what gives rise to a row in this analogy chart. Are there lots of rows I haven’t talked about yet, or just a few? There are probably lots. But are there lots of practically important rows that I haven’t talked about—ones that can serve as the basis for new kinds of engineering? Or does something about the structure of the physical world limit the number of such rows?

Mildly defective analogies

Engineers care a lot about dimensional analysis. So, they often make a big deal about the fact that while effort and flow have different dimensions in different rows of the analogy chart, the following four things are always true:

• $pq$ has dimensions of action (= energy × time)
• $\dot{p} q$ has dimensions of energy
• $p \dot{q}$ has dimensions of energy
• $\dot{p} \dot{q}$ has dimensions of power (= energy / time)

In fact any one of these things implies all the rest. These facts are important when designing ‘mixed systems’, which combine different rows in the chart. For example, in mechatronics, we combine mechanical and electronic elements in a single circuit! And in a hydroelectric dam, power is converted from hydraulic to mechanical and then electric form:

One goal of network theory should be to develop a unified language for studying mixed systems! Engineers have already done most of the hard work. And they’ve realized that thanks to conservation of energy, working with pairs of flow and effort variables whose product has dimensions of power is very convenient. It makes it easy to track the flow of energy through these systems.
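One way to make this dimensional bookkeeping concrete is to check it with a units library. The sketch below uses the Python pint package (my own choice, not something used in the post) to confirm that effort × flow has dimensions of power in several rows, and previews why the heat-flow analogy discussed next is ‘mildly defective’.

```python
# Dimensional sanity check of "effort x flow has dimensions of power" for a few
# rows of the chart, and of why the heat-flow row is 'mildly defective'.
# My own sketch using the pint units library; the row names are mine.
import pint

ureg = pint.UnitRegistry()
watt = ureg.watt.dimensionality

pairs = {
    "mechanics (translation)": ureg.newton * (ureg.meter / ureg.second),        # force x velocity
    "electronics":             ureg.volt * ureg.ampere,                         # voltage x current
    "hydraulics":              ureg.pascal * (ureg.meter ** 3 / ureg.second),   # pressure x volumetric flow
    "thermal (entropy flow)":  ureg.kelvin * (ureg.joule / ureg.kelvin / ureg.second),  # T x dS/dt
    "heat flow (defective)":   ureg.kelvin * ureg.watt,                         # T x heat flow
}
for name, q in pairs.items():
    print(f"{name:25s} power? {q.dimensionality == watt}")
```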
However, people have tried to extend the analogy chart to include ‘mildly defective’ examples where effort times flow doesn’t have dimensions of power. The two most popular are these:

│ │displacement: $q$│flow: $\dot q$│momentum: $p$ │effort: $\dot p$│
│Heat flow│heat │heat flow │temperature momentum │temperature │
│Economics│inventory │product flow │economic momentum │product price │

The heat flow analogy comes up because people like to think of heat flow as analogous to electrical current, and temperature as analogous to voltage. Why? Because an insulated wall acts a bit like a resistor! The current flowing through a resistor is a function of the voltage across it. Similarly, the heat flowing through an insulated wall is about proportional to the difference in temperature between the inside and the outside.

However, there’s a difference. Current times voltage has dimensions of power. Heat flow times temperature does not have dimensions of power. In fact, heat flow by itself already has dimensions of power! So, engineers feel somewhat guilty about this analogy. Being a mathematical physicist, I see a possible way out: use units where temperature is dimensionless! In fact such units are pretty popular in some circles. But I don’t know if this solution is a real one, or whether it causes some sort of trouble.

In the economic example, ‘energy’ has been replaced by ‘money’. So in other words, ‘inventory’ times ‘product price’ has units of money. And so does ‘product flow’ times ‘economic momentum’! I’d never heard of economic momentum before I started studying these analogies, but I didn’t make up that term. It’s the thing whose time derivative is ‘product price’. Apparently economists have noticed a tendency for rising prices to keep rising, and falling prices to keep falling… a tendency toward ‘conservation of momentum’ that doesn’t fit into their models of rational behavior.

I’m suspicious of any attempt to make economics seem like physics. Unlike elementary particles or rocks, people don’t seem to be very well modelled by simple differential equations. However, some economists have used the above analogy to model economic systems. And I can’t help but find that interesting—even if intellectually dubious when taken too seriously.

An auto-analogy

Besides the analogy I’ve already described between electronics and mechanics, there’s another one, called ‘Firestone’s analogy’:

• F.A. Firestone, A new analogy between mechanical and electrical systems, Journal of the Acoustical Society of America 4 (1933), 249–267.

Alain Bossavit pointed this out in the comments to Part 27. The idea is to treat current as analogous to force instead of velocity… and treat voltage as analogous to velocity instead of force! In other words, switch your $p$’s and $q$’s:

│Electronics │Mechanics (usual analogy)│Mechanics (Firestone’s analogy) │
│charge │position: $q$ │momentum: $p$ │
│current │velocity: $\dot{q}$ │force: $\dot{p}$ │
│flux linkage│momentum: $p$ │position: $q$ │
│voltage │force: $\dot{p}$ │velocity: $\dot{q}$ │

This new analogy is not ‘mildly defective’: the product of effort and flow variables still has dimensions of power. But why bother with another analogy?
It may be helpful to recall this circuit from last time: It’s described by this differential equation:

$L \ddot{Q} + R \dot{Q} + C^{-1} Q = V$

We used the ‘usual analogy’ to translate it into a classical mechanics problem, and we got a problem where an object of mass $L$ is hanging from a spring with spring constant $1/C$ and damping coefficient $R,$ and feeling an additional external force $F:$

$m \ddot{q} + r \dot{q} + k q = F$

And that’s fine. But there’s an intuitive sense in which all three forces are acting ‘in parallel’ on the mass, rather than in series. In other words, all side by side, instead of one after the other.

Using Firestone’s analogy, we get a different classical mechanics problem, where the three forces are acting in series. The spring is connected to a source of friction, which in turn is connected to an external force. This may seem a bit mysterious. But instead of trying to explain it, I’ll urge you to read his paper, which is short and clearly written.

I instead want to make a somewhat different point, which is that we can take a mechanical system, convert it to an electrical one following the usual analogy, and then convert back to a mechanical one using Firestone’s analogy. This gives us an ‘auto-analogy’ between mechanics and itself, which switches $p$ and $q.$ And although I haven’t been able to figure out why from Firestone’s paper, I have other reasons for feeling sure this auto-analogy should contain a minus sign. For example:

$p \mapsto q, \qquad q \mapsto -p$

In other words, it should correspond to a 90° rotation in the $(p,q)$ plane. There’s nothing sacred about whether we rotate clockwise or counterclockwise; we can equally well do this:

$p \mapsto -q, \qquad q \mapsto p$

But we need the minus sign to get a so-called symplectic transformation of the $(p,q)$ plane. And from my experience with classical mechanics, I’m pretty sure we want that. If I’m wrong, please let me know! I have a feeling we should revisit this issue when we get more deeply into the symplectic aspects of circuit theory. So, I won’t go on now.

The analogies I’ve been talking about are studied in a branch of engineering called system dynamics. You can read more about it here:

• Dean C. Karnopp, Donald L. Margolis and Ronald C. Rosenberg, System Dynamics: a Unified Approach, Wiley, New York, 1990.
• Forbes T. Brown, Engineering System Dynamics: a Unified Graph-Centered Approach, CRC Press, Boca Raton, 2007.
• Francois E. Cellier, Continuous System Modelling, Springer, Berlin, 1991.

System dynamics already uses lots of diagrams of networks. One of my goals in weeks to come is to explain the category theory lurking behind these diagrams.
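The minus-sign claim is easy to verify directly: a 2×2 matrix $M$ acts symplectically on the $(p,q)$ plane exactly when $M^T J M = J$ for the standard form $J$. A small numeric check (my own, not from the post) that the signed swap passes and the plain swap fails:

```python
# Checking the minus-sign claim above: the swap with a sign is a symplectic
# (area- and orientation-preserving) map of the (p, q) plane, the plain swap is not.
# My own sketch, not from the post.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])        # standard symplectic form on (p, q)

rotate = np.array([[0.0, 1.0], [-1.0, 0.0]])   # p -> q, q -> -p  (90 degree rotation)
swap   = np.array([[0.0, 1.0], [1.0, 0.0]])    # p -> q, q -> p   (no sign)

for name, M in [("p->q, q->-p", rotate), ("p->q, q->p", swap)]:
    symplectic = np.allclose(M.T @ J @ M, J)
    print(f"{name:12s} symplectic: {symplectic}")
```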
28: Measure and integration [Search][Subject Index][MathMap][Tour][Help!] Measure theory and integration is the study of lengths, surface area, and volumes in general spaces. This is a critical feature of a full development of integration theory; moreover, it provides the basic framework for probability theory. Measure theory is a meeting place between the tame applicability of real functions and the wild possibilities of set theory. This is the setting of fractals. For numerical integration of real functions see Numerical Analysis Treated here are measure theory both abstractly and on the real line. For measure theory and analysis on Lie groups, see 43-XX. For measure and integration on infinite-dimensional vector spaces see 46-XX and 47-XX. The Borel sets and related families are constructed as a part of "descriptive" set theory (now in section 03E). Chaotic attractors are treated in 37: Dynamical Systems; these may lead to fractal sets. Many common and important indefinite integrals cannot be expressed in terms of the elementary functions but are themselves studied in 33: Special Functions. This includes the elliptic functions, the gamma function, the Fresnel integrals, and so on. (Indeed, many of these functions are defined by integrals). There is a general theory of computing anti-derivatives in "closed form"; this isn't really part of the study of integration at all. See rather 12H: Differential and difference algebra Browse all (old) classifications for this area at the AMS. Bourbaki, N., "Integration", separate chapters published separately by Hermann, Paris ca. 1969 Bear, H. S.: "A primer of Lebesgue integration", Academic Press, Inc., San Diego, CA, 1995. 163 pp. ISBN 0-12-083970-9 MR96f:28001 Cohn, Donald L.: "Measure Theory", Birkhäuser Boston, Inc., Boston, MA, 1993. 373 pp. ISBN 0-8176-3003-1 MR98b:28001 (Reprint of the 1980 original: see MR81k:28001.) Ulam, S. M.: "What is measure?", Amer. Math. Monthly 50, (1943). 597--602. MR5,113g Birkhoff, G. D.: "What is the ergodic theorem?" Amer. Math. Monthly 49, (1942). 222--226. MR4,15b Schanuel, Stephen H.: "What is the length of a potato? An introduction to geometric measure theory" Categories in continuum physics (Buffalo, N.Y., 1982), 118--126; Lecture Notes in Math., 1174, Springer, Berlin, 1986. There is a newsgroup sci.fractals; there is a (somewhat dated!) Fractal FAQ for it. Handbooks of integrals are common; particularly massive is the set of integral tables by Gradshteyn, I.S. and Ryzhik, I.M. "Tables of Integrals, Series, and Products", (5th ed, 1993), San Diego CA: Academic Press. Somewhat closer to a textbook (offering some discussion of the principal themes) is Zwillinger, Daniel: "Handbook of integration", Jones and Bartlett Publishers, Boston, MA, 1992. ISBN 0-86720-293-9. Online integrators from Wolfram Inc. and Fateman. (The latter calls the former if it gets stuck.) The GAMS software tree has a node for numerical evaluation of definite integrals You can reach this page through http://www.math-atlas.org/welcome.html Last modified 2000/01/24 by Dave Rusin. Mail:
Trigonometry With Infotrac Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
Recommended book for Optimisation?

I'm looking to do a course on Optimisation, however there was no prescribed textbook and I'm a bit wary of doing a course without a textbook to reference. There was a generalised list given, of like 10 textbooks, but this is a bit too much, especially with 3 other subjects to do! Here is the general outline, perhaps someone can recommend 1 - 2 books?

Overview: Optimization is the study of problems in which we wish to optimize (either maximize or minimize) a function (usually of several variables) often subject to a collection of restrictions on these variables. The restrictions are known as constraints and the function to be optimized is the objective function. Optimization problems are widespread in the modelling of real world systems, and cover a very broad range of applications. Problems of engineering design (such as the design of electronic circuits subject to a tolerancing and tuning provision), information technology (such as the extraction of meaningful information from large databases and the classification of data), financial decision making and investment planning (such as the selection of optimal investment portfolios), and transportation management and so on arise in the form of a multi-variable optimization problem or an optimal control problem.

Introduction: What is an optimization problem? Areas of applications of optimization. Modelling of real life optimization problems.

Multi-variable optimization: Formulation of multi-variable optimization problems; structure of optimization problems: objective functions and constraints. Mathematical background: multi-variable calculus and linear algebra; (strict) local and (strict) global minimizers and maximizers; convex sets, convex and concave functions; global extrema and uniqueness of solutions.

Optimality conditions: First and second order conditions for unconstrained problems; Lagrange multiplier conditions for equality constrained problems; Kuhn-Tucker conditions for inequality constrained problems.

Numerical Methods for Unconstrained Problems: Steepest descent method, Newton's method, conjugate gradient methods.

Numerical Methods for Constrained Problems: Penalty methods.

Optimal Control: What is an optimal control problem? Areas of applications of optimal control. Mathematical background: ordinary differential equations and systems of linear differential equations. The Pontryagin maximum principle: Autonomous control problems; unbounded
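For what it's worth, the steepest descent method listed under "Numerical Methods for Unconstrained Problems" is only a few lines of code. Here is a minimal sketch (the quadratic objective and the fixed step size are made up purely for illustration, not part of the course outline):

```python
# A minimal sketch of the steepest descent method on a made-up quadratic
# objective, with a fixed step size chosen by hand.  Not from the outline above.
import numpy as np

def objective(x):
    return 0.5 * x[0] ** 2 + 2.0 * x[1] ** 2        # f(x, y) = x^2/2 + 2 y^2

def gradient(x):
    return np.array([x[0], 4.0 * x[1]])

x = np.array([4.0, -3.0])        # arbitrary starting point
step = 0.2                       # fixed step length (a line search would be better)
for _ in range(100):
    g = gradient(x)
    if np.linalg.norm(g) < 1e-8:     # first-order optimality condition roughly satisfied
        break
    x = x - step * g
print(x, objective(x))           # converges to the minimizer (0, 0)
```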
a question about diagonal prikry forcing

Suppose <\kappa_n|n<\omega> is a strictly increasing sequence of measurable cardinals, \kappa is the limit of this sequence. For each n<\omega, U_n is a normal measure on \kappa_n. P is the diagonal Prikry forcing corresponding to the \kappa_n's and U_n's. Suppose g is a P-generic sequence over V. We know that for each strictly increasing sequence x of length \omega such that each x(i)<\kappa_i and x\in{V}, x is eventually dominated by g.

In V[g], suppose A is a subset of \kappa, A is not in V. Is there a strictly increasing sequence y of length \omega such that each y(i)<\kappa_i and y\in{V[A]}, y is not eventually dominated by g? (g can eventually dominate all such sequences in V, V[A] is greater than V, I feel g can not eventually dominate all such sequences in V[A].)

Can you be more explicit about P? What are the conditions and the order, etc.? – Joel David Hamkins Feb 10 '10 at 13:48

Hi. Every condition of P is an ordered pair (s,F). s is a strictly increasing finite sequence such that each s(i)<\kappa_i. F is a function, dom(F)=\omega, for each i<\omega, F(i)\in{U_i}. (s,F) and (t,H) are two conditions. (s,F) is stronger than (t,H) means: (i) s end extends t; (ii) for each i<\omega, F(i) is a subset of H(i); (iii) for each i, if |t|\leqslant{i}<|s|, s(i)\in{H(i)}. Also, I do not know whether this forcing should be called ``diagonal prikry forcing''. – Ant emyy Lee Feb 10 '10 at 16:50

Francois pointed out correctly that there is an A for which g does not dominate all functions in V[A]. But perhaps you meant to ask whether this is true for all A not in V? Could you clarify? Also, do you know that the Prikry property holds for this forcing? That is, can we decide any given statement by shrinking only the F(i) and not extending the stem s? – Joel David Hamkins Feb 10 '10 at

yes. I mean for every A, such that A\subseteq{\kappa} and A\notin{V}, there is such a sequence in V[A] not dominated by g. – Ant emyy Lee Feb 11 '10 at 2:47

This forcing has the Prikry property. Similarly to Prikry forcing, if \kappa is the limit of these measurable cardinals, then this forcing does not add any new bounded subset of \kappa. This forcing appears in the chapter "Prikry-type forcing" of the Handbook of Set Theory, written by Moti Gitik. It is in section 1.3 of this chapter. – Ant emyy Lee Feb 11 '10 at 2:52

Yes. The answer is obviously "yes" if $A$ is a subset of $g$, so it is sufficient to show for any subset $A\subset\kappa$ there is $A'\subset g$ such that $V[A']=V[A]$. I assume that this is known, and it struck me at the start as obviously true, but I can't recall having seen it. Here is an outline of a proof.

Write the conditions (following Gitik) as $x=\langle x_i\mid i \in\omega\rangle$, with $x_i\in\kappa_i\cup U_i$. Write $A_n=A\cap \kappa_n$. There is $x\leq^* 1^P$ which forces that $A_n$ is decided by conditions $y$ with $y_i\in U_i$ for $i>n$. It follows in particular that $A_n\in V$.

Find $x'\leq^* x$ which decides, for each $n$, the sentence "there is $z\in \dot G$ such that $z_i$ decides the value of $\dot A_n$ and $z_i\in U_i$." Let $b_n$ be the (finite) set of $i$ for which this sentence is forced to be false. Then there is $x''\leq^* x'$ and 1--1 functions $h_n\colon \Pi_{i\in b_n} x_i''\to \kappa_n$ such that $x''$ forces that $A_n=h_n(g\upharpoonright b_n)$. Set $A'=\bigcup_n b_n$. Then $V[A] = V[A']$.
How many non-equivalent sections of a regular 7-simplex? up vote 9 down vote favorite Suppose we have a regular 7-simplex in $\mathbb{R}^8$ defined by vertices <1,0,0,...,0>, <0,1,0,..,0>,...,<0,...,0,1>. A section is a 3-dimensional linear subspace of $\mathbb{R}^8$ that contains simplex centroid and three other points, each of which is a centroid of a non-empty set of simplex vertices. Two sections are equivalent if they are identical spaces under permutation of coordinates. In other words, when some permutation of coordinates is the bijection between two spaces. How many non-equivalent sections are there? Is there an efficient way to enumerate them? Motivation: visualizing symmetric priors over distributions over 8 outcomes Update 12/09 I tried an automatic search and got 49 sections, same as Peter Shor below. Here they are. Note that grouping is a bit different since I group sections with or without unexpected centroids together. No empty vertices: One empty vertex: Two empty vertices: Three empty vertices: Four empty vertices: old stuff Here's an illustration of solving this problem for 2-sections of a 3-simplex in 4 dimensions. There seems to be only 2 non-equivalent 2-sections (triangle and square). This solves the problem of visualizing entropy (contour lines) of distributions over 4 outcomes, and I'd like extend it to 8 outcomes. convex-polytopes geometry pr.probability 3 "equivalent if they define the same space": Could you expand on this? It does not have an unambiguous meaning to me, but that may be my lack. (Cool image!) – Joseph O'Rourke Sep 20 '10 at 20:28 OK, definition was ambiguous, should be fixed now. – Yaroslav Bulatov Sep 22 '10 at 3:57 The answer is 45 (I wish it had been 42, but you probably wouldn't believe me if it were). As long as you restrict yourself to 3-dimensional sections, you should be able to write a reasonably efficient computer program to enumerate them. For 30-dimensional sections of a 273-simplex, my techniques would not give an efficient algorithm. I'll try to write an actual answer when I get time. – Peter Shor Sep 24 '10 at 18:18 I cannot add ... after checking my work, the real answer is 49 (and the problem was indeed caused by my addition). – Peter Shor Sep 24 '10 at 21:02 Very nice computation. – Peter Shor Dec 22 '10 at 23:07 add comment 2 Answers active oldest votes Here's the answer. The main claims (Claims 1-4) I am fairly sure I got right, but I could easily have missed a case (or counted an extra case) in the later enumeration. If anybody finds a mistake, please comment. Let me remark that I find Claims 1-4 much more interesting than the subsequent enumeration based on them. The simplex centroid $e=(1,1,1,1,1,1,1,1)$ is included in all our hyperplanes, but I'm generally not counting it as a centroid in the discussion below (so centroid means centroid of a $k$-dimensional face, with $k < 7$). We'll divide the question into cases. The first case we deal with is when we don't have any unexpected centroids. We start with three centroids $a$, $b$, and $c$. These will automatically generate $\bar{a}$, $\bar{b}$, $\bar {c}$. We will call any centroid other than these six an unexpected centroid. We represent our centroids as subsets of {$1,2,\ldots,8$}. If we have the centroid {$1,2,3$}$=\langle 1,1,1,0,0,0,0,0\rangle $, then we automatically have the centroid {$4,5,6,7,8$}$=\langle 0,0,0,1,1,1,1,1\rangle$ corresponding to the complement of the set. 
So, to summarize, the first case consists of hyperplanes which pass through exactly six centroids: $a,b,c,\bar{a},\bar{b},\bar{c}$. Now, let's represent this case by putting the numbers {$1,2,\ldots,8$} on the vertices of a cube. The cube will have three faces corresponding to $a,b,c$ and the three opposite faces will correspond to $\bar{a},\bar{b},\bar{c}$. For example, if the sets were $a=${$1,2,3$}, $b=${ $1,5,6$} and $c=${$2,5,6,7$}, then the vertex $\bar{a}bc$ would contain {$5,6$}, the vertex $\ bar{a}\bar{b}\bar{c}$ would contain {$4,8$}, and the vertex $\bar{a}b\bar{c}$ would be empty. It's not too hard to see that whether there is an unexpected centroid only depends on the positions of the empty vertices. It's also clear that rotations and reflections of this cube give equivalent sections. Claim 1: If two adjacent vertices are empty, there is an unexpected centroid. Proof: The two adjacent vertices form an edge. We might as well rotate the cube so that the empty edge is the $ab$ edge. Then we have $a \cap b = \emptyset$. This means that $a \cup b$ is an unexpected centroid (here we have to use the fact $a \neq \bar{b}$). Example: if $a =${$1,2$}, $b = ${$3,4,5$}, then $a \cup b =${$1,2,3,4,5$} is in the linear span of $a$ and $b$. Claim 2: If two opposite vertices of the cube are empty, then there is an unexpected centroid. Proof: We can rotate the cube so the empty vertices correspond to $\bar{a}\bar{b}\bar{c}$ and $abc$. Then, for example, if $a=(1,1,1,0,0,0,0,0)$, $b=(0,0,1,1,1,1,0,0)$ and $c= (1,0,0,0,0,1,1,1)$, we can take $a+b+c-e$ where $e$ is the all-ones vector, and get $(1,0,1,0,0,1,0,0)$. Claim 3: If the odd- or even-parity vertices of the cube are empty, then there is an unexpected centroid. Proof: Rotate the cube so that $\bar{a}\bar{b}\bar{c}$ is empty. Now, every coordinate is in exactly 1 or 3 of $a,b,c$. Thus, $\frac{1}{2}(a+b+c-e)$ is an unexpected centroid. Example $a= (1,1,1,0,0,0,1,1)$, $b=(0,0,0,1,1,0,1,1)$ and $c=(0,0,0,0,0,1,1,1)$, and $\frac{1}{2}(a+b+c-e) = (0,0,0,0,0,0,1,1)$ Claim 4: If none of the situations in Claims 1,2,3 hold, then there is no unexpected centroid. Proof: We can rotate the cube so that the empty vertices are a subset of $\bar{a}bc$, $a\bar{b}c$ and $ab\bar{c}$. For there to be an unexpected centroid, you must be able to find $\alpha a + \beta b + \gamma c$ so that the coordinates of this vector take on two values. One of these coordinates is 0 (since $\bar{a}\bar{b}\bar{c}$ is not empty), meaning we can assume wlog that $\alpha, \beta, \gamma$ are either 1 or 0. But for $\alpha + \beta + \gamma$ to also be either 1 or 0, we need two of $\alpha, \beta, \gamma$ to be 0, which means that we don't get an unexpected centroid. So now, we need to enumerate the number of ways of putting 8 elements onto the vertices of a unit cube so that at least one element is on each of the nonempty vertices. Since sections are equivalent under permutations of the coordinates, we should consider these to be 8 identical elements (so the only thing that matters is how many elements are on a vertex). There are four Case A: no empty vertices. There is just 1 way of doing this: putting one element on each vertex. Case B: one empty vertex. There are 3 ways of doing this. Exactly one vertex will have two elements on it, and it can be either Hamming distance one, two, or three from the empty vertex. Case C: two empty vertices. In this case, these two vertices must have Hamming distance 2. 
The two extra elements can either be on the same vertex (3 ways) or two different vertices (7 Case D: three empty vertices. In this case, any pair of these three vertices must have Hamming distance 2, so there's only one way of arranging them. The three extra elements can either be all on the same vertex (3 ways), divided two on one vertex and one on another (7 ways), or on three different vertices (4 ways). up vote 10 This gives 28 essentially different sections with no unexpected centroids. down vote accepted We now must count the cases with unexpected centroids corresponding to Claims 1-3. We'll deal with the situation in the Claims 1,2,3 separately. Case of Claim 3 Let's start with the situation in Claim 3. First, we can assume that there are no empty vertices of the cube other than the two opposite ones (if this happens, we are in the Claim 1 situation, and we take care of it there). We now can choose another centroid $d$ in the linear span of $a,b,c,e$ so that $a \cup b \cup c \cup d$ covers every coordinate exactly twice. By the criterion that there are six non-empty vertices, none of the six intersections $a \cap b$, $a \cap c$, etc. can be empty. We need to put the eight elements into these six intersections. This corresponds to putting 8 elements on the edges of a tetrahedron so that every edge corresponds to at least one element. There are 3 ways to do this (two extra on one edge, one extra on each of two opposite edges, and one extra on each of two adjacent edges). Claim 3 thus gives 3 more non-equivalent sections. Notice that if we had analysed Claim 3 by just looking at the symmetries of the cube (as we did for the cases without unexpected centroids), we would have obtained four non-equivalent sections. Case of Claim 2 What I'd like to claim here is that this is really the situation in Claim 1 disguised. Maybe the best way to do this is by example. If we have $a=${$1,2,7,8$}, $b=$ {$3,4,7,8$}, $c=${$5,6,7,8$}, then the centroid {$7,8$} is in our hyperplane, and the hyperplane is thus generated by $a'=${$1,2$}, $b'=${$3,4$}, $c'=${$5,6$}, which is covered by Claim Case of Claim 1 Here, there are three possibilities. In the first one, there are four centroids $a$, $b$, $c$, $d$, with pairwise empty intersections so that $a \cup b \cup c \cup d =$ {$1,2,\ldots,8$}. The number of ways of doing this is the number of partitions of 8 into four non-empty parts, which is 5: {$(5,1,1,1), (4,2,1,1), (3,3,1,1),(3,2,2,1),(2,2,2,2)$}. In the second possibility, we have three pairwise disjoint centroids $a$, $b$, $c$, with $a \cup b \cup c = e$, and also another centroid $d$ so that both $d\cap x$ and $\bar{d} \cap x$ are non-empty for $x=a,b,c$. The cardinalities of $a,b,c$ could be {$4,2,2$} or {$3,3,2$}. In either case, we get two non-equivalent sections, giving 4 total non-equivalent sections. For the third possibility, we have three pairwise disjoint centroids $a$, $b$, $c$, with $a \cup b \cup c = e$, and we have two more pairwise disjoint centroids $f$ and $g$ so that $f \ cup g = a \cup b$. In this case, the cardinality of $c$ can range from 1 to 4. I'll just list representative vectors for these possibilities. The coordinates considered are those not in This gives 9 more non-equivalent sections, making 49 altogether. Could you define an "unexpected centroid"? Just one accidentally included beyond the simplex centroid + the three other starting points? 
– Joseph O'Rourke Sep 24 '10 at 20:01 1 Exactly: An unexpected centroid is one included that is not the simplex centroid, the three other starting points, or their complements. So a section with an unexpected centroid contains more than seven centroids (counting the simplex centroid). – Peter Shor Sep 24 '10 at 23:01 I'm trying to understand how the placement of centroids on the cube works. The edges are labeled by the intersection of the subsets placed at the adjacent faces, and the vertices are labeled by the triple intersections of the faces or edges, right? In the first example, I'm getting $\bar{a}\bar{b}\bar{c}$={$4,8$}. I don't yet see why it's just the empty vertices that determine unexpected centroids. Is there some deeper feature of hypercubes that is being exploited here? – j.c. Sep 26 '10 at 4:15 The cube is just coming from the fact that it has the right symmetry ... it has nothing to do with the 7 in the 7-simplex; you'd get a cube for an $n$-simplex, too. It is 3-dimensional 1 because we're asking for 3-sections (generated by 3 centroids an the overal centroid), and, as far as I can tell, it just happens to have the right symmetry. This is good, because it makes it easy to visualize. If you were looking for 4-sections, then you'd be using the 4-dimensional hypercube, and I suspect the analysis of unexpected centroids would become much more complicated. – Peter Shor Sep 27 '10 at 23:04 1 BTW, I also got 49 sections with an automatic search. A nice surprise is that section corresponding to no empty vertices is a regular octahedron. – Yaroslav Bulatov Dec 22 '10 at 4:53 show 6 more comments Preliminary thoughts: There are 8 vertices of a 7 dimensional simplex, 28 edges, and $\binom{8}{k+1}$ k-faces. Each $k$-face is a $k$-simplex and they all have distinct centroids. There are thus $\sum_{k=0}^6\ binom{8}{k+1}=2^8-2=254$ different centroids that we might choose from (you have excluded the centroid of the entire 7-simplex by asking for "other" points). 3d sections are defined by making 3 (for non-degenerate sections, different) choices, so there will be $\binom{254}{3}$. Many are still degenerate, and many are equivalent. Let the rank of a centroid be the dimensionality of the face that it arises from, and I'll write k-centroid for a centroid with rank k. Each non-degenerate section is the intersection of the 7-simplex with a 3-plane through the unique 7-centroid (I'll call that point "the origin"). Let's see if we can count distinct 3-planes We'd like to choose 3 centroids which are all "linearly independent" over the origin. Let's order the choices so that the rank is nondecreasing. First note the following duality - the centroid of a $k$-face and the centroid of the "opposing" $(6-k)$-face lie on the same line through the origin. The vertices of the k-face and the (6-k)-face form a partition of the vertices of the 7-simplex. Therefore, we can restrict the first choice to choosing centroids with rank between 0 and 3 inclusive, and let's just choose a subset of 1/2 of the 3-centroids which don't lie opposite each other, so there are 8+28+56+35=127 choices at first. (It was helpful to me here to draw the 3-d case, I'm sorry that I haven't included images). (When we start considering choices up to equivalence rather than up to defining the same planes, we probably can just reduce to 4 choices here, one for each different For the second centroid, there are 126 remaining centroids from the set above, however some of the choices lead to equivalent 2-planes. 
The principle here should be similar. We've already ruled out points which lie opposite each other from the origin, now we need to rule out points which lie opposite each other from the line defined by the 1st centroid and the origin. The first centroid $q$ defines a m-simplex $T$ if it is rank m and also defines a (6-m)-simplex $S$ which comes from the vertices not in $T$. For a k-centroid $p$ which is a centroid of a subsimplex of $T$ (or $S$), the (m-1-k)-centroid opposite $p$ in $T$ (or (6-m-1-k)-centroid opposite $p$ in $S$, as the case may be) defines the same 2-plane with the origin and $q$. We'd just like to count one of these. I'm less clear on what happens with the centroids arising from simplices with vertices both in $T$ and $S$, so perhaps I'll take a break for now. Addendum 1: I realized that what I'm describing above is really part of a vector matroid. Let the set $E$ be the set of 255 centroids of nonempty subsimplices of the 7-simplex, now treated as vectors in a vector space (they are vectors from the 7-centroid at the origin to their positions on the faces of the 7-simplex). There is a matroid $M$ represented by $E$ - the closure $\langle A\ rangle$ on a subset $A$ of $E$ is defined by adjoining to $A$ the other centroids in $E$ which are in the linear span of $A$. [Hopefully I have my terminology correct, I am just learning about matroids. Further, I hope that introducing the matroid language clarifies, rather than confuses. Please comment with any suggestions.] The problem of counting the 3-planes above can be reformulated as finding the rank 3 flats of $M$. The problem of finding 3-sections up to equivalence is then finding the orbit sets of these flats under the obvious action of the group of permutations of 8 letters $S_8$. up vote Let me work out the case of 1- and 2-sections in a 3d simplex. Let the vectors to each of the vertices of the 3 simplex be $e_1,e_2,e_3,e_4$ and note that $e_1+e_2+e_3+e_4=0$. 4 down vote The elements of the set $E$ of centroids are in one to one correspondence with nonemepty subsets of the set {1,2,3,4}. For instance {1,2} corresponds to $\frac{e_1+e_2}{2}$, the centroid of the 1-face whose vertices are $e_1$ and $e_2$. I find this combinatorial notation a little easier to work with than keeping track of the detailed geometry. As I sketched out in my "preliminary thoughts", there is a duality between the centroid of a k-face and the centroid of the (2-k)-faces opposite them. What I meant by this is just that this pair of centroids lie on a line, i.e. are linearly dependent. This fact follows from the single relation $e_1+e_2+e_3+e_4=0$ above. In the subset notation, this corresponds to a duality between a subset and its complement in {1,2,3,4}, e.g. {1} and {2,3,4}. In matroid language, this just means that the closure $\langle\{1\}\rangle$ must contain {2,3,4} as well. From this, it follows that the set of 1-sections (rank 1 flats of $M$) is in bijection with the partitions of {1,2,3,4} into 2 strict subsets. (We must also include {1,2,3,4} in every flat, since it represents the zero vector). It's easy to see that there are 4+3=7 of these: {{1},{2,3,4},{1,2,3,4}}, {{2},{1,3,4},{1,2,3,4}}, {{3},{1,2,4},{1,2,3,4}}, and {{4},{1,2,3},{1,2,3,4}} {{1,3},{2,4},{1,2,3,4}}, {{1,2},{3,4},{1,2,3,4}}, and {{1,4},{2,3},{1,2,3,4}} These 7 break into two orbits (of size 4 and 3) under the action of $S_4$, so there are only two inequivalent 1-sections. These are the center and lower left picture in Yaroslav's images. 
The set of (non-degenerate) 2-sections (rank 2 flats of $M$) is more intricate, but not hard to enumerate "by hand" by considering closures of subsets of the power set of {1,2,3,4}. Rather than write out in more detail the closure operation, let me give an example of finding the closure of a subset. Suppose we'd like to find the closure of {{1},{1,2}}, which is certainly a rank 2 subset of $M$. We must adjoin {2} because the complement of {1} in {1,2} is {2} (this corresponds to the fact that $e_2$ is in the linear span of $\{e_1,(e_1+e_2)/2\}$). Similarly, we adjoin {3,4} and {1,3,4} and {2,3,4} as they are the complements of {1,2}, {2}, and {1}, respectively in {1,2,3,4}. I hope that you can see that this arises from the relation $e_1+e_2+e_3+e_4=0$. Finally, we adjoin {1,2,3,4}, which represents the 0 vector; though again, this is just a notational artifact. Eventually, we get: {{1},{2},{1,2},{3,4},{1,3,4},{2,3,4},{1,2,3,4}} and 5 other permutations (see any of the triangles in Yaroslav's pictures) {{1,2},{2,3},{3,4},{1,4},{1,2,3,4}} and 2 other permutations (the square in Yaroslav's pictures) Unfortunately, my combinatorics is not good enough to extrapolate up to the 7-simplex. But perhaps someone else can see the pattern? Alternatively, you might try inputting this matroid into Macek or Oid and doing the computation in those systems? It also looks like SAGE has some matroid functionality as well see this link -- though for this problem, the computation needed is linear algebra, and ought to be doable in Mathematica as well. However, I can't at the moment extract from my response an algorithm much better than brute force; particularly, I don't know a good way to count orbit sets of the linear spans under $S_8$... Any ideas? add comment Not the answer you're looking for? Browse other questions tagged convex-polytopes geometry pr.probability or ask your own question.
{"url":"http://mathoverflow.net/questions/39429/how-many-non-equivalent-sections-of-a-regular-7-simplex","timestamp":"2014-04-16T16:54:21Z","content_type":null,"content_length":"84582","record_id":"<urn:uuid:28dbe297-83f8-44ce-a70c-1ca49ba57bd0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
Laguna Hills Trigonometry Tutor Find a Laguna Hills Trigonometry Tutor ...He is patient with his students and very knowledgeable in the subject. He has a way of making you understand the problems and get good grades in the class. He is in fact the best tutor I have ever had!" - Ashlie W., Huntington Beach "As a former student of Kris in an educational setting as a t... 24 Subjects: including trigonometry, Spanish, physics, writing ...I am willing to help a student based on their needs. I have a love of teaching math. I have excellent communication skills. 9 Subjects: including trigonometry, calculus, algebra 1, SAT math ...From my experience, I have found many creative ways of explaining common problems. I love getting to the point when the student finally understands the concept and tells me that they want to finish the problem on their own. I look forward to helping you with your academic needs. 14 Subjects: including trigonometry, calculus, physics, geometry ...Go into the test with the right equipment, whether that means a calculator, class notes, textbooks or simply a pencil. 8. Answer test questions they know first and then go on to the more challenging questions. 9. Use all of the clues available to them while reading, such as headlines, pictures, captions, charts, tables and graphs. 10. 18 Subjects: including trigonometry, geometry, ASVAB, GRE ...I help students •hone their critical reading comprehension skills; •master standard written English grammar, usage, punctuation, and composition; •develop killer writing skills that will serve them well throughout their K-12 school years; •discover exactly what is required of them so they can ... 41 Subjects: including trigonometry, English, reading, writing
{"url":"http://www.purplemath.com/Laguna_Hills_Trigonometry_tutors.php","timestamp":"2014-04-21T10:42:25Z","content_type":null,"content_length":"24310","record_id":"<urn:uuid:7a4b296b-e70c-4179-998f-ccdc2092a5f4>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Prizes, Awards, and Honors for Women Mathematicians Prizes and Awards Lecture Series A trick question! There is no Nobel prize in mathematics. Why not? That question has created numerous stories, myths, and anecdotes. The most popular is that Nobel's wife had an affair with a mathematician, usually said to be Mittag-Leffler, and in revenge Nobel refused to endow one of his prizes in mathematics. Too bad for this story that Nobel was a life-long bachelor! The other common story is that Mittag-Leffler, the leading Swedish mathematician of Nobel's time, antagonized Nobel and so Nobel gave no prize in mathematics to prevent Mittag-Leffler from becoming a winner. This story is also suspect, however, because Nobel and Mittag-Leffler had almost no contact with each other. Most likely Nobel simply never gave any thought to including mathematics among his list of prize areas. 1. Garding, Lars and Lars Hormander. "Why is there no Nobel prize in mathematics?" The Mathematical Intelligencer, 7(3)(1985), 73-74. 2. Ross, Peter. "Why isn't there a Nobel prize in mathematics?" Math Horizons, November 1995, p9. [Reprint from the Math Forum] 3. Why is there no Nobel Prize in Mathematics?, http://www.almaz.com/Nobel/why_no_math.html, The Nobel Prize Internet Archive The Fields Medal is considered to be the equivalent of the Nobel prize for mathematics. John Charles Fields (1863-1932), a Canadian mathematician, endowed funds in his will for an award for mathematical achievement and promise that would emphasize the international character of the mathematical endeavor. The first Fields Medal was awarded at the International Congress of Mathematics meeting in Oslo in 1936. Since 1950 the medal has been awarded every four years at the International Mathematical Congress to between 2 and 4 mathematicians. Although there is no specific age restriction in Fields' will, he did wish that the awards recognize both existing work and the promise of future achievement, so the medals have been restricted to mathematicians under the age of 40. No woman mathematician has ever won a Fields Medal. 1. IMU Awards and Prizes, [contains complete list of all winners of the Fields Medal and pictures of the front and back of the medal] 2. Historical Introduction by Alex Lopez-Ortiz, part of his FAQ site on mathematics. [Description from the Notices of the American Mathematical Society] The Ruth Lyttle Satter Prize in Mathematics was established in 1990 using funds donated to the American Mathematical Society by Joan S. Birman of Columbia University in memory of her sister, Ruth Lyttle Satter. Professor Satter earned a bachelor's degree in mathematics and then joined the research staff at AT&T Bell Laboratories during World War II. After raising a family, she received a Ph.D. in botany at the age of forty-three from the University of Connecticut at Storrs, where she later became a faculty member. Her research on the biological clocks in plants earned her recognition in the U.S. and abroad. Professor Birman requested that the prize be established to honor her sister's commitment to research and to encouraging women in science. The prize is awarded every two years to recognize an outstanding contribution to mathematics research by a woman in the previous five years. The winners have been: [Description from the Notices of the American Mathematical Society] The Executive Committee of the Association for Women in Mathematics established the annual Louise Hay Award for Contributions to Mathematics Education. 
The purpose of this award is to recognize outstanding achievements in any area of mathematics education, to be interpreted in the broadest possible sense. While Louise Hay was widely recognized for her contributions to mathematical logic and for her strong leadership as head of the Department of Mathematics, Statistics, and Computer Science at the University of Illinois at Chicago, her devotion to students and her lifelong commitment to nurturing the talent of young women and men secure her reputation as a consummate educator. The annual presentation of this award is intended to highlight the importance of mathematical education and to evoke the memory of all that Hay exemplified as a teacher, scholar, administrator, and human being. The winners have been: • 1991 Shirley Frye, National Council for Teachers of Mathematics • 1992 Olga Beaver, Williams College • 1993 Naomi Fisher, University of Illnois at Chicago • 1994 Kaye A. de Ruiz, U.S. Air Force • 1995 Etta Falconer, Spelman College [Biography] • 1996 Glenda T. Lappan, Michigan State University, and Judith Roitman, University of Kansas [Biography] • 1997 Marilyn Burns, Marilyn Burns Education Associates • 1998 Deborah Hughes Hallett, Harvard University and the University of Arizona • 1999 Martha K. Smith, University of Texas at Austin • 2000 Joan Ferrini-Mundy, Michigan State University • 2001 Patricia D. Shure, University of Michigan • 2002 Annie Sheldon, Tennessee Technological University • 2003 Katherine Puckett Layton, Beverly Hills High School and UCLA Graduate School of Education • 2004 Bozenna Pasik-Duncan, University of Kansas • 2005 Susanna S. Epp, DePaul University • 2006 Patricia Kenshaft, Monclair State University • 2007 Virginia McShane Warfield, University of Washington • 2008 Harriet S. Pollatsek, Mount Holyoke College • 2009 Deborah Loewenberg Ball, University of Michigan • 2010 Phyllis Z. Chinn, Humboldt State University • 2011 Patricia Campbell, University of Maryland • 2012 Bonnie Gold, Monmouth University • 2013 Amy Cohen, Rutgers University • 2014 Sybilla Beckmann, University of Georgia For more information about the award and the recipients, visit Louise Hay Award at the Association for Women in Mathematics web site. The Steele Prizes were established in 1970. In 1993, the AMS formalized three categories for the prizes. The prize for "seminal contributions to research" is awarded for a paper, whether recent or not, that has proved to be of fundamental or lasting importance in its field, or a model of important research. Women mathematicians who have won the prize are: • 2007 Karen Uhlenbeck, "Removable singularities in Yang-Mills fields," Comm. Math. Phys. 83 (1982), 11-29; and "Connections with L^p bounds on curvature," Comm. Math. Phys. 83 (1982), 31-42. • 2011 Ingrid Daubechies, "orthonormal bases of compactly supported wavelets," Comm. Pure Appl. Math. 41 (1988), 909-996 The Chauvenet Prize is awarded annually by the Mathematical Association of America to the author of an outstanding expository article on a mathematical topic by a member of the association. First awarded in 1925, the Prize is named for William Chauvenet, a professor of mathematics at the United States Naval Academy. It was established through a gift in 1925 from J.L. Coolidge, then MAA President. Winners of the Chauvent Prize are among the most distinguished of mathematical expositors. Women mathematicians who have won the prize are: • 1996 Joan Birman, "New Points of View in Knot Theory," AMS Bulletin, 28(1993). • 2001 Carolyn S. 
Gordon (with David L. Webb), "You can't hear the shape of a drum", American Scientist 84 (1996), 46-55. • 2002 Ellen Gethner (with Stan Wagon and Brian Wick), "A Stroll through the Gaussian Primes", American Mathematical Monthly, vol 105, no. 4 (1998), 327-337. The Euler Book Prize is awarded annually to an author or authors of an outstanding book about mathematics. The Prize is intended to recognize authors of exceptionally well written books with a positive impact on the public's view of mathematics and to encourage the writing of such books. Eligible books include mathematical monographs at the undergraduate level, histories, biographies, works of fiction, poetry; collections of essays, and works on mathematics as it is related to other areas of arts and sciences. To be considered for the Euler Prize a book must be published during the five years preceding the award and must be in English. The prize was established in 2005 and has been given every year at a national meeting of the Association, beginning in 2007, the 300th anniversary of the birth of Leonhard Euler. This award also honors Virginia and Paul Halmos whose generosity made the award possible. Women mathematicians who have won the prize are: • 2012 Daina Taimina, Crocheting Adventures with Hyperbolic Planes, A.K. Peters, 2009. The Beckenbach Book Prize, established in 1986, is the successor to the MAA Book Prize established in 1982. It is named for the late Edwin Beckenbach, a long-time leader in the publications program of the Association and a well-known professor of mathematics at the University of California at Los Angeles. The Prize of $2,500 is intended to recognize the author(s) of a distinguished, innovative book published by the MAA and to encourage the writing of such books. The award is not given on a regularly scheduled basis. To be considered for the Beckenbach Prize a book must have been published during the five years preceding the Award. Women who have won the prize are: • 1996, Constance Reid, The Search for E.T. Bell, Also Known as John Taine, Spectrum, 1993. • 2006 Jennifer Quinn (with Arthur Benjamin), Proofs That Really Count: the Art of Combinatorial Proof, Dolciani Mathematical Expositions, 2003. • 2014 Judith Grabiner, A Historian Looks Back: The Calculus as Algebra and Selected Writings, MAA Spectrum, 2010. For more information about the Beckenback prize, see http://www.maa.org/programs/maa-awards/writing-awards/beckenbach-book-prize. MacArthur fellowships, popularly known as the "genius awards," cannot be applied for; rather, candidates are drawn from a pool of initial nominations by an anonymous group of 100 people. The John D. and Catherine T. MacArthur Foundation aims to recognize people whose achievements in the arts, humanities, sciences, social sciences, and public affairs show the promise of even greater accomplishments in the future. There are no strings attached. Recipients can spend the money, usually anywhere from $150,000 to $375,000 over a period of five years, anyway they want. The fellowships were established in 1981. Women mathematicians who have received MacArthur Fellowships are: The Schafer Prize is awarded to an undergraduate woman in recognition of excellence in mathematics and is sponsored by the Association for Women in Mathematics The Schafer Prize was established in 1990 by the executive committee of the AWM and is named for former AWM president and one of its founding members, Alice T. Schafer, who has contributed a great deal to women in mathematics throughout her career. 
The criteria for selection include, but are not limited to, the quality of the nominees' performance in mathematics courses and special programs, exhibition of real interest in mathematics, ability to do independent work, and, if applicable, performance in mathematical competitions. The winners of the Schafer Prize have been:
• 1990 Linda Green (University of Chicago) and Elizabeth Wilmer (Harvard University)
• 1991 Jeanne Nielsen (Duke University)
• 1992 Zvezdelina E. Stankova (Bryn Mawr College)
• 1993 Catherine O'Neil (University of California) and Dana Pascovici (Dartmouth College)
• 1994 Jing Rebecca Li (University of Michigan)
• 1995 Ruth Britto-Pacumio (Massachusetts Institute of Technology)
• 1996 Ioana Dumitriu (New York University's Courant Institute of Mathematical Sciences)
• 1997 no prize awarded due to a calendar change
• 1998 Sharon Ann Lozano (University of Texas at Austin) and Jessica A. Shepherd (University of Utah)
• 1999 Caroline J. Klivans (Cornell University)
• 2000 Mariana E. Campbell (University of California, San Diego)
• 2001 Jaclyn (Kohles) Anderson (University of Nebraska at Lincoln)
• 2002 Kay Kirkpatrick (Montana State University) and Melanie Wood (Duke University)
• 2003 Kate Gruher (University of Chicago)
• 2004 Kimberley Spears (University of California)
• 2005 Melody Chan (Yale University)
• 2006 Alexandra Ovetsky (Princeton University)
• 2007 Ana Caraiani (Princeton University)
• 2008 Galyna Dobrovolska (Massachusetts Institute of Technology) and Alison Miller (Harvard University)
• 2009 Maria Monks (Massachusetts Institute of Technology)
• 2010 Hannah Alpert (University of Chicago) and Charmaine Sia (Massachusetts Institute of Technology)
• 2011 Sherry Gong (Harvard University)
• 2012 Fan Wei (Massachusetts Institute of Technology)
• 2013 MurphyKate Montee (University of Notre Dame)
• 2014 Sarah Peluse (University of Chicago)
For more information about the Alice T. Schafer Prize for Excellence in Mathematics by an Undergraduate Woman, see Alice T. Schafer Prize at the Association for Women in Mathematics web site.

This award presented by the Association for Women in Mathematics is named for M. Gweneth Humphreys (1911-2006). Professor Humphreys taught mathematics to women for her entire career, at Mount St. Scholastica College, Sophie Newcomb College, and finally, for over thirty years, at Randolph-Macon Woman's College. The award recognizes her commitment to and her profound influence on undergraduate students of mathematics. Recipients have been:

For more information about this award, see Humphreys Award at the Association for Women in Mathematics web site.

The AWM-Microsoft Research Prize serves to highlight exceptional research in some area of algebra by a woman early in her career. The field will be broadly interpreted to include number theory, cryptography, combinatorics and other applications, as well as more traditional areas of algebra. The prize will be awarded every other year, beginning in 2014. Recipients have been:

The AWM-Sadosky Research Prize serves to highlight exceptional research in analysis by a woman early in her career. The field will be broadly interpreted to include all areas of analysis. The award is named for Cora Sadosky, a former president of the AWM. The prize will be awarded every other year, beginning in 2014. Recipients have been:

The Yueh-Gin Gung and Dr. Charles Y. Hu Award for Distinguished Service to Mathematics is the most prestigious award made by the Mathematical Association of America.
This award, first given in 1990, is the successor to the Award for Distinguished Service to Mathematics, awarded since 1962. Women mathematicians who have won this award or the previous Distinguished Service Award are:

The Sylvester Medal has been awarded by the Royal Society of London every three years since 1901 for the encouragement of mathematical research, without regard to nationality. It is given in honor of Professor J. J. Sylvester. Women mathematicians who have won the Sylvester Medal are:

Complete list of winners of the Sylvester Medal

The De Morgan Medal, the London Mathematical Society's premier award, is awarded every third year in memory of Professor A. De Morgan, the Society's first President. The only criterion for the award is the candidate's contributions to mathematics. The medal was first awarded in 1884. Women mathematicians who have won the De Morgan Medal are:

Complete list of winners of the De Morgan Medal

The Adams Prize, given annually by the University of Cambridge to a British mathematician under the age of 40, commemorates John Couch Adams's discovery of the planet Neptune through his calculations based on the discrepancies in the orbit of Uranus. It was endowed by members of St John's College, Cambridge, and approved by the Senate of the University in 1848. Each year applications are invited from mathematicians who have worked in a specific area of mathematics. Women mathematicians who have won the Adams Prize are:
• 2002 Susan Howson, University of Nottingham (Number Theory)

The CRM-Fields-PIMS Prize is intended to be the premier mathematics prize in Canada. The prize recognizes exceptional achievement in the mathematical sciences. The winner's research should have been conducted primarily in Canada or in affiliation with a Canadian university. The main selection criterion is outstanding contribution to the advancement of research. The prize was established by the Centre de recherches mathématiques and the Fields Institute as the CRM-Fields Prize in 1994. In 2005, the Pacific Institute for the Mathematical Sciences (PIMS) became an equal partner. Women mathematicians who have won the CRM-Fields-PIMS Prize are:

The Florence Nightingale David Award recognizes a female statistician who exemplifies the contributions of Florence Nightingale David. The award is granted to a female statistician who serves as a role model to other women by her contributions to the profession through excellence in research, leadership of multidisciplinary collaborative groups, statistics education, or service to the professional societies. Winners of the award have been:

This award recognizes an individual who exemplifies Elizabeth L. Scott's lifelong efforts to further the careers of women in academia. The award is given every other year, in even-numbered years. Winners of the award have been:

The Janet L. Norwood Award is presented annually by the School of Public Health at The University of Alabama at Birmingham to recognize outstanding achievement by a woman in the statistical sciences. Dr. Janet Norwood was the first woman commissioner of the U.S. Bureau of Labor Statistics and served as president of the American Statistical Association. The winners of the award have been:

UAB web site about the Janet L. Norwood Award.

The Salem Prize, founded in 1968 by the widow of Raphael Salem, is awarded every year to a young mathematician judged to have done outstanding work in Salem's field of interest, primarily Fourier series and related areas of analysis.
The prize is considered highly prestigious. Women who have won the Salem Prize are:
• 2006 Stephanie Petermichl, University of Texas at Austin
• 2010 Nalini Anantharaman, Université Paris-Sud, Orsay

The Association for Women in Mathematics established the Emmy Noether Lectures to honor women who have made fundamental and sustained contributions to the mathematical sciences. These one-hour expository lectures are presented at the Joint Mathematics Meetings each January. The Emmy Noether Lecturers have been:

AWM web site about the Emmy Noether Lectures.

The Emmy Noether Lecture at the International Congress of Mathematicians, held every four years, is jointly organized by European Women in Mathematics, the Committee on Women of the Canadian Mathematical Society, and the Association for Women in Mathematics.
• 1994 Olga Ladyzhenskaya, "On some evolutionary fully nonlinear equations of geometrical nature"
• 1998 Cathleen Morawetz, "Variations on conservation laws for the wave equation"
• 2002 Hu Hesheng, "Two-dimensional harmonic maps"
• 2006 Yvonne Choquet-Bruhat, "Mathematical problems in General Relativity" [Video]
• 2010 Idun Reiten, "Cluster categories"
• 2014 Georgia Benkart

The Association for Women in Mathematics and the Mathematical Association of America annually present the Etta Z. Falconer Lectures to honor women who have made distinguished contributions to the mathematical sciences or mathematics education. These one-hour expository lectures are presented at MathFest each summer. While the lectures began with MathFest 1996, the title "Etta Z. Falconer Lecture" was established in 2004 in memory of Falconer's profound vision and accomplishments in enhancing the movement of minorities and women into scientific careers. The Falconer Lecturers have been:
• 1996 Karen E. Smith, MIT, "Calculus mod p"
• 1997 Suzanne M. Lenhart, University of Tennessee, "Applications of Optimal Control to Various Population Models"
• 1998 Margaret H. Wright, Bell Labs, "The Interior-Point Revolution in Constrained Optimization"
• 1999 Chuu-Lian Terng, Northeastern University, "Geometry and Visualization of Surfaces"
• 2000 Audrey Terras, University of California at San Diego, "Finite Quantum Chaos"
• 2001 Pat Shure, University of Michigan, "The Scholarship of Learning and Teaching: A Look Back and a Look Ahead"
• 2002 Annie Selden, Tennessee Technological University, "Two Research Traditions Separated by a Common Subject: Mathematics and Mathematics Education"
• 2003 Katherine P. Layton, Beverly Hills High School, "What I Learned in Forty Years in Beverly Hills 90212"
• 2004 Bozenna Pasik-Duncan, University of Kansas, "Mathematics Education of Tomorrow"
• 2005 Fern Hunt, National Institute of Standards and Technology, "Techniques for Visualizing Frequency Patterns in DNA"
• 2006 Trachette Jackson, University of Michigan, "Cancer Modeling: From the Classical to the Contemporary"
• 2007 Katherine St. John, City University of New York, "Comparing Evolutionary Trees"
• 2008 Rebecca Goldin, George Mason University, "The Use and Abuse of Statistics in the Media"
• 2009 Kate Okikiolu, University of California, San Diego, "The sum of squares of wavelengths of a closed surface"
• 2010 Ami Radunskaya, Pomona College, "Mathematical Challenges in the Treatment of Cancer" [Slides from talk]
• 2011 Dawn Lott, Delaware State University, "Mathematical Interventions for Aneurysm Treatment"
• 2012 Karen King, New York University, "Because I Love Mathematics: The Role of Disciplinary Grounding in Mathematics Education"
• 2013 Patricia Kenschaft, Montclair State University, "Improving Equity and Education: Why and How"
• 2014 Marie Vitulli, University of Oregon, "From Algebraic to Weak Subintegral Extensions in Algebra and Geometry"

AWM web site about the Falconer Lectures.

The Association for Women in Mathematics, in cooperation with the Society for Industrial and Applied Mathematics (SIAM), sponsors the AWM-SIAM Sonia Kovalevsky Lecture Series. The lecture is given annually at the SIAM Annual Meeting by a woman who has made distinguished contributions in applied or computational mathematics. The lectureship may be awarded to any woman in the scientific or engineering community. The Kovalevsky Lecturers have been:

See the AWM web site or the SIAM web site for more information about the Sonia Kovalevsky Lecture.

The Canadian Mathematical Society inaugurated the Krieger-Nelson Prize to recognize outstanding research by a female mathematician. The first prize was awarded in 1995. The winners have been:
• 1995 Nancy Reid, University of Toronto
• 1996 Olga Kharlampovich, McGill University
• 1997 Cathleen Morawetz, New York University
• 1998 Catherine Sulem, University of Toronto
• 1999 Nicole Tomczak-Jaegermann, University of Alberta
• 2000 C. Kanta Gupta, University of Manitoba
• 2001 Lisa Jeffrey, University of Toronto
• 2002 Priscilla Greenwood, University of British Columbia and Arizona State University
• 2003 Leah Edelstein-Keshet, University of British Columbia
• 2004 Not awarded
• 2005 Barbara Keyfitz, University of Houston
• 2006 Penny Haxell, University of Waterloo
• 2007 Pauline van den Driessche, University of Victoria
• 2008 Izabella Laba, University of British Columbia
• 2009 Yael Karshon, University of Toronto
• 2010 Lia Bronsard, McMaster University
• 2011 Rachel Kuske, University of British Columbia
• 2012 Ailana Fraser, University of British Columbia
• 2013 Chantal David, Concordia University

As part of its celebrations of the World Mathematical Year in 2000, the Canadian Mathematical Society sponsored the creation of a poster on women in mathematics. The poster features the six outstanding women mathematicians who were awarded the Krieger-Nelson Prize from 1995 to 2000.

The Mary Cartwright Lecture is an annual event organized by the London Mathematical Society and forms part of the annual program of Society Meetings. Lectures are given by both a female and a male mathematician each year.
Female lecturers have been:
• 2000 Caroline Series, "Exploring the space of quasifuchsian groups"
• 2001 Cathleen Synge Morawetz, "Mathematics and flying aeroplanes"
• 2002 Frances Kirwan FRS, "Moduli spaces of Riemann surfaces and holomorphic bundles"
• 2003 Jennifer Chayes, "Mathematical models of the internet and World Wide Web"
• 2004 Mary Rees FRS, "The topographer's view of parameter spaces"
• 2005 Elizabeth Thompson, "Relatedness, genome sharing, and the detection of genes"
• 2006 Ulrike Tillmann, "The topology of strings: Mumford's conjecture and beyond"
• 2007 Angela Stevens, "Interacting Cell Systems: An Example for Mathematical Modeling in the Life-Sciences"
• 2008 Valerie Beral FRS, "Mathematics of medicine: breast cancer treatment and prevention"
• 2009 Dusa McDuff FRS, "Symplectic embeddings of 4-dimensional ellipsoids"
• 2010 Ruth Gregory, "Fun with extra dimensions"
• 2011 Alison Etheridge, "Evolution in a spatial continuum"
• 2012 Agata Smoktunowicz, "Old and new questions in noncommutative algebra"
• 2013 Margaret Wright, "A Mathematical Journey in Non-Derivative Optimization"
• 2014 Reidun Twarock, "Viruses and geometry: hidden symmetries in virology"
More information about the Mary Cartwright Lecture can be found at the London Mathematical Society web site.

The American Mathematical Society Colloquium Lectures have been presented since 1896. Women mathematicians who have presented lectures are:

Complete list of the AMS Colloquium Lecturers.

To commemorate the name of Professor Gibbs, the American Mathematical Society established an honorary lectureship in 1923 to be known as the Josiah Willard Gibbs Lectureship. The lectures are of a semi-popular nature and are given by invitation. They are usually devoted to mathematics or its applications. It is hoped that these lectures will enable the public and the academic community to become aware of the contribution that mathematics is making to present-day thinking and to modern civilization. Women mathematicians who have presented the Josiah Willard Gibbs Lectures have been:
• 1981 Cathleen S. Morawetz, "The mathematical approach to the sound barrier"
• 1999 Nancy J. Kopell, "We got rhythm: Dynamical systems of the nervous system" (published in the AMS Notices, January 2000)
• 2005 Ingrid Daubechies, "The Interplay Between Analysis and Algorithms"

The Earle Raymond Hedrick Lectures were established by the Mathematical Association of America in 1952 to present to the Association a lecturer of known skill as an expositor of mathematics "who will present a series of at most three lectures accessible to a large fraction of those who teach college mathematics." Women mathematicians who have presented the Earle Raymond Hedrick Lectures have been:

The J. Sutherland Frame Lectures were established by Pi Mu Epsilon to honor James Sutherland Frame, who was instrumental in founding the Pi Mu Epsilon Journal and in creating the Pi Mu Epsilon Summer Student Paper Conferences held in conjunction with the American Mathematical Society and the Mathematical Association of America. The lectures are presented at the summer meeting of the Mathematical Association of America. Women mathematicians who have presented the J. Sutherland Frame Lectures have been:
• 1988 Doris Schattschneider, "You Too Can Tile the Conway Way"
• 1989 Jane Cronin Scanlon, "Entrainment of Frequency"
• 1995 Marjorie Senechal, "Tilings as Differential Games"
• 2004 Joan P. Hutchinson, "When Five Colors Suffice"
Complete List of J. Sutherland Frame Lecturers.
The Association for Women in Mathematics was established in 1971 to encourage women to enter careers in mathematics and related areas, and to promote equal opportunity and equal treatment of women in the mathematical community. The Presidents of the AWM have been:
• 1971-1973 Mary Gray
• 1973-1975 Alice T. Schafer
• 1975-1979 Lenore Blum
• 1979-1981 Judy Roitman
• 1981-1983 Bhama Srinivasan
• 1983-1985 Linda Rothschild
• 1985-1987 Linda Keen
• 1987-1989 Rhonda Hughes
• 1989-1991 Jill Mesirov
• 1991-1993 Carol Wood
• 1993-1995 Cora Sadosky
• 1995-1997 Chuu-Lian Terng
• 1997-1999 Sylvia Wiegand
• 1999-2001 Jean Taylor
• 2001-2003 Suzanne Lenhart
• 2003-2005 Carolyn Gordon
• 2005-2007 Barbara Keyfitz
• 2007-2009 Cathy Kessel
• 2009-2011 Georgia Benkart
• 2011-2013 Jill Pipher
• 2013-2015 Ruth Charney
• 2015-2017 Kristin Lauter

In December 1915, ten women and 96 men met at The Ohio State University to establish the organization that became the Mathematical Association of America. Women who have served as President of the MAA have been:

The American Mathematical Society was founded in 1888. Since then, women who have served as President of the AMS have been:

The Society for Industrial and Applied Mathematics (SIAM) was incorporated in 1952 as a nonprofit organization to convey useful mathematical knowledge to other professionals who could implement mathematical theory for practical, industrial, or scientific use. Women who have served as President of SIAM have been:

The Society for Mathematical Biology, founded in 1973, is an international society that exists to promote and foster interactions between the mathematical and biological sciences communities. Women who have served as President of the Society have been:
{"url":"http://www.agnesscott.edu/lriddle/women/prizes.htm","timestamp":"2014-04-18T18:15:51Z","content_type":null,"content_length":"65381","record_id":"<urn:uuid:54e79c50-bbf2-4e06-acd2-48b1ecaa360c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Aaaag i suck at word problems! ITS MULTIPLE CHOICE!!! I WOULD REALLY APPRECIATE ANY HELP!! A doctors office schedules 15-min appointments and half hour appointments for weekdays. The doctor limits these appointments to at most 30 hours per week. Write an inequality to represent the number of 15 min appt. x and the number of half hour appt. y the doctor may have in a week a. 15x + 30y <= 30 b. 15x + 30y <= 1800 c. 15x + 30y > 1800 d. 15x + 1/2y <= 30 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4febc0bfe4b0bbec5cfb822c","timestamp":"2014-04-21T07:50:05Z","content_type":null,"content_length":"73816","record_id":"<urn:uuid:1f15d13a-ba89-4682-a59b-2e10a509d393>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00179-ip-10-147-4-33.ec2.internal.warc.gz"}