content
stringlengths
86
994k
meta
stringlengths
288
619
Machine Learning :: Text Feature Extraction (tf-idf) - Part II Originally Authored by Christain S. Perone Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I. This post is a continuation of the first part where we started to learn the theory and practice about text feature extraction and vector space model representation. I really recommend you to read the first part of the post series in order to follow this second post. Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first. In the first post, we learned how to use the term-frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is almost present in the entire corpus of your documents ? The tf-idf weight comes to solve this problem. What tf-idf gives is how important is a word to a document in a collection, and that’s why tf-idf incorporates local and global parameters, because it takes in consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem, is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn’t 10 times more important than it, that’s why tf-idf uses the logarithmic scale to do that. But let’s go back to our definition of the $\mathrm{tf}(t,d)$ which is actually the term count of the term $t$ in the document $d$. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document. To overcome this problem, the term frequency $\mathrm{tf}(t,d)$ of a document on a vector space is usually also normalized. Let’s see how we normalize this vector. Vector normalization Suppose we are going to normalize the term-frequency vector $\vec{v_{d_4}}$ that we have calculated in the first part of this tutorial. The document $d4$ from the first part of this tutorial had this textual representation: d4: We can see the shining sun, the bright sun. And the vector space representation using the non-normalized term-frequency of that document was: $\vec{v_{d_4}} = (0,2,1,0)$ To normalize the vector, is the same as calculating the Unit Vector of the vector, and they are denoted using the “hat” notation: $\hat{v}$. The definition of the unit vector $\hat{v}$ of a vector $\ vec{v}$ is: $\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}$ Where the $\hat{v}$ is the unit vector, or the normalized vector, the $\vec{v}$ is the vector going to be normalized and the $\|\vec{v}\|_p$ is the norm (magnitude, length) of the vector $\vec{v}$ in the $L^p$ space (don’t worry, I’m going to explain it all). The unit vector is actually nothing more than a normalized version of the vector, is a vector which the length is 1. Source: http://processing.org/learning/pvector/ But the important question here is how the length of the vector is calculated and to understand this, you must understand the motivation of the $L^p$ spaces, also called Lebesgue spaces. Lebesgue spaces Source: http://processing.org/learning/pvector/ Usually, the length of a vector $\vec{u} = (u_1, u_2, u_3, \ldots, u_n)$ is calculated using the Euclidean norm – a norm is a function that assigns a strictly positive length or size to all vectors in a vector space -, which is defined by: Source: http://processing.org/learning/pvector/ $\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}$ But this isn’t the only way to define length, and that’s why you see (sometimes) a number $p$ together with the norm notation, like in $\|\vec{u}\|_p$. That’s because it could be generalized as: $\displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}$ and simplified as: $\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}$ So when you read about a L2-norm, you’re reading about the Euclidean norm, a norm with $p=2$, the most common norm used to measure the length of a vector, typically called “magnitude”; actually, when you have an unqualified length measure (without the $p$ number), you have the L2-norm (Euclidean norm). When you read about a L1-norm, you’re reading about the norm with $p=1$, defined as: $\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)$ Which is nothing more than a simple sum of the components of the vector, also known as Taxicab distance, also called Manhattan distance. Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length $6 \times \sqrt {2} \approx 8.48$, and is the unique shortest path. Source: Wikipedia :: Taxicab Geometry Note that you can also use any norm to normalize the vector, but we’re going to use the most common norm, the L2-Norm, which is also the default in the 0.9 release of the scikits.learn. You can also find papers comparing the performance of the two approaches among other methods to normalize the document vector, actually you can use any other method, but you have to be concise, once you’ve used a norm, you have to use it for the whole process directly involving the norm (a unit vector that used a L1-norm isn’t going to have the length 1 if you’re going to take its L2-norm later). Back to vector normalization Now that you know what the vector normalization process is, we can try a concrete example, the process of using the L2-norm (we’ll use the right terms now) to normalize our vector $\vec{v_{d_4}} = (0,2,1,0)$ in order to get its unit vector $\hat{v_{d_4}}$. To do that, we’ll simple plug it into the definition of the unit vector to evaluate it: $\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\ \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{||\vec{v_{d_4}}||_2} \\ \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\ \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)$ And that is it ! Our normalized vector $\hat{v_{d_4}}$ has now a L2-norm $\|\hat{v_{d_4}}\|_2 = 1.0$. Note that here we have normalized our term frequency document vector, but later we’re going to do that after the calculation of the tf-idf. The term frequency – inverse document frequency (tf-idf) weight Now you have understood how the vector normalization works in theory and practice, let’s continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of tutorial): Train Document Set: d1: The sky is blue. d2: The sun is bright. Test Document Set: d3: The sun in the sky is bright. d4: We can see the shining sun, the bright sun. Your document space can be defined then as $D = \{ d_1, d_2, \ldots, d_n \}$ where $n$ is the number of documents in your corpus, and in our case as $D_{train} = \{d_1, d_2\}$ and $D_{test} = \{d_3, d_4\}$. The cardinality of our document space is defined by $\left|{D_{train}}\right| = 2$ and $\left|{D_{test}}\right| = 2$, since we have only 2 two documents for training and testing, but they obviously don’t need to have the same cardinality. Let’s see now, how idf (inverse document frequency) is then defined: $t \in d\}\right|}}$ where $t \in d\}\right|$ is the number of documents where the term $t$ appears, when the term-frequency function satisfies $\mathrm{tf}(t,d) eq 0$, we’re only adding 1 into the formula to avoid The formula for the tf-idf is then: $\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)$ and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter). Now let’s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial: $M_{train} = \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix}$ Since we have 4 features, we have to calculate $\mathrm{idf}(t_1)$, $\mathrm{idf}(t_2)$, $\mathrm{idf}(t_3)$, $\mathrm{idf}(t_4)$: $t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718$ $t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$ $t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$ $t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0$ These idf weights can be represented by a vector as: $\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)$ Now that we have our matrix with the term frequency ($M_{train}$) and the vector representing the idf for each feature of our matrix ($\vec{idf_{train}}$), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix $M_{train}$ with the respective $\vec{idf_{train}}$ vector dimension. To do that, we can create a square diagonal matrix called $M_{idf}$ with both the vertical and horizontal dimensions equal to the vector $\vec{idf_{train}}$ dimension: $M_{idf} = \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}$ and then multiply it to the term frequency matrix, so the final result can be defined then as: $M_{tf\mbox{-}idf} = M_{train} \times M_{idf}$ Please note that the matrix multiplication isn’t commutative, the result of $A \times B$ will be different than the result of the $B \times A$, and this is why the $M_{idf}$ is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value to its corresponding feature: $\begin{bmatrix} \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\ \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf} (t_4, d_2) \end{bmatrix} \times \begin{bmatrix} \mathrm{idf}(t_1) & 0 & 0 & 0\\ 0 & \mathrm{idf}(t_2) & 0 & 0\\ 0 & 0 & \mathrm{idf}(t_3) & 0\\ 0 & 0 & 0 & \mathrm{idf}(t_4) \end{bmatrix} \\ = \begin {bmatrix} \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf} (t_4)\\ \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf} (t_4) \end{bmatrix}$ Let’s see now a concrete example of this multiplication: $M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\ \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} \\ = \begin{bmatrix} 0 & -0.40546511 & -0.40546511 & 0\\ 0 & -0.81093022 & -0.40546511 & 0 \end{bmatrix}$ And finally, we can apply our L2 normalization process to the $M_{tf\mbox{-}idf}$ matrix. Please note that this normalization is “row-wise” because we’re going to handle each row of the matrix as a separated vector to be normalized, and not the matrix as a whole: $M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2}$$= \begin{bmatrix} 0 & -0.70710678 & -0.70710678 & 0\\ 0 & -0.89442719 & -0.4472136 & 0 \end{bmatrix}$ And that is our pretty normalized tf-idf weight of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you’ll see that they all have a L2-norm of 1. Python practice Environment Used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9. Now the section you were waiting for ! In this section I’ll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module. The first step is to create our training and testing document set and computing the term frequency matrix: from sklearn.feature_extraction.text import CountVectorizer train_set = ("The sky is blue.", "The sun is bright.") test_set = ("The sun in the sky is bright.", "We can see the shining sun, the bright sun.") count_vectorizer = CountVectorizer() print "Vocabulary:", count_vectorizer.vocabulary # Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3} freq_term_matrix = count_vectorizer.transform(test_set) print freq_term_matrix.todense() #[[0 1 1 1] #[0 2 1 0]] Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible to calculate the tf-idf weights for our term frequency from sklearn.feature_extraction.text import TfidfTransformer tfidf = TfidfTransformer(norm="l2") print "IDF:", tfidf.idf_ # IDF: [ 0.69314718 -0.40546511 -0.40546511 0. ] Note that I’ve specified the norm as L2, this is optional (actually the default is L2-norm), but I’ve added the parameter to make it explicit to you that it it’s going to use the L2-norm. Also note that you can see the calculated idf weight by accessing the internal attribute called idf_. Now that fit() method has calculated the idf for the matrix, let’s transform the freq_term_matrix to the tf-idf weight matrix: tf_idf_matrix = tfidf.transform(freq_term_matrix) print tf_idf_matrix.todense() # [[ 0. -0.70710678 -0.70710678 0. ] # [ 0. -0.89442719 -0.4472136 0. ]] And that is it, the tf_idf_matrix is actually our previous $M_{tf\mbox{-}idf}$ matrix. You can accomplish the same effect by using the Vectorizer class of the Scikit.learn which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer to you. See this example to know how to use it for the text classification process. I really hope you liked the post, I tried to make it simple as possible even for people without the required mathematical background of linear algebra, etc. In the next Machine Learning post I’m expecting to show how you can use the tf-idf to calculate the cosine similarity. If you liked it, feel free to comment and make suggestions, corrections, etc. Understanding Inverse Document Frequency: on theoretical arguments for IDF The classic Vector Space Model Sklearn text feature extraction code Added the info about the environment used for Python examples Source: http://pyevolve.sourceforge.net/wordpress/?p=1747
{"url":"http://css.dzone.com/articles/machine-learning-text-feature-0","timestamp":"2014-04-18T00:45:15Z","content_type":null,"content_length":"99933","record_id":"<urn:uuid:1fd6f987-d6b0-41b1-89a7-6c0ee3bc3f1a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Detecting and predicting changes Results 1 - 10 of 27 - In , 2003 "... Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various s ..." Cited by 33 (14 self) Add to MetaCart Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge—identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge—the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships—and show how they provide the constraints that people need to induce useful causal models from sparse data. "... What mechanisms support the ability of human infants, adults, and other primates to identify words from fluent speech using distributional regularities? In order to better characterize this ability, we collected data from adults in an artificial language segmentation task similar to Saffran, Newport ..." Cited by 20 (5 self) Add to MetaCart What mechanisms support the ability of human infants, adults, and other primates to identify words from fluent speech using distributional regularities? In order to better characterize this ability, we collected data from adults in an artificial language segmentation task similar to Saffran, Newport, and Aslin (1996) in which the length of sentences was systematically varied between groups of participants. We then compared the fit of a variety of computational models— including simple statistical models of transitional probability and mutual information, a clustering model based on mutual information by Swingley (2005), PARSER (Perruchet & Vintner, 1998), and a Bayesian model. We found that while all models were able to successfully complete the task, fit to the human data varied considerably, with the Bayesian model achieving the highest correlation with our results. "... Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible fo ..." Cited by 20 (4 self) Add to MetaCart Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of “rational process models” that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson’s (1990, 1991) Rational Model of Categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose two alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure - Cognitive Science Society , 2009 "... In many situations human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and that predicted by Bayesian inference: p ..." Cited by 17 (5 self) Add to MetaCart In many situations human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and that predicted by Bayesian inference: people often appear to make judgments based on a few samples from a probability distribution, rather than the full distribution. Although sample-based approximations are a common implementation of Bayesian inference, the very limited number of samples used by humans seems to be insufficient to approximate the required probability distributions. Here we consider this discrepancy in the broader framework of statistical decision theory, and ask: if people were making decisions based on samples, but samples were costly, how many samples should people use? We find that under reasonable assumptions about how long it takes to produce a sample, locally suboptimal decisions based on few samples are globally optimal. These results reconcile a large body of work showing sampling, or probability-matching, behavior with the hypothesis that human cognition is well described as Bayesian inference, and suggest promising future directions for studies of resource-constrained cognition. - Behavioral and Brain Sciences , 2011 "... To be published in Behavioral and Brain Sciences (in press) ..." - Advances in Neural Information Processing Systems 22 , 2009 "... While many perceptual and cognitive phenomena are well described in terms of Bayesian inference, the necessary computations are intractable at the scale of realworld tasks, and it remains unclear how the human mind approximates Bayesian computations algorithmically. We explore the proposal that for ..." Cited by 6 (3 self) Add to MetaCart While many perceptual and cognitive phenomena are well described in terms of Bayesian inference, the necessary computations are intractable at the scale of realworld tasks, and it remains unclear how the human mind approximates Bayesian computations algorithmically. We explore the proposal that for some tasks, humans use a form of Markov Chain Monte Carlo to approximate the posterior distribution over hidden variables. As a case study, we show how several phenomena of perceptual multistability can be explained as MCMC inference in simple graphical models for low-level vision. 1 "... Bandit problems provide an interesting and widely-used setting for the study of sequential decision-making. In their most basic form, bandit problems require people to choose repeatedly between a small number of alternatives, each of which has an unknown rate of providing reward. We investigate rest ..." Cited by 4 (0 self) Add to MetaCart Bandit problems provide an interesting and widely-used setting for the study of sequential decision-making. In their most basic form, bandit problems require people to choose repeatedly between a small number of alternatives, each of which has an unknown rate of providing reward. We investigate restless bandit problems, where the distributions of reward rates for the alternatives change over time. This dynamic environment encourages the decision-maker to cycle between states of exploration and exploitation. In one environment we consider, the changes occur at discrete, but hidden, time points. In a second environment, changes occur gradually across time. Decision data were collected from people in each environment. Individuals varied substantially in overall performance and the degree to which they switched between alternatives. We modeled human performance in the restless bandit tasks with two particle filter models, one that can approximate the optimal solution to a discrete restless bandit problem, and another simpler particle filter that is more psychologically plausible. It was found that the simple particle filter was able to account for most of the individual differences. "... A. Redish et al. (2007) proposed a reinforcement learning model of context-dependent learning and extinction in conditioning experiments, using the idea of “state classification ” to categorize new observations into states. In the current article, the authors propose an interpretation of this idea i ..." Cited by 4 (3 self) Add to MetaCart A. Redish et al. (2007) proposed a reinforcement learning model of context-dependent learning and extinction in conditioning experiments, using the idea of “state classification ” to categorize new observations into states. In the current article, the authors propose an interpretation of this idea in terms of normative statistical inference. They focus on renewal and latent inhibition, 2 conditioning paradigms in which contextual manipulations have been studied extensively, and show that online Bayesian inference within a model that assumes an unbounded number of latent causes can characterize a diverse set of behavioral results from such manipulations, some of which pose problems for the model of Redish et al. Moreover, in both paradigms, context dependence is absent in younger animals, or if hippocampal lesions are made prior to training. The authors suggest an explanation in terms of a restricted capacity to infer new causes. "... The tendency to test outcomes that are predicted by our current theory (the confirmation bias) is one of the best-known biases of human decision making. We prove that the confirmation bias is an optimal strategy for testing hypotheses when those hypotheses are deterministic, each making a single pre ..." Cited by 3 (1 self) Add to MetaCart The tendency to test outcomes that are predicted by our current theory (the confirmation bias) is one of the best-known biases of human decision making. We prove that the confirmation bias is an optimal strategy for testing hypotheses when those hypotheses are deterministic, each making a single prediction about the next event in a sequence. Our proof applies for two normative standards commonly used for evaluating hypothesis testing: maximizing expected information gain and maximizing the probability of falsifying the current hypothesis. This analysis rests on two assumptions: (a) that people predict the next event in a sequence in a way that is consistent with Bayesian inference; and (b) when testing hypotheses, people test the hypothesis to which they assign highest posterior probability. We present four behavioral experiments that support these assumptions, showing that a simple Bayesian model can capture people’s predictions about numerical sequences (Experiments 1 and 2), and that we can alter the hypotheses that people choose to test by manipulating the prior probability of those hypotheses (Experiments 3 and 4). - Management Science , 2011 "... This research analyzes how individuals make forecasts based on time series data, and tests an intervention designed to improve forecasting performance. Using data from a controlled laboratory experiment, we find that forecasting behavior systematically deviates from normative predictions: Forecaster ..." Cited by 2 (0 self) Add to MetaCart This research analyzes how individuals make forecasts based on time series data, and tests an intervention designed to improve forecasting performance. Using data from a controlled laboratory experiment, we find that forecasting behavior systematically deviates from normative predictions: Forecasters over-react to errors in relatively stable environments, but under-react to errors in relatively unstable environments. Surprisingly, the performance loss due to systematic judgment biases is larger in stable than in unstable environments. In a second study, we test an intervention designed to mitigate these biased reaction patterns. In order to reduce the salience of recent demand signals, and emphasize the environment generating theses signals, we require forecasters to prepare a forecast in other time-series before returning to their original time-series. This intervention improves forecasting performance.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=9636514","timestamp":"2014-04-20T20:18:19Z","content_type":null,"content_length":"40094","record_id":"<urn:uuid:e53ebd7f-94f5-44f7-89cc-d3c1dcc5937c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5767648 - Base force/torque sensor apparatus for the precise control of manipulators with joint friction and a method of use thereof A preferred embodiment of the apparatus of the invention is shown schematically in block diagram form in FIG. 1. A manipulator 10 has a base 12, which is connected to a reference body 14 (such as the ground or a shop floor) through a base mounted force/torque sensor 16, also referred to herein and in the literature as a wrench sensor. (A "wrench" is a compact way of describing the force and torque condition of an interaction, defined as a vector consisting of the force vector and the moment vector that describes a mechanical interaction). The manipulator shown is a six degree of freedom manipulator having six joints. However the invention is useful for robots having any degree of freedom and any number of joints. As used in this specification and the claims, the term "base" refers to so much of the manipulator as is kinematically between the first joint and the base force/torque sensor. This portion of the manipulator does not move relative to the reference body. All of the joints shown in the manipulator 10 are powered with rotary actuators. Each joint is equipped with an angular encoder, of which a representative one 18 is shown. These encoders combined with an initialization (or "calibration") routine generate signals that correspond to the angular orientation of any link j relative to the link j-1, which is closer to the reference body 14 than is the link i. Joint Torque Estimation The output signals from the wrench sensor 16 and the angular encoders 18 are passed to a torque estimation unit 20, for instance by cables or wireless transmission, such as RF, IR or any appropriate channel. The torque estimation unit, as described more fully below, takes the wrench signal and the angular orientation signals, and processes these signals along with data signals that represent other properties of the manipulator (such as the mass and moments of inertia of its links, its geometry, as described in more detail below) and generates as an output a signal that corresponds to an estimate of the torque that is actually being applied by the actuator at any selected joint to the link that is further from the reference body 14 than the actuator. This torque signal is "compensated" for gravity. By "compensated" it is meant that the signal represents only so much of the torque that is actually being applied by the actuator to generate the motions of the link in question, but not to resist the urging of gravity. In other words, it estimates the torque that would be required to move the body as it moves, in an environment where gravity is not acting. It is important to note that the estimate is the gravity compensated torque that is actually being applied to the link, as opposed, for instance, to a torque value that is computed based on the current supplied to a rotary motor. An estimate computed on the basis of the current is subject to error due to frictional and other losses that reduce the torque that is actually applied to the The signals from the joint position encoders (or sensors) are fed back to a position error generator 22, which takes as its other input a desired angular position signal that is provided by a desired position signal generator 24. This signal provides the desired angular position for each joint of the manipulator that is desired to be controlled. The corresponding components of these two signals are compared to generate a position error signal that is received by a position controller 26, which generates a desired torque signal based on the position error. The desired torque signal is compared with the estimated torque at a torque error generator 28, which generates a torque error signal to a torque controller 30. The torque controller 30 issues current commands to the motors of each of the joints to move the motor components with the desired torque to achieve the desired position. The goal of the torque controller 30, described more fully below, is to make the torque error equal to zero. Thus, the torque controller 30 will automatically, over time, compute the motor current to make the estimated torque equal to the desired torque. In particular, it will overcome the friction to achieve that goal. Before describing the components of the torque estimator 20, it is helpful to review the theoretical issues that ground its operation and design. Theoretical Issues In this section, the basic dynamic equations used by the torque estimator of the invention in the torque estimation process are developed. With reference to FIG. 3, consider a manipulator 10 mounted on a reference body 14 through a base wrench (force/torque) sensor 16. The wrench W.sub.b exerted by the manipulator 10 on its supporting reference body 14 can be expressed as the sum of two wrenches: W.sub.b =W.sub.g +W.sub.d, (1) where W.sub.g is the wrench due only to the action of gravity. W.sub.d is the dynamic wrench, due only to the motions of the manipulator. The dynamic wrench would be zero if none of the links were moving. It would be the only wrench applied by the manipulator on the reference body if the manipulator were in a gravity free environment. The dynamic wrench is also referred to herein as the gravity compensated wrench. The gravity wrench would be zero if the manipulator were in a gravity free environment. It would be the only wrench applied by the manipulator on the reference body if no link of the reference body were moving. It should be noted that the base sensor measures wrenches that correspond only to forces and torques effectively transmitted to the manipulator's links. Hence transmission friction does not contribute to the measured base wrench, which would be the same for friction free actuators as it would be for high friction actuators (assuming the motion and position of the robot is the same). The first theoretical step in the estimation process is to generate the gravity component W.sub.g and compensate for it, in order to estimate the dynamic component W.sub.d. This step is also implemented in a real embodiment of the invention. The gravity wrench is compensated for using the following model 8!: ##EQU4## where F.sub.g and M.sub.g.sup.O.sbsp.s are the vectors that represent the gravity force and moment expressed at the measurement point O.sub.s of the sensor 16, which is typically its center, m.sub.j and G.sub.j are the mass and the position of the center of mass of link j, respectively, and g is the acceleration due to gravity. The gravity wrench is set out in brackets in Eq. (2). The sensor measurement point of the sensor is not identified in FIG. 3. O.sub.s G.sub.j represents a vector from the sensor measurement point to the center of mass of link j. The value of the gravity compensated dynamic wrench (W.sub.d) can be computed because the total wrench W.sub.b is measured, the masses are known and positions of the links relative to the sensor measurement point are readily determined by analysis of the angular position signals from the angle encoders 18. The masses can either be know a priori, or determined using the wrench sensor, such as is taught by 8!, which is incorporated herein fully by reference. In the following analysis, the gravity compensated wrench (W.sub.d) is propagated through the successive links j of the manipulator 10. This results in estimated joint torques that do not include the joint gravity component. The Newton Euler equations of the first i links are, after gravity compensation: ##EQU5## where W.sub.i→i+1 is the wrench exerted by the link i on the link i+1 as shown schematically in FIG. 4 and W.sub.dyn.sbsb.i is the resultant dynamic wrench for the link i. W.sub.dyn.sbsb.i can also be expressed at any point A (whether within the body of the manipulator or not) in terms of the acceleration V.sub.G.sbsb.i of G.sub.i, the angular acceleration ω.sub.i and the angular velocity ω.sub.i, all with respect to a fixed frame. ##EQU6## where I.sub.i is the inertia tensor of link i at G.sub.i. Summing the equations (3) yields: ##EQU7## Given this wrench, the torque in joint i+1 (between links i and i+1) is obtained by projecting the moment vector at O.sub.i along z.sub.i (indicated by the operator -z.sub.i.sup.t ! in the following expression) (FIG. 2): ##EQU8## where M.sub.d.sup.O.sbsp.i is the moment of the dynamic wrench developed in Eq. 2, expressed at O.sub.i, which is the origin of the axis around which the link i+1 rotates relative to the link i. (This is equal to M.sub.d.sup.O.sbsp.s +O.sub.s O.sub.i M.sub.d.sup.O.sub.s is the dynamic portion of the wrench at the sensor origin and F.sub.d is the dynamic portion of the force, both of which are known because Eq. (2) can be evaluated as discussed above. Eqs. 2 and 6 can be used in the torque estimator 20, which can be implemented as a programmed general purpose computer 48, as is shown in more detail in FIG. 15. Signals from the wrench sensor 16 are amplified through a voltage amplifier 17 and passed through analog to digital convertor 44. The digitized signals are passed to a central processing unit 46 of a programmed general purpose computer 48 having a memory 52 and input output devices 50, such as a keyboard, mouse, monitor, etc. Joint position encoders, 18 duly initialized using any regular additional joint equipment (zero switches, potentiometer or whatever is convenient) are transmitted to the CPU 46 through an encoder-digital unit 42. The memory is provided with data that represent the masses m of the n links, their moments of inertia I, the locations of their centers of mass G (with respect to the link frame), the locations of the axis O that are aligned with the points of connection between links (with respect to the link frame), and any other parameters that describe the geometrical and mass properties of the manipulator links. With signals representing these parameters, the processor can be programmed to evaluate Eq. 2 and then Eq. 6 for each link, thereby generating a signal that is passed to the actuator control 54 (which is a general instance of the torque control 30 shown in FIG. 1) which is an estimate of the actual torque, compensated for gravity, that is being delivered by each actuator to each link to which it is attached. Typically, the torque estimator is constituted as a programmed general purpose computer, as shown in FIG. 15. This programmed computer can also be programmed to perform the functions of the position control 26, torque control 30 and position and torque comparators 22 and 28 respectively, shown in FIG. 1. Alternatively, as is known to the person skilled in the art, the estimator and the other modules can be implemented as dedicated, special purpose processors or circuitry, with dedicated memories, either ROM or writable. Decisions as to which configuration will be used depend on the environment in which the manipulator is to be used, cost, space, flexibility, etc. A schematic representation of the torque estimator 20, implemented as special purpose processors, is shown in FIG. 16. Signals from the joint position encoders 18 are passed to a gravity wrench generator 3, that implements the part of Eq. 2 in the brackets to generate the gravity component W.sub.g of the measured base wrench signal W.sub.b. The gravity wrench generator 3 also makes use of other inputs, not shown, such as data specifying the masses of the links. Alternatively, a look-up table, that specifies the gravity wrench for a given combination of joint positions, can be used. Once determined, the gravity wrench W.sub.g is subtracted from the measured wrench signal W.sub.b in the dynamic wrench signal generator 5, to result in the gravity compensated, dynamic wrench W.sub.d. The gravity wrench generator 3 and the dynamic wrench signal generator 5 can together be considered a gravity wrench compensator 9. The dynamic wrench W.sub.d is passed to a joint analyzer module 7, which implements Eq. 6, and generates as its output the estimated torque for the joint in question. The joint analyzer 7 also uses the joint position signals generated by the joint position encoders 18, as well as data that represents the masses, moments of inertia and the locations of the centers of mass of the links, as above. Application to a Representative Manipulator Puma 550 The general equations presented above can be significantly simplified when applied to a particular manipulator, for instance a Puma brand manipulator, model 550, sold by Staubli Unimation, Inc., of Duncan, S.C. FIG. 3 shows schematically, a five joint Puma 550 manipulator 10 mounted on a base wrench (force/torque) sensor 16. A suitable six axis base force torque sensor is sold by AMTI, Advanced Mechanical Technologies, Inc., of Watertown, Mass., under trade designation MC12-1000. It has been shown experimentally, for a different purpose, that the total gravity wrench can be efficiently and accurately estimated by developing equation (2) as a function of joint angles 9!. The work conducted in this reference had nothing to do with analyzing or controlling the motion of the manipulator. Rather, its goal was to move the reference body to which the manipulator was attached, as if there were no gravity. Thus, it was necessary to know the total wrench due to gravity acting on the reference body as a result of the total manipulator. However, there was no need and no attempt to consider the implications of the gravity wrench on the state of the actuators in the manipulator.) The actual implementation of Eq. 2 is simplified, because it is applied to a given design (PUMA 550). For the specific implementation of the PUMA 550, calculation of the gravity wrench W.sub.g is simplified as follows. Note that this estimation of W.sub.g does not include the constant part, which does not depend on the configuration of the manipulator, and only includes the position dependent part. The constant part is compensated at the initialization, when the base sensor is zeroed (the constant part is assumed to be a sensor offset). Eq.(2) reduces to: ##EQU9## where: ∝.sub.j depends on m.sub.j, G.sub.j etc.; and C.sub.k, S.sub.k =cos(q.sub.k), sin(q.sub.k). To estimate the torques τ.sub.1, τ.sub.2, and τ.sub.3 generated by the first three joints of the Puma, applied to the links L.sub.1, L.sub.2 and L.sub.3, respectively, the following assumptions are W.sub.b is measured directly in the reference frame that has its origin at O.sub.1, by applying an appropriate calibration matrix to the sensor measured voltage. (See FIG. 3). The center of mass G.sub.1 of the link L.sub.1 is on the z.sub.1 axis, which is a reasonable assumption because it is true; and Off-diagonal terms in the inertia tensors I.sub.1 and I.sub.2, expressed in frames G.sub.1, (x1, y1, z1) and G.sub.2, (x2, y2, z2) respectively, can be neglected, which is also reasonable because they are known to be quite small. For the three joints, these assumptions yield, respectively: τ.sub.1 =-z.sub.p.sup.t M.sub.d.sup.O.sbsp.1 =-(0,0,1.sup.t M.sub.d.sup.O.sub.i (7) τ.sub.2 =-z.sub.1.sup.t M.sub.d.sup.O.sbsp.1 =-(s.sub.1 x.sub.0,-c.sub.1 y.sub.0,0).sup.t M.sub.d.sup.O.sub.1 (8) ##EQU10## where a.sub.2, A.sub.1 to A.sub.7 are constant scalar values depending on masses, inertia and lengths of the two first joints (see appendix A), q.sub.i is the angular orientation of link i relative to the link i-1, and (s.sub.i, c.sub.i) stands for (sin(q.sub.i), cos(q.sub.i)). A Typical Implementation The torque estimation requires knowledge of the angular positions of adjacent links (also referred to as "joint positions", or "joint angles"), and their, velocities and accelerations relative to the reference body. Joint positions may be precisely measured with standard optical incremental encoders. In addition, a digital signal processor board such as sold by Delta Tau Data Systems, Inc., of Northbridge, Calif., under trade designation PMAC-VME Board, acquires the encoder data and performs differentiations and filtering, at a sampling rate of 2500 Hz, to compute the velocities and accelerations. By experiment, estimation of the position derivatives using this hardware is sufficiently fast and precise, since neither noise nor delay corrupt the torque estimation process. Knowledge of mass and inertia properties is also required in the estimation process. In a present implementation of the invention, these values have been identified using the base wrench (force/ torque) sensor as suggested in 9!. However, other, more conventional methods can be used. For instance, if a manipulator manufacturer were to include such a torque estimator as part of a manipulator, it could measure all of these parameters precisely using more conventional techniques, and include them in the software provided with the manipulator. The torque estimation calculations required for the Puma 550 described above are not computationally intensive. Using a single 68020 Motorola VME CPU board sold by Heurikon Corp. of Madison, Wis. supporting VxWorks V.4.o.2. real time development software sold by Wind River Systems, Inc., of Alameda, Calif., a 300 Hz sampling frequency was achieved to measure the base wrench, compensate for the gravity (Equation 2), compute the torques (Equations 7, 8 and 9) and control the torque control loop presented in the next section. Torque Control Open Loop Results Open loop experiments have been conducted to provide a relevant model for the torque control design, and to evaluate the accuracy and the validity of the torque estimation apparatus and process. The experiments consisted of applying a given voltage to the input of the power amplifiers and simultaneously estimating the torques at the joints with the base wrench sensing apparatus. From the experimental results, a very simple model of the Puma actuators has been derived. The amplifiers (that transform the controller voltage output into a proportional motor current), actuators and transmissions can be modeled as a linear term K.sub.act, with a disturbance torque τ.sub.dist that accounts for unmodeled nonlinear effects. The torque τ.sub.load provided by the actuator to the joint is: τ.sub.load =K.sub.act V.sub.command +τ.sub.dist, (10) where V.sub.command is the controller voltage output. FIG. 7A, which shows graphically an open loop result for the first joint, between the base link L.sub.0 (which is fixed to the base 12) and the link L.sub.1, shows the validity of the model. The base sensor estimated torque reproduces the input voltage sine wave with a disturbing torque whose sign is changed when the velocity sign changes. This disturbance torque (computed using Eq. 10, where τ.sub.est is used instead of τ.sub.load) appears to be mostly a Coulomb friction term, as shown in FIG. 7B which shows that when the velocity changes sign, the friction approximates a step function at the point of switching from one value to another, of an opposite sign. Also, note that the estimated joint torque has very low noise, due to the quality of the force sensor and its accompanying electronics. Closed Loop Control As shown in FIG. 7B, open loop experimental results exhibit very large Coulomb friction. In very fine motion applications, friction will be much larger in magnitude than the dynamic torque desired to be applied to the load. Hence, a high DC gain in the torque controller is required to compensate for this static friction disturbance. Considering this, the torque control law implemented is an integral controller with feed forward compensation: ##EQU11## where τ.sub.des and τ.sub.est are the desired and the actual (i.e. base-sensed) torques, respectively. Linear analysis of an experimentally derived model of a one DOF robot has suggested that an integral compensator provides the best performance in force/torque control 10!. It achieves low-pass filtering and zero steady state error, whereas a proportional compensator could introduce instability, and a derivative compensator is ineffective and difficult to implement. While this study also suggests that a feed forward compensator should not be used in conjunction with integral control, experimental work by the present inventors (with a real nonlinear system) shows some improvement in the torque control performance when a feed forward term is used. The control gain K.sub.int was tuned to 75% of the value that caused experimental structural oscillations. Experimental Results FIG. 5 demonstrates the effectiveness of base sensed torque control for the first joint (for torque applied to the link L.sub.1 of the Puma 550 around the axis Z.sub.0, FIG. 3). For this demonstration, the gravity compensation is not required because the first joint has a vertical axis of rotation. Gravity does not result in a moment about this axis. Therefore, no gravity compensation was conducted for this experiment. In this example, the desired torque to be actually delivered to the link L.sub.1 (compensated for gravity) by the actuator between this link and the base 12 is a triangular function with a maximum value of 3 Nm, while the dry friction is more than 5 Nm. Without torque feedback, if the motor were provided with current to deliver 3 Nm, the actual torque applied to the link would simply be zero, as the friction would be larger than the motor torque. However, with estimated torque feedback, the experimental results show that the actual torque remains very close to its desired value (FIG. 8A). In FIG. 8A, the triangular signal appears to constitute one trace, but it is actually two: one for the desired torque and one for the estimated torque with the estimated torque measured by the base wrench sensor estimator and method described above. The torque controlled motor must produce nearly 8 Nm to obtain the next 3 Nm required by the command. FIG. 8B shows that when the sign of the angular velocity changes (smoothly varying curve, measured along the left hand vertical scale), involving a large torque disturbance, (for instance at time equals three seconds) the torque error peak remains small (.+-.1 Nm, i.e. 20% only of the Coulomb friction) and is compensated for quickly. In FIG. 9, experimental results for the Puma's second joint are shown for a desired torque to be applied to the link L.sub.2 (around the horizontal axis Z.sub.1 (FIG. 3)) varying at 3 Hz, between .+-.10 Nm. The results for this joint are different from first joint (FIG. 7A) because: 1) The second joint is experiencing gravity while the first joint was not; 2) The sign of the velocity remains constant during the experiment. The square wave input corresponds to a square wave acceleration, symmetric with respect to the zero axis, which corresponds to a triangular velocity signal having a constant sign (either positive or negative). Because the magnitude of the motion is small (a few degrees) the gravity torque is almost constant. Also, because the velocity sign is constant, the friction does not vary significantly (see FIG. 7B showing that the friction variation in the first joint appears mainly on velocity sign changes). Therefore the torque delivered by the motor to provide an estimated torque equal to the desired torque is equal to the sum of the desired signal and a roughly constant value, attributable to gravity and joint friction. An important feature of using a base sensor and the method described above, as opposed to using individual torque sensors in each joint, is that the gravity need be compensated for only once, from the base measured wrench. Conversely, if one were to use direct torque sensing methods, such as sensors in each joint, it would be necessary to provide a gravity joint torque compensation model for each joint of the manipulator. The computations would need to be performed for each joint. Thus, the contributions to the gravity wrench that are provided by links that are distant from the reference body would be computed over and over again, for each joint between the link in question and the reference body. This is computationally wasteful, as compared to the base sensor invention described Position Control With Torque Feedback Controller Design The base measured torque feedback method has been used where the manipulator had a zero desired gravity compensated applied torque for each link, and external forces were applied to the end-effector. In this case, the manipulator 10 behaves virtually as a frictionless and free-floating device. The apparatus is shown schematically in FIG. 10. The manipulator 10 is again provided with a base wrench sensor 16, which provides a wrench signal to a torque estimator 20. The torque estimator 20 is composed of a gravity compensation module 159, which generates signals that correspond to both the dynamic and the gravity portions of the measured base wrench, implementing equation 2 for a general case, or Eq. 6.5 for the particular case of a Puma 550. A signal that corresponds to the gravity compensated dynamic wrench W.sub.d is output to the joint analyzer, stage 157 of the torque estimator 20, which implements equation 6, for the general case, or equations such as 7, 8 and 9, for the particular case of a Puma 550. The angle encoders (not shown) of the manipulator 10 pass their signals to the position error generator 122, which compares them to the desired position signal from the desired position signal generator 124. This position error signal is input to a position control module 126, which implements the proportional and derivative torque gain control to generate a desired torque signal. The desired torque signal is compared with the estimated torque signal at a torque error signal generator 128, which error signal is input to an integrator 160 with integral gain K.sub.int. The output of this integrator is combined with a feed forward signal of the desired torque, at a summing operator 162, which is input to a linear gain 130, that generates the command voltage that is applied to the manipulator motors to achieve the desired motion. Precise position control can be achieved using a simple PD loop enclosing the torque controller. The final controller is shown schematically in FIG. 10: ##EQU12## with: τ.sub.des =K.sub.p (q.sub.d -q)+K.sub.d (q.sub.d -q),(13) where K.sub.p and K.sub.d are the proportional and derivative diagonal gain matrices, respectively. Because the torque control loop eliminates any significant frictional effects, the position control tuning (to determine K.sub.p and K.sub.d) is very straightforward and corresponds to a linear second order system. Assuming that an average value for the joint inertia is J.sub.i, K.sub.p and K.sub.d are chosen as: K.sub.p =J.sub.i ω.sub.o.sup.2 (14) K.sub.d =2ξJ.sub.i ω.sub.o.sup.2, (15) where ω.sub.o is the closed loop band width that the user wants to provide and ξ is the damping coefficient the user wants to provide. The robustness and effectiveness of the base sensed torque control apparatus and method is illustrated in the following additional experimental results. Joint Space Experimental Results The task considered is to move the link L.sub.1 (FIG. 3) very slowly, tracking a triangular position wave. The magnitude of the desired motion is .+-1 degrees, with a period of 10 seconds. This corresponds to a desired velocity for the encoders used, of seven encoder counts per second. The base sensed torque feedback control apparatus and method of the invention, implemented as shown schematically in FIG. 10 has been experimentally compared with conventional PD and PID position controllers for this task. For these three controllers, the proportional and derivative positions gains have been tuned to provide a bandwidth of 5 Hz and a damping ratio of 0.5. The integral gain in the PID control has been selected to be quite high, equal to 80% of the smallest value exhibiting instability. FIGS. 11A and 11B displays the improved performance provided by the base sensed torque feedback. Conventional PD control leads to almost no motion, due to dry friction. The PID controller performs much better, and provides a zero steady state positioning error (FIG. 11B). However, when the sign of the velocity changes, for instance at t=2.5 or 7.5 s, the position integral compensator requires a long time (˜2.5 s) to compensate for the friction disturbance, resulting in lack of positioning precision. On the other hand, the base sensed torque feedback control apparatus compensates rapidly for the Coulomb friction at velocity sign changes (˜50 ms) and the position error remains close to zero during the task. Table 1 quantitatively summarizes the performances of the three controllers. The results of base sensed torque control show that the resolution of the encoder is reached. An encoder count corresponds to a 0.0058 degree angle, and thus that the Root Mean Square error (0.0042 deg) is less than one encoder count throughout the entire task. TABLE I______________________________________Summary of position control performances. Root Mean Integral Max. Square Square Error error errorController (deg) (deg) (deg.sup.2 s)______________________________________PD 0.12 0.059 7.7 10.sup.-2PID 0.056 0.020 9.1 10.sup.-3PD + base 0.012 0.0042 4.0 10.sup.-4sensedtorquecontrol______________________________________ Cartesian Space Experimental Results Cartesian space motion tasks require the end-effector to track a desired trajectory illustrated schematically with respect to FIG. 12. From this desired path, the desired trajectories of the first three links L.sub.1, L.sub.2 and L.sub.3 (the wrist joints 170 are locked during these experiments) are computed off-line using inverse kinematics and provided in a look-up table stored in computer memory. Thus, the control scheme is unchanged (FIG. 10). For very fine motion tasks, the torque estimation process is simplified. It has been found experimentally that the precision performance is not affected by assuming the following: W.sub.g is assumed to be constant, and is set equal to the initial static wrench measured with the base sensor. The dynamic terms in equations (7) to (9) are neglected. Hence the experimental results shown thereafter have been obtained with a controller that does not require any knowledge of the robot's mass properties. The desired end-effector trajectory is a circle with a 350 μm radius. The robot configuration is selected such that the corresponding joint displacements are maximized. In such a configuration (FIG. 12), the maximum magnitude of the joint motions is 0.1 degrees. To verify the end-effector positioning performance, an external position sensor 172 was used. This 2D photodetector measures the position of a spot of light created by a laser 174 mounted on the robot's end-effector 176 (FIG. 12). FIG. 13 shows the Cartesian tracking results in the sensor coordinate frame. The desired trajectory is a circle and the actual, external sensor path is a slightly broader trace that generally tracks the desired trajectory. Since the motion is cyclic, the sign of the velocity changes at least once in all the three joints during the motion. This results in large frictional disturbances. Despite these perturbations, the precision remains excellent: the maximum absolute position error is less than 30 μm, the root mean square error is about 10 μm. Using a Base Force/Torque Sensor Without Computing the Wrench due to Gravity The use of the apparatus described to this point has assumed that the signal processing equipment computes the gravity portion of the measured wrench, W.sub.b so that the dynamic portion of the wrench can be determined by subtracting the gravity portion from the measured wrench. As described above, this is computed using Eq. 2, or, in a special case, an equation corresponding to Eq. 6.5, based on knowledge of the values for the mass properties of the links. It is also possible, to generate a table of the gravity compensated wrench W.sub.g for many joint position combinations, and then use that table to look up the gravity portion of the wrench, rather than computing it. This is useful for operations that are repeated many times, through an identical trajectory each time. The table is generated by moving the manipulator through the trajectory an initial time, very slowly or incrementally, so that the dynamic portion of the wrench is essentially zero. Thus, the measured wrench W.sub.b is equal to the gravity portion of the wrench, W.sub.g. The measured values are recorded in a look-up table for each of the desired locations, and are used when the manipulator is moved through the trajectory at an operational speed. An advantage of this technique is that the gravity wrench need not be computed. It is only necessary to look it up. This is typically faster. It is also very accurate, not being subject to modeling errors. This technique is also useful when the manipulator is moving through only a small region of space, for which the gravity wrench in every position can be measured initially and stored in a table. Linear Joints The foregoing discussion has focused mostly on rotary joints. The examples have been conducted using rotary joints. However, the invention is fully applicable to hybrid manipulators that include linear joints along with rotary joints, such as is shown in FIG. 5. FIG. 5 shows a portion of a robot having a rotary joint and a linear joint. The joint between the links L.sub.i and L.sub.i+1 is linear and the joint between the links L.sub.i+1 and L.sub.i+2 is rotary. The invention can also be used for manipulators having only linear joints or only rotary joints. In order to analyze the force actually being applied at a linear joint, compensated for gravity, the apparatus and method of the invention is the same as that described above. A base wrench sensor is introduced between the manipulator and the reference body to which it is attached in the same manner. All of the theoretical relations underlying the analysis are the same. The only difference is that, rather than projecting the component of the moment of the wrench around an axis at a joint, what is computed is the component of the force along an axis at a joint. For instance, the analog to Eq. 6 for the gravity compensated force actually applied between the links L.sub.i and L.sub.i+1 of FIG. 5 along the z.sub.i axis would be: ##EQU13## No other special considerations need to be taken, even if some of the joints between the joint under analysis and the reference body are rotary and some are linear. The relation described by Eq. 5 is general. Due to the way that the ω.sub.j and V.sub.G.sbsb.i terms are computed as a function of encoder signals, Eq. 5 automatically takes into account the nature of the joint (rotary or linear translational). A Preferred Embodiment of the Method of the Invention Although a preferred embodiment of the method of the invention has been described above in connection with the apparatus of the invention, it is helpful to review such an embodiment, as illustrated schematically in flowchart form in FIG. 6. The method begins at 202. At 204, a system memory is initialized with the mass and inertia parameters. These parameters can be obtained in any reasonable fashion, including using the base sensor to measure them, as taught in 9!. If the gravity wrench and/or trajectory position is to be determined using look-up tables, these are also initialized at this time. The manipulator joints are set in motion by providing 206 the actuator controllers with signals that represent the initial desired force or torque, as the case may be depending on whether or not the joint in question is linear or rotary. (Note that if the joint is rotary, then a desired torque is determined, even if the actuator that generated the torque is a linear piston, such as is shown in FIG. 14, arranged to create a torque around the joint. In such a case, another routine is conducted to relate the output of the actuator to the desired torque or force to be applied through the joint.) The joint angle and/or displacement signals are generated 208 by angle encoders, displacement scales, or other appropriate apparatus. Simultaneous with the joint angle/displacement signal generation, the base wrench sensor generates 210 a wrench signal. The joint angle/displacement signals are differentiated once 212 and then again 214 to provide velocity and acceleration signals, respectively. These signals are differentiated over a sampling period in a conventional manner. From the total base wrench signal, W.sub.b, signals that represent the gravity W.sub.g, and dynamic W.sub.d, wrenches are generated 216. These signals are then processed, along with the position, velocity, acceleration, mass and inertia signals, in accordance with Eq. 6, (or Eq. 7, 8, and 9) to generate a signal that corresponds to the desired component of the wrench (force or torque) at the joint desired to be analyzed. Such a method is carried out for each link desired to be analyzed. It will be understood that, while Equation 6 completely describes the component of the wrench desired to be determined, in many cases, many of the terms can be neglected or simplified. This may be due to the geometry of the manipulator, such as in the example discussed initially above, or the small size of certain parameters (such as velocity and acceleration during very fine motion), as in the example shown most immediately above. Thus, it may be possible to implement the invention with far fewer terms for some or all of the links desired to be analyzed. However, these simplified cases are also contemplated as part of the invention. In fact, it will be the rare case where it is necessary to analyze Eq. 6 with all terms being non-zero, or non-combinable with other terms. Using the Base Sensor in Conjunction with a Conventional End-Effector Force/Torque Sensor The base wrench sensing apparatus and method has been discussed above in connection with situations where there is no force or moment transmitting interaction with the environment. This is a special, although common case. The invention of a base wrench sensor can also be used where there are force and/or moment interactions with the environment, as shown schematically with reference to FIG. 14. FIG. 14 shows a situation similar to that shown in FIG. 3, with the further addition of an end effector sensor, and additional signal processing equipment. A manipulator 210 is attached to a fixed reference body 214 through a six axis wrench (force/torque) sensor 216, as above. The wrench sensor generates a signal that corresponds to the wrench measured at the sensor origin, composed of a moment component M.sub.b and a force component F.sub.b of the wrench W.sub.b. The manipulator is equipped with an end effector 270 that interacts with an element in the environment 272, such as tightening a bolt with a wrench 274. The task involves both motion X(t) (including both translation and rotation around and a force F.sub.int and a moment M.sub.int, both of which are expressed as vectors. Using the apparatus and method of invention to realize such tasks could be done in several ways, in conjunction with different existing position/force control techniques that are well known in robotics, such as Hybrid Position/Force control 11! or Impedance Control 12!. All these techniques suppose that, at the lowest level of the control, the joint torques can be driven with enough accuracy. Thus, compensating for joint friction will provide a significant improvement in the performance of the overall system. The first way of exploiting the invention is to estimate the joint torques using a very similar analysis, but including a few changes that are described below. First, Eq. (1) is replaced by: W.sub.b =W.sub.g +W.sub.tot, (17) where W.sub.tot is the wrench that is due to both motions of the manipulator and external interaction (W.sub.tot =W.sub.int +W.sub.dyn). Second, in Eq (2) and following, the subscript .sub.d is replaced by the subscript .sub.tot. Using this analysis, the torque (or force) that is estimated corresponds to the torque (or force) actually applied at the joint to provide both the motion of the robot and the interaction wrench, compensated for gravity. For some control schemes which could be used in conjunction with the invention, it is necessary to separately control the torque (or force) that corresponds to the interaction wrench and the torque (or force) that corresponds to the link's motions. This could be achieved by exploiting the wrist force/torque sensor measurement, W.sub.int. From this wrench, the amount of torque due to the interaction, τ.sub.int, is given by: ##EQU14## where A is a point of the end-effector, q is the joint position, and J.sub.A (q) is the Jacobian of the manipulator at A. This quantity can be then subtracted from the total torque to get the amount of torque due to the motion. Some manipulators interact with the environment at points other than or in addition to the end effector. In such a case, a wrench sensor should be located at each point that is likely to interact with the environment, in such a way as to sense the wrench so that it can be incorporated into the Newton-Euler analysis. Hydraulic and Other Actuators The base wrench sensing apparatus and method can be applied to manipulators having hydraulic actuators as well, such as the Schilling Titan II manipulator, sold by Schilling Development Inc., Davis, Calif. This kind of manipulator experiences a very large amount of friction at the joints, due to the seals in the actuators. In addition, the actuator behavior is not linear, as it nearly is for an electric motor. Finally it is virtually impossible to control precisely the torques applied to this kind of robot and to realize delicate and/or precise tasks. The method used is strictly the same for the joint torque (or joint force) estimation. The difference stands in the torque control scheme which is accomplished by conventional or other means, that do not depend on the present invention. The same can be said for other types of actuators, such as linear electric motors (known also as "Sawyer" motors), and any other type of actuator. The apparatus and method of the invention can be used with simple, single joint manipulators. In such a case, the relation 6 for the gravity compensated torque that is actually applied reduces to τ.sub.1 =-z.sub.0.sup.t M.sub.d.sup.O.sub.0 !, where the terms are as defined above. Similarly, for a single joint manipulator, the gravity compensated force that is actually applied reduces to f.sub.1 =-z.sub.0.sup.t F.sub.d !. Thus, an embodiment of the method of the invention is a new method to compensate for joint friction in fine motion control of manipulators. Previous methods require either a complex modeling and identification process or expensive, delicate, noise sensitive sensors that must be designed into the equipment. An apparatus to carry out the method of the invention has also been disclosed, including a base wrench sensor and signal processing equipment that carries out the method of the invention. The method and the apparatus of the invention is very practical. A preferred embodiment of the apparatus of the invention is a 6 axis wrench (force/torque) sensor mounted at the base of the manipulator. The sensor is external to the robot and hence can be easily retrofitted under existing manipulators. A torque estimation process, as well as a controller design, have been developed. No friction model is required during any stage of the development. In addition, for very fine motion applications, the method does not require any knowledge of the mass properties. The experimental results show a very substantial enhancement of the manipulator capabilities. At the joint level, the precision reaches the encoders' resolution. At the end-effector, during very slow displacements, the position error remains smaller than 30 μm. The invention has been described above in connection with compensating the measured wrench for the wrench required to overcome a gravitational field. The invention can also be used to compensate the measured wrench for any field that is uniform and can be modeled. For instance, the invention could be used to compensate for the effects of a uniform, or known magnetic field, acting on magnetically responsive elements, or a fluid flow field, such as air or water, acting on elements immersed in a flowing stream of fluid. The wrench for these environments is simply calculated in a corresponding manner as has been the gravity based wrench, and its effects are compensated for in the same manner as are those of the gravity wrench. Thus, the invention can be used in situations where such a different type of force field is present, either in the presence of gravity, or in the absence of gravity. The foregoing discussion should be understood as illustrative and should not be considered to be limiting in any sense. While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the claims. For instance, the invention can be used with either rotary, helical or linear joints or actuators, or a combination of any of the three. The apparatus of the invention can be used in any control system, for instance any control scheme that is used with wrench sensors located at the joints. The invention can be used with manipulators that interact with the environment by exchanging a wrench therewith, or with manipulators that simply move an end effector, without contacting the environment in a wrench exchanging mode. The invention can be used in connection with manipulators, or motions, for which some of the terms in the general relations, such as Eq. 6, are equal to zero, or cancel out due to the orientation of the manipulator links. The invention can be used with manipulators having any number of joints and degrees of freedom. 1! B Armstrong, Control Of Machines With Friction, Kluwer Academic Publishers, Boston, USA, 1991. 2! M. R Popovic, K. B. Shimoga and A. A. Goldenberg, Model Based Compensation Of Friction In Direct Drive Robotic Arms, J. of Studies in Informatics and Control, Vol. 3, No 1., pp. 75-88, March 1994. 3! C. Canudas de Wit, Adaptive Control Of Partially Known Systems, Elsevier, Boston, USA, 1988. 4! M. R. Popovic, D. M. Gorinevsky and A. A. Goldenberg, Accurate Positioning Of Devices With Nonlinear Friction Using Fuzzy Logic Pulse Controller, Int. Symposium of Experimental Robotics, ISER' 95, preprints, pp. 206-211. 5! J. Y. S. Luh, W. B. Fisher and R. P. Paul, Joint Torque Control By Direct Feedback For Industrial Robots, IEEE Trans. on Automatic Control, vol. 28, No 1, February 1983. 6! L. E. Pfeffer, O. Khatib and J. Hake, Joint Torque Sensory Feedback Of A PUMA Manipulator, IEEE Trans. on Robotics and Automation, vol. 5, No 4, pp. 418-425, 1989. 7! Hake J. C. and Farah J., Design Of A Joint Torque Sensor For The Unimation PUMA 500 Arm, Final Report, ME210, University of Stanford, Calif., 1984. 8! H. West, E. Papadopoulos, S. Dubowsky and H. Chean, A Method For Estimating The Mass Properties Of A Manipulator By Measuring The Reaction Moment At Its Base, Proc. IEEE Int. Conf. on Robotics and Automation, 1989. 9! T. Corrigan and S. Dubowsky, Emulating Micro-Gravity In Laboratory Studies Of Space Robots, Proc. ASME Mechanisms Conf., 1994. 10! R. Volpe and P. Khosla, An Analysis Of Manipulator Force Control Strategies Applied To An Experimentally Derived Model, Proc. IEEE/ RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1989-1997, 1992. 11! M. H. Raibert and J. J. Craig, Hybrid Position/Force Control of Manipulators, ASME Journal of Dynamic Systems, Measurement and Control, Vol. 102, 1981, pp. 126-133. 12! N. Hogan, Impedance Control, A New Approach to Manipulation, ASME Journal of Dynamic Systems, Measurement and Control, Vol. 107, 1985, pp. 1-24. Appendix A The expressions for the constant parameters used in the torque estimation for the Puma 550 (Eq. 7,8,9) are: A.sub.1 =m.sub.2 r.sub.2 y(r.sub.2.sbsb.z +d.sub.2) A.sub.2 =m.sub.2 r.sub.2 x(r.sub.2.sbsb.z +d.sub.2)-m.sub.1 a.sub.2 r.sub.1.sbsb.z A.sub.3 =-I.sub.2.sbsb.zz -(r.sub.2.sbsb.x.sup.2 +r.sub.2.sbsb.x a.sub.2 +r.sub.2.sbsb.y.sup.2)m A.sub.4 =-m.sub.2 a.sub.2 r.sub.2.sbsb.y A.sub.5 =-m.sub.2 r.sub.2.sbsb.y (a.sub.2 +r.sub.2.sbsb.x) A.sub.6 =A.sub.4 -A.sub.5 where a.sub.2 and d.sub.2 are the Denavit-Hartenberg parameters describing the transformation between the frames 1 and 2, O.sub.2 G.sub.2 =r.sub.2 =r.sub.2.sbsb.x x.sub.2 +r.sub.2.sbsb.y y.sub.2 +r.sub.2.sbsb.z z.sub.2 (see FIG. 3) and ##EQU15## in the base (x.sub.2,y.sub.2,z.sub.2). These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims and accompanying drawings, where: FIG. 1 is a schematic block diagram showing a preferred embodiment of an apparatus that incorporates the invention, showing a manipulator and control modules. FIG. 2 is a schematic representation of a typical link i in a manipulator, showing the axes used to characterize the link, and its center of mass G.sub.i. FIG. 3 shows schematically a manipulator (a Puma brand model 550) with link frames of reference attached, following the Denavit-Hartenberg notation, equipped with a base wrench sensor. FIG. 4 shows schematically the application of Newton-Euler equations to a representative link i of a manipulator used in connection with the invention. FIG. 5 shows schematically a hybrid robot, having both rotary and linear joints. FIG. 6 shows schematically in flow chart form the steps of a preferred embodiment of the method of the invention of generating a signal that corresponds to a component of the actual, gravity compensated force or torque that is applied to a selected link of a manipulator. FIG. 7A shows schematically the open loop torque response results of an open loop experiment for joint 1 of a manipulator shown in FIG. 3. FIG. 7B shows schematically the friction characteristics of an open loop experiment for joint 1 of a manipulator shown in FIG. 3. FIG. 8A shows schematically the torque control response results of an open loop experiment for joint 1 of a manipulator shown in FIG. 3. FIG. 8B shows schematically the torque error results of the open loop experiment for joint 1 illustrated in FIG. 8A. FIG. 9 shows schematically the torque control response results of an open loop experiment for joint 2 of a manipulator shown in FIG. 3. FIG. 10 shows schematically in block diagram form a precise position control scheme with base sensed wrench feedback. FIG. 11A shows graphically the joint position tracking performance for an experiment using the apparatus of the invention with the precise position control scheme illustrated in FIG. 10. FIG. 11B shows graphically the joint position error for the experiment illustrated in FIG. 11A. FIG. 12 shows schematically a setup for a Cartesian space experiment illustrating the effectiveness of the invention. FIG. 13 shows the results of the Cartesian tracking experiment using the setup shown in FIG. 12 and the apparatus of the invention. FIG. 14 shows schematically an embodiment of the apparatus of the invention for use with manipulators that interact with the environment, exchanging a wrench with the environment at the end effector. FIG. 15 is a schematic block diagram showing an embodiment of a portion of the invention, including a wrench sensor, joint encoders, and signal processing equipment for using the signals from the wrench sensor and the joint encoders to estimate a component of the wrench applied at a joint within a manipulator such as is shown in FIG. 1. FIG. 16 is a schematic block diagram of a preferred embodiment of a torque estimator of the invention. This invention relates in general to robots and to manipulators and more specifically to the control of such devices that experience joint friction. It relates more specifically to an apparatus for estimating the torques or forces that are actually delivered by joint actuators to adjacent links. In many new applications of robotic manipulators, such as surgery or micro assembly, the manipulator end-effector position must be controlled very accurately during small, slow motions. In some cases, the position/orientation must be controlled. In others, the force/moment applied must be controlled. In still others, position/orientation and force/moment must be controlled. The precision required is difficult to achieve with currently available systems, due to nonlinear joint friction, which can lead to stick-slip motions, static positioning errors, or limit cycle oscillations. Previous techniques developed to deal with this problem can be classified in three categories: model based compensation, torque pulse generation, and torque feedback control. In model based compensation, a model is used to compute an estimate of the friction torque, which is provided to the actuator controller. The friction model can be used either in feed forward compensation control 1,2!, or in feedback compensation control 3!. A very accurate model is needed in this method, as there is no measurement of the friction in the joint. Such precise models can be adaptively identified 3!, but they still must account for many nonlinear phenomena such as Coulomb friction, dependency on joint position, influence of changes in load and temperature, nonbackdriveability, etc. As a result of this complexity, the modeling, identification, and adaptation aspects of the model-based compensation method are not fully solved, and not likely ever to be fully solved, thus making these techniques difficult to implement in practice. The problem remains prevalent, generating many papers per year proposing solutions. The torque pulse friction compensation method computes the width and magnitude of a torque pulse necessary to provide a small joint displacement. The computation can use either an explicit model 1! or simple rules of qualitative reasoning 4!. This approach appears to be more practicable than model based compensation, and usually a few pulses are sufficient to accurately reach the desired position in spite of Coulomb friction. However, the pulse generation method is limited to applications for which the trajectory to reach the final position is not important, since only finite displacements are controlled. The torque feedback control technique is based on a joint torque control loop. The torque applied to the manipulator joint is sensed and fed back in a joint torque loop. This method has produced among the best experimental results found in the literature for joint friction compensation. In experiments involving manipulators with high friction gear trains, this technique has reduced the effective friction torque by up to 97% 5,6!. In addition, the method does not require any friction model and is very robust with respect to changes in load or friction torque magnitude. Unfortunately, for a rotary joint, the method requires knowing the torque of the joint. Most commercially available manipulators are not equipped with joint torque sensors. Retrofitting such sensors in the joints of an existing manipulator would be very difficult. Also, manipulators designed to include such sensors have a number of practical problems. For example, introducing flexures instrumented with strain gages in the joint adds structural flexibilities and decreases the overall performances of the manipulator 7!. Substantial nonlinearities in the sensor output can result from the complex loading on the sensor by a joint gear train. Each individual joint sensor must be specifically calibrated. This calibration must be done on board, when the manipulator is fully assembled, to take into account the loading conditions. Consideration of the effects of gravity upon the links must be taken, in a feed forward module, by repeated applications of a gravity model at each joint. Finally, individual joint sensors are expensive, add to wiring complexity, and are subject to damage due to manipulator vibrations or overloads. The lengthy wiring required from each sensor to the processing unit is prone to pick up noise particularly when the wires pass by actuator motors. Thus, although torque feedback control has been known to provide excellent results for over ten years, it is not broadly used due to these complexities and inadequacies, because determining the state of a joint is rather difficult. There is, then, need for an apparatus that can provide precise position/orientation and/or force/moment control of a manipulator's end effector, even in the presence of significant joint friction. The need extends to such an apparatus that could be practically retrofitted to existing devices as well as used in new devices. Further it is desirable to achieve such precise control without introducing structural flexibilities and nonlinearities into the manipulator. It is also desirable to minimize any additional expense, complexity and fragility. It is also desirable that the mechanical design of any such apparatus be simple and robust with minimal wiring. Thus, the several objects of the invention include to provide an apparatus that generates a signal that corresponds to the actual torque or force that is being applied by an actuator to a link, compensated for the torque or force required to overcome gravity. An additional object of the invention is to provide an apparatus that automatically eliminates the need to model or anticipate frictional effects in the joints. A further object is to provide such an apparatus that may be economically retrofitted to existing apparatus, which is robust, mechanically simple, and does not introduce structural flexibilities or nonlinearities into the manipulator. The invention provides a new approach to deal with joint friction in manipulators performing fine motions, that overcomes the difficulties of the known methods discussed above. The invention uses a six axis wrench sensor (also called a force/torque sensor) mounted between the manipulator and a reference body upon which it is supported (see FIG. 1). For rotary joints, torques are estimated from the measurements provided by this sensor. The estimation process uses Newton-Euler equations of successive link bodies. The estimated torques are used in joint torque control loops as is done with direct torque measurements. A position control loop encloses the torque controller and provides it with desired torques computed from measured position errors. For linear joints, appropriate forces are estimated. More particularly, a preferred embodiment of the apparatus of the invention is an apparatus for generating a signal that corresponds to the gravity compensated torque actually applied to a link at a rotary joint of a manipulator. The apparatus comprises a wrench sensor that is connected between the base and the reference body to generate a base wrench signal that corresponds to the base wrench that is applied between the base and the reference body, expressed at a sensor measurement point. Coupled to the position sensors and the wrench sensor, is a gravity compensator, which generates a dynamic wrench signal that corresponds to the gravity compensated dynamic component of the base wrench signal, based on the position signals and the base wrench signal. Coupled to the gravity compensator and the position sensors, a joint analyzer, which generates a signal that corresponds to the gravity compensated torque that is actually applied to the link at the rotary joint, based on the dynamic wrench signal and the position signals. In one embodiment, the gravity compensator is a gravity wrench generator, which generates a signal that corresponds to the gravity component of the base wrench, based on the position signals and, coupled to the gravity wrench generator and the wrench sensor, a dynamic wrench signal generator, which generates the signal that corresponds to the gravity compensated dynamic component of the base wrench signal, based on the gravity wrench signal and the base wrench signal. The gravity wrench generator may include means for generating a vector O.sub.s G.sub.j from the sensor measurement point to the center of mass of each link; and means for generating a gravity moment signal that corresponds to ##EQU1## where n is the number of links, m.sub.j is the mass of link j and g is the acceleration due to gravity. Alternatively, rather than generating all of the position vectors and calculating the gravity wrench, a look up table may be used that has been prepared before hand. In general, the apparatus implements the following relation to relate the gravity compensated torque that is actually applied to the link at the rotary joint on the one hand and, on the other hand, the dynamic wrench signal and said position signals: ##EQU2## where: τ.sub.l+1 is said gravity compensated torque that is actually applied to the link at the rotary joint; M.sub.d.sup.O.sbsp.i is the moment of the dynamic wrench signal, expressed at O.sub.i, which is the origin of the axis around which the link to which the torque is actually applied rotates relative to an adjacent link that is kinematically closer to the reference body; -z.sub.i.sup.t ! is an operator that projects a vector, onto an axis z.sub.i, which axis is at O.sub.i ; i is the number of the movable links that are kinematically closer to the reference body than the link to which the gravity compensated torque is actually applied. For each link j of the i movable links: I.sub.j is the inertia tensor at its center of mass G.sub.j ; ω.sub.j is the angular velocity relative to a fixed frame; ω.sub.j is the angular acceleration; V.sub.G.sbsb.j is the linear acceleration of the center of mass; O.sub.i G.sub.j is the vector between O.sub.i and the center of mass G.sub.j ; and m.sub.j is the mass. Rather than implementing this entire relation, for a specific manipulator, or for particular tasks where the dynamic or gravity terms are either negligible or constant, the relation reduces to fewer terms. Typically, however, knowledge of the positions of each of the links is required. Another embodiment of the invention is an apparatus for determining the force that is applied at a linear joint of a manipulator. The apparatus is similar to that described above for a rotary joint. However, the specific relation that is implemented by some embodiments of the invention with respect to a force is: ##EQU3## In this case, the variables are as follows: f.sub.i+1 is the gravity compensated force that is actually applied to the link at the linear joint; F.sub.d is the force of the dynamic wrench signal; the axis z.sub.i, which axis is at O.sub.i, is the origin of the axis along which the link to which the force is actually applied translates relative to an adjacent link that is kinematically closer to the reference body. The remaining parameters are the same as above. Other embodiments of the apparatus of the invention include similar apparatus for determining the torque applied at a rotary joint of a single link manipulator and for determining the force applied at a linear joint of a single link manipulator. Of course, the foregoing apparatus can be combined into a single apparatus that can perform all of the functions of assessing the force or torque, as the case may be, at a joint, whether it is linear or rotary. Another preferred embodiment of the invention is a method for generating a signal that corresponds to the gravity compensated torque actually applied to a link at a rotary joint of a manipulator. The method comprises the steps of: generating a base wrench signal that corresponds to the base wrench that is applied between the base and the reference body, expressed at a sensor measurement point. Gravity is compensated for by generating a dynamic wrench signal that corresponds to the gravity compensated dynamic component of the base wrench signal, based on the position signals and the base wrench signal. A signal is generated that corresponds to the gravity compensated torque that is actually applied to the link at the rotary joint, based on the dynamic wrench signal and the position Another preferred embodiment of the invention is a method for determining the gravity compensated force that is actually applied at a linear joint of a manipulator. This method is similar to that described for a rotary joint. Typically, the method of the invention evaluates the appropriate relation, or reduced version thereof, as set out above. Further embodiments of the method of the invention can be used to determine the force or torque at a single joint manipulator.
{"url":"http://www.google.com/patents/US5767648?ie=ISO-8859-1&dq=%E2%80%9Cconfiguration+using+structure+and+rules+to+provide+a+user+interface.%E2%80%9D","timestamp":"2014-04-19T12:42:57Z","content_type":null,"content_length":"185712","record_id":"<urn:uuid:cfa5775b-76ff-459e-bc57-0b989b3f3587>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Winchester, MA SAT Math Tutor Find a Winchester, MA SAT Math Tutor ...I've worked with hundreds of students, both one-on-one and in classrooms, of ALL ages, backgrounds, and ability levels - from those in middle and high school to Harvard and MIT undergraduate and graduate students and adults in their forties. MY PHILOSOPHY: My overarching goal in all the subjects... 47 Subjects: including SAT math, chemistry, English, reading ...I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams. I even earned a perfect score on the Math Subject Test. 36 Subjects: including SAT math, English, reading, calculus ...Exponential and logarithmic equations, 11. Permutations, combinations, and basic probability, 12. Matrices. 9 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...Calculus is the study of rates of change, and has numerous and varied applications from business, to physics, to medicine. The complexity of the topics involved however, require that your grasp of mathematical concepts and function properties is strong. I have helped numerous students master both the foundations and the specific skills taught in a variety of calculus courses. 23 Subjects: including SAT math, calculus, GRE, physics ...Please feel free to ask me any questions. I look forward to working with you!I am licensed to teach math grades 8-12. This license includes Algebra I. 9 Subjects: including SAT math, geometry, algebra 1, algebra 2 Related Winchester, MA Tutors Winchester, MA Accounting Tutors Winchester, MA ACT Tutors Winchester, MA Algebra Tutors Winchester, MA Algebra 2 Tutors Winchester, MA Calculus Tutors Winchester, MA Geometry Tutors Winchester, MA Math Tutors Winchester, MA Prealgebra Tutors Winchester, MA Precalculus Tutors Winchester, MA SAT Tutors Winchester, MA SAT Math Tutors Winchester, MA Science Tutors Winchester, MA Statistics Tutors Winchester, MA Trigonometry Tutors Nearby Cities With SAT math Tutor Arlington Heights, MA SAT math Tutors Arlington, MA SAT math Tutors Belmont, MA SAT math Tutors Burlington, MA SAT math Tutors Charlestown, MA SAT math Tutors Everett, MA SAT math Tutors Lexington, MA SAT math Tutors Malden, MA SAT math Tutors Medford, MA SAT math Tutors Melrose, MA SAT math Tutors Reading, MA SAT math Tutors Stoneham, MA SAT math Tutors Wakefield, MA SAT math Tutors West Medford SAT math Tutors Woburn SAT math Tutors
{"url":"http://www.purplemath.com/Winchester_MA_SAT_Math_tutors.php","timestamp":"2014-04-17T04:32:37Z","content_type":null,"content_length":"23986","record_id":"<urn:uuid:9a64e21f-78d6-4355-a090-18daf6cf825c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Taylor Series and shifting indices May 15th 2011, 10:48 AM #1 Junior Member Oct 2010 Taylor Series and shifting indices For the truncated form of the Taylor series, the Nth Taylor polynomial is given in our notes as $T_{N}(x) = \sum_{n = 0}^N \frac{f^{(n)}(0)}{n!} \cdot x^{n}$ But then later, he defines $T_{3}(x)$ for $f(x)=\sin x$ as $x-\frac{x^{3}}{6}$ Should it not be expanded to three terms, or is the notation signifying that it should be expanded until n, rather than N, reaches 3? Also, I have a question about shifting indices. Our notes give this equality $\sum_{n = 1}^\infty \frac{(-1)^{n}4^{n}}{n!} = \sum_{n = 1}^\infty \frac{(-4)^{n}}{n!} = \sum_{n = 0}^\infty \frac{(-4)^{n}}{n!} - 1 = e^{-4} - 1 ,$ but is it not: $\sum_{n = 1}^\infty \frac{(-4)^{n}}{n!} = \sum_{n = 0}^\infty \frac{(-4)^{n}}{n!} - \frac{4}{n+1}?$ For the first, notice that all even-numbered terms cancel since $\sin(0)=0$. For the second, you're just adding the term corresponding to $n=0$, I don't really know what you did to get to the other expression. Oh ok I see now, his notes just said to take 1 away from n in the index and increase it in the summand, not very clear to be honest! Thank you for that May 15th 2011, 10:56 AM #2 Super Member Apr 2009 May 15th 2011, 11:05 AM #3 Junior Member Oct 2010
{"url":"http://mathhelpforum.com/differential-geometry/180670-taylor-series-shifting-indices.html","timestamp":"2014-04-19T20:26:21Z","content_type":null,"content_length":"36716","record_id":"<urn:uuid:70845846-f6e5-4d51-8b17-c66eb6d3b96a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory of linear and integer programming, Wiley-Interscience Series in Discrete Mathematics Results 1 - 10 of 106 - Journal of Artificial Intelligence Research , 1996 "... This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem ..." Cited by 1298 (23 self) Add to MetaCart This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. - IN PROC. OF THE ELEVENTH INTERNATIONAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE , 1995 "... Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argu ..." Cited by 131 (10 self) Add to MetaCart Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the structure of MDPs. , 1992 "... This paper addresses the problem of compiling perfectly nested loops for multicomputers (distributed memory machines). The relatively high communication startup costs in these machines renders frequent communication very expensive. Motivated by this, we present a method of aggregating a number of lo ..." Cited by 103 (20 self) Add to MetaCart This paper addresses the problem of compiling perfectly nested loops for multicomputers (distributed memory machines). The relatively high communication startup costs in these machines renders frequent communication very expensive. Motivated by this, we present a method of aggregating a number of loop iterations into tiles where the tiles execute atomically -- a processor executing the iterations belonging to a tile receives all the data it needs before executing any one of the iterations in the tile, executes all the iterations in the tile and then sends the data needed by other processors. Since synchronization is not allowed during the execution of a tile, partitioning the iteration space into tiles must not result in deadlock. We first show the equivalence between the problem of finding partitions and the problem of determining the cone for a given set of dependence vectors. We then present an approach to partitioning the iteration space into deadlock-free tiles so that communicati... , 1999 "... We discuss topics related to lattice points in rational polyhedra, including efficient enumeration of lattice points, “short” generating functions for lattice points in rational polyhedra, relations to classical and higher-dimensional Dedekind sums, complexity of the Presburger arithmetic, efficien ..." Cited by 91 (6 self) Add to MetaCart We discuss topics related to lattice points in rational polyhedra, including efficient enumeration of lattice points, “short” generating functions for lattice points in rational polyhedra, relations to classical and higher-dimensional Dedekind sums, complexity of the Presburger arithmetic, efficient computations with rational functions, and others. Although the main slant is algorithmic, structural results are discussed, such as relations to the general theory of valuations on polyhedra and connections with the theory of toric varieties. The paper surveys known results and presents some new results and connections. - International Journal of Parallel Programming , 2000 "... Automatic parallelization in the polyhedral model is based on affine transformations from an original computation domain (iteration space) to a target space-time domain, often with a different transformation for each variable. Code generation is an often ignored step in this process that has a signi ..." Cited by 72 (3 self) Add to MetaCart Automatic parallelization in the polyhedral model is based on affine transformations from an original computation domain (iteration space) to a target space-time domain, often with a different transformation for each variable. Code generation is an often ignored step in this process that has a significant impact on the quality of the final code. It involves making a trade-off between code size and control code simplification/optimization. Previous methods of doing code generation are based on loop splitting, however they have non-optimal behavior when working on parameterized programs. We present a general parameterized method for code generation based on dual representation of polyhedra. Our algorithm uses a simple recursion on the dimensions of the domains, and enables fine control over the tradeoff between code size and control overhead. - Journal of VLSI Signal Processing , 1989 "... The parallelization of many algorithms can be obtained using space-time transformations which are applied on nested do-loops or on recurrence equations. In this paper, we analyze systems of linear recurrence equations, a generalization of uniform recurrence equations. The first part of the paper des ..." Cited by 67 (7 self) Add to MetaCart The parallelization of many algorithms can be obtained using space-time transformations which are applied on nested do-loops or on recurrence equations. In this paper, we analyze systems of linear recurrence equations, a generalization of uniform recurrence equations. The first part of the paper describes a method for finding automatically whether such a system can be scheduled by an affine timing function, independent of the size parameter of the algorithm. In the second part, we describe a powerful method that makes it possible to transform linear recurrences into uniform recurrence equations. Both parts rely on results on integral convex polyhedra. Our results are illustrated on the Gauss elimination algorithm and on the Gauss-Jordan diagonalization algorithm. 1 Introduction Designing efficient algorithms for parallel architectures is one of the main difficulties of the current research in computer science. As the architecture of super-computers evolves towards massive - JOURNAL OF SYMBOLIC COMPUTATION , 2003 "... This paper discusses algorithms and software for the enumeration of all lattice points inside a rational convex polytope: we describe LattE, a computer package for lattice point enumeration which contains the first implementation of A. Barvinok's algorithm [8]. We report on computational experiments ..." Cited by 65 (11 self) Add to MetaCart This paper discusses algorithms and software for the enumeration of all lattice points inside a rational convex polytope: we describe LattE, a computer package for lattice point enumeration which contains the first implementation of A. Barvinok's algorithm [8]. We report on computational experiments with multiway contingency tables, knapsack type problems, rational polygons, and flow polytopes. We prove that this kind of symbolic-algebraic ideas surpasses the traditional branch-and-bound enumeration and in some instances LattE is the only software capable of counting. Using LattE, we have also computed new formulas of Ehrhart (quasi)polynomials for interesting families of polytopes (hypersimplices, truncated cubes, etc). We end with a survey of other "algebraic-analytic" algorithms, including a "polar" variation of Barvinok's algorithm which is very fast when the number of facet-defining inequalities is much smaller compared to the number of - In Proc. Static Analysis Symposium, LNCS 983 , 1995 "... This paper introduces a finite-automata based representation of Presburger arithmetic definable sets of integer vectors. The representation consists of concurrent automata operating on the binary encodings of the elements of the represented sets. This representation has several advantages. First, be ..." Cited by 46 (4 self) Add to MetaCart This paper introduces a finite-automata based representation of Presburger arithmetic definable sets of integer vectors. The representation consists of concurrent automata operating on the binary encodings of the elements of the represented sets. This representation has several advantages. First, being automata-based it is operational in nature and hence leads directly to algorithms, for instance all usual operations on sets of integer vectors translate naturally to operations on automata. Second, the use of concurrent automata makes it compact. Third, it is insensitive to the representation size of integers. Our representation can be used whenever arithmetic constraints are needed. To il... - IN PROC. SUPERCOMPUTING 92 , 1992 "... This paper presents a linear algebraic approach to modeling loop transformations. The approach unifies apparently unrelated recent developments in supercompiler technology. Specifically we show the relationship between the dependence abstraction called dependence cones, and fully permutable loop nes ..." Cited by 44 (11 self) Add to MetaCart This paper presents a linear algebraic approach to modeling loop transformations. The approach unifies apparently unrelated recent developments in supercompiler technology. Specifically we show the relationship between the dependence abstraction called dependence cones, and fully permutable loop nests. Compound transformations are modeled as matrices. Nonsingular linear transformations presented here subsumes the class of unimodular transformations. Nonunimodular transformations (with determinant 1) create "holes" in the transformed iteration space. We change the step size of loops in order to "step aside from these holes" when traversing the transformed iteration space. For the class of non-unimodular loop transformations, we present algorithms for deriving the loop bounds, the array access expressions and step sizes of loops in the nest. The algorithms are based on the Hermite Normal Form of the transformation matrix. We illustrate the use of this approach in several problems such a... , 1994 "... This paper describes the POMDP framework and presents some wellknown results from the field. It then presents a novel method called the witness algorithm for solving POMDP problems and analyzes its computational complexity. We argue that the witness algorithm is superior to existing algorithms for s ..." Cited by 44 (3 self) Add to MetaCart This paper describes the POMDP framework and presents some wellknown results from the field. It then presents a novel method called the witness algorithm for solving POMDP problems and analyzes its computational complexity. We argue that the witness algorithm is superior to existing algorithms for solving POMDP's in an important complexity-theoretic sense.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=215599","timestamp":"2014-04-17T23:22:08Z","content_type":null,"content_length":"39130","record_id":"<urn:uuid:66e6ec16-8585-40fe-be57-cdfaf470bccb>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Why is the sum of 1-2+3-4+... is 1/4? April 25th 2010, 09:00 AM #14 April 25th 2010, 08:57 AM #13 April 25th 2010, 08:54 AM #12 April 25th 2010, 08:32 AM #11 MHF Contributor April 25th 2010, 07:39 AM #10 Senior Member April 25th 2010, 06:20 AM #9 Senior Member April 25th 2010, 05:56 AM #8 April 25th 2010, 05:44 AM #7 Senior Member April 24th 2010, 08:34 PM #6 Senior Member April 24th 2010, 08:05 PM #5 Senior Member April 24th 2010, 07:35 PM #4 April 24th 2010, 07:26 PM #3 Senior Member April 24th 2010, 06:38 AM #2 April 24th 2010, 06:20 AM #1 Senior Member Nop, this is not a good reason: the conterexample $\sum^\infty_{k=0}\frac{1}{n!}=eotin\mathbb{Q}$ shows that a sum of only rational numbers can have a non-rational sum...of course, as already noted, the gist here is that the sum is infinite and thus closedness and other related things do not apply here. Nothing else to do, so I'm trying: 1 - 2 - 3 + 4 + 5 - 6 - 7 + 8 + 9 -10 - 11 + 12 + 13 .... Well, this gives me running totals of 1, -4, 5, -8, 9, -12, 13 ... Now how in heck can I paradoxize this Things get weird when we begin talking infinity though. But then again, the abstract algebra sense of the Integers does include an infinite collection. Even the linear algebra sense gives you closure since the Integers are a vector space. EDIT: I can see how this would be 1/4 in the formal power sense (which is what maddas was saying), but this sub forum is not that advanced. Read this 1 ? 2 + 3 ? 4 + · · · - Wikipedia, the free encyclopedia I don't know enough math to understand the "proofs" in that article. Can anyone explain the number manipulation method? It seems like the proof that requires the least amount of math knowledge to Lvleph is right in a sense. Since you only consider the addition and the substraction of integer terms, you cannot possibly end up with a non-integer result. So the original statement cannot be right ! Looks like I need to read up on my Real Analysis again, because this is not making sense to me. Any convergent series whose terms are integers has only finitely many non-zero terms. But closure only applies to finite sums. For instance, the rationals are closed under addition but the sum $1+\frac1{2!}+\frac1{3!}+\frac1{4!}+\cdots$ is irrational. There is no way this is 1/4. The reason? Because, integers are closed under addition. Last edited by Prove It; April 24th 2010 at 08:07 PM. Call $S:= 1-1+1-\cdots$. Cancelling off the very first term by subtracting 1 gives $-1+1-1+\cdots= - (1-1+1-\cdots)$ so $S = -(S - 1)$ or $S=\frac12$. Call $T := 1-2+3-4+\cdots$. Add the series $1-2+3-4+\cdots$ and $1-1+1-1+\cdots$ together term by term to get $T+S = (1+1) - (2+1) + (3+1) - \cdots = 2 - 3 + 4 -\cdots = -(T-1)$. Since $S=\ frac12$, $T+\frac12 = -(T-1)$ or $T=\frac14$. The remarkable thing, of course, is that you get the same answer if you do it "algebraically" like this as if you do it like Opalg does it (analytically, generating funtions). The binomial series $1-2x+3x^2-4x^3+\ldots$ converges to $(1+x)^{-2}$ when $|x|<1$. If you ignore that restriction on x and pretend that the series also converges to the same function when x=1 then you come up with the formula $1-2+3-4+\ldots = 1/4$, which of course is not true in any conventional sense. Why is the sum of 1-2+3-4+... is 1/4? Can someone explain to me the paradox 1-2+3-4+... =1/4? I really don't get why and how. Thanks so much. Things get weird when we begin talking infinity though. But then again, the abstract algebra sense of the Integers does include an infinite collection. Even the linear algebra sense gives you closure since the Integers are a vector space. They are? Under what field, say? EDIT: I can see how this would be 1/4 in the formal power sense (which is what maddas was saying), but this sub forum is not that advanced. The value of the series depends on the method used to sum it. Everyone knows that the series diverges in the traditional sense (aka. the partial sums are eventually graeater in magnitude than any number). The series is Abel summable to 1/4. The wikipedia article the OP linked discusses other summability methods under which it converges. In some sense, any "reasonable" summation method which sums this series must give 1/4. Hardy has a book on divergent series for anyone interested in these sums and their applications. (Also, Wilmer, you have wrong series; the signs alternate every term, not every two terms. The partial sums are actually 0,1,-1,2,-2,3,-3,...) April 25th 2010, 09:18 AM #15 Senior Member
{"url":"http://mathhelpforum.com/algebra/141062-why-sum-1-2-3-4-1-4-a.html","timestamp":"2014-04-19T15:20:52Z","content_type":null,"content_length":"77140","record_id":"<urn:uuid:35db392c-34e1-4546-ab6a-7ca3acc92896>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Geographic Geometry - 1 The Great Pyramid is aligned with Machu Picchu, the Nazca lines and Easter Island along a straight line around the center of the Earth, within a margin of error of less than one tenth of one degree of latitude. Other sites of ancient construction that are also within one tenth of one degree of this line include: Perseopolis, the capital city of ancient Persia; Mohenjo Daro, the ancient capital city of the Indus Valley; and the lost city of Petra. The Ancient Sumarian city of Ur and the temples at Angkor Wat are within one degree of latitude of this line. The alignment of these sites is easily observable on a globe of the Earth with a horizon ring. If you line up any two of these sites on the horizon ring, all of the sites will be right on the horizon ring. 3-D world atlas software programs can also draw this line around the Earth. Start on the Equator, at the mouth of the Amazon River, at 49° 17' West Longitude; go to 30° 18' North Latitude, 40° 43' East Longitude, in the Middle East, which is the maximum latitude the line touches; then go to the Equator at 130° 43' East Longitude, near the Northwest tip of New Guinea; then to 30° 18' South Latitude, 139° 17' West Longitude, in the South Pacific; and then back to 49° 17' West Longitude, at the Equator. Centered on Centered on Centered on Centered on 0.00° N, 49° 17' W 30° 18' N, 40° 43' E 0.00° N, 130° 43' E 30° 18' S, 139° 17' W The circumference of this line around the center of the Earth is 24,892 miles. Along this line, the great circle distance from, ☆ the Great Pyramid to Machupicchu is 7,487 miles, 30.0% of the circumference. ☆ Machupicchu is 2,564 miles from Easter Island, 10.3%. ☆ Easter Island is 10,096 miles from Angkor Wat, 40.6%. ☆ Angkor Wat is 2,490 miles from Mohenjo Daro, 10.0%. ☆ Mohenjo Daro is 2255 miles from from the Great Pyramid, 9.1%. In addition to calculating the distances between these sites as a percentage of the circumference of the Earth, the distances may also be calculated in degrees of the 360° circumference, by multiplying the percentage by 3.6. For example, the Great Pyramid is 108° away from Machupicchu. Angkor Wat and Angkor Thom were constructed at a time when 72 temples were built across the Angkor Plain. The Angkor temple at Prassat Preah Vihear, 90 miles Northeast of Angkor Wat, is within one tenth of one degree of the line. Like Machupicchu, the temple at Prassat Preah Vihear was built on the edge of a mountaintop. The first temples built around Angkor are near the city of Rolous, Southeast of Angkor Wat. The temples near Rolous are also thought to have been built on foundations constructed at a much earlier time. Halfway between Angkor Wat and the Great Pyramid is the Indus Valley, the city of Mohenjo Daro, and the unexcavated city of Ganweriwala, which is East of Mohenjo Daro, and thought to be just as large. Both of these sites are on the line between Angkor and the Great Pyramid. The Indus Valley is also antipodal to Easter Island. It is an interesting coincidence concerning these two sites, opposite each other on Earth, that of the few ancient written languages of the world that remain undeciphered, two are Indus Valley Script and Rongorongo, the written language of ancient Easter Island. The world's first known written languages, Egyptian Hieroglyphics and Sumerian Cuneiform, were also developed along this line of ancient sites. The Jewish, Christian, Muslim, Hindu, Brahman and Buddhist religions, as well as ancient Egyptian and Peruvian religions, were also developed along this line. Anatom Island is the southernmost Island in the new Republic of Vanuatu, formerly known as the New Hebrides. Anatom Island is exactly halfway between Easter Island and Angkor Wat 5,048 miles each way, or 20.3% of the circumference of the Earth. Stone ruins on Anatom Island once housed the largest missionary church in the southern hemisphere. The line crosses over the source and the mouth of the Amazon, the mouth of the Nile, the mouth of the Tigris-Euphrates, the Indus River and the Bay of Bengal near the mouth of the Ganges. The line also crosses over a number of areas of the world that are largely unexplored or unexcavated, including the Sahara Desert, the Brazilian Rainforest, the highlands of New Guinea, and underwater areas of the North Atlantic Ocean, the South Pacific Ocean and the South China Sea. For example, the midway point between the Great Pyramid and Machupicchu is in the North Atlantic Ocean, less than one degree south of the Cape Verde Islands. This is also the midway point between Easter Island and the Indus Valley. Although the Cape Verde Islands were found to be uninhabited when they were rediscovered in 1460 A.D., maps and geographical descriptions for the past 2000 years have shown this location to be the home of ancient island civilizations, including maps showing this location to be the site of Atlantis. In Plato's account of Atlantis, there was a mountainous region north of the city. Are the higher elevations of those mountains now the Cape Verde Islands? Back to Contents PART 2 - GOLDEN SECTION SITES - ANGKOR, THE GREAT PYRAMID & NAZCA Angkor Wat is 4,745 miles from the Great Pyramid and the Great Pyramid is 7,677 miles from Nazca. This is a precise expression of ?, the Golden Section: 4,745 x 1.618 = 7,677 Ninety miles northeast of Angkor Wat are the Angkor temples at Prassat Preah Vihear. Prassat Vihear is 4754 miles from the Great Pyramid. The line of ancient sites crosses over the Great Pyramid and Angkor Vihear. Twenty five miles northwest of the city of Nazca is a figure known as the Hummingbird. The Hummingbird is 7,692 miles from the Great Pyramid. The line of ancient sites also crosses over the Hummingbird. The relationship between the distances from Angkor Vihear to the Great Pyramid and from the Great Pyramid to the Nazcan Hummingbird is also a precise expression of phi: 4,754 x 1.618 = 7,692 Because the distance from the Hummingbird to Angkor Vihear is one-half of the circumference of the earth, two Golden Section relationships between these sites are shown by the circumference of the earth along the line of ancient sites: These Golden Section relationships may also be diagramed on a straight line The line of ancient sites is a line, from the perspective of the illustration in Part One, and it is a circle, from the perspective of the illustration in Part Six. The line and the circle are found in the Greek letter phi and the number 10. Zero and one are also the first two numbers and the only two numbers in the binary code. The phi relationships between these sites are reflected repeatedly in the first 500 Fibonacci numbers. The first three prime numbers, 2, 3 and 5, approximate the intervals along the circumference of 20%, 30% and 50%, between these three sites. This same percentage of the circumference relationship, accurate to three digits, is found in Fibonacci numbers 137-139: ┃ │Percentage of │ Distance │First three digits of│ First five digits of ┃ ┃ │circumference:│between sites:│ Fibonacci numbers: │ Fibonacci numbers: ┃ ┃Angkor to Giza: │ 19.1% │ 4,754 miles │#137: 191... (Prime) │#359: 47542... (Prime) ┃ ┃Giza to Nazca: │ 30.9% │ 7,692 miles │#138: 309... │#360: 76924... ┃ ┃Nazca to Angkor:│ 50.0% │ 12,446 miles │#139: 500... │#361: 12446... ┃ The next prime Fibonacci number after #137 is #359. The distances between these sites, in miles, is reflected by Fibocacci numbers 359-361, accurately to five digits. Back to Contents PART 3 - LINES THROUGH THE EARTH The line of ancient sites may be viewed as a circle because all of the sites are on a straight line around the center of the Earth. The intervals between the sites are based on their great circle distances from each other. The circle is oriented so that the two points where the circle crosses the equator are on the horizontal axis, and the two points where the circle reaches its greatest latitudes are on the vertical G = The Great Pyramid A = Angkor Wat C = Cape Verde Islands I = Indus Valley M = Machupicchu D = Mohenjo Daro N = Nazca P = Perseopolis E = Easter Island U = Ur V = Anatom Island R = Petra Straight lines may be drawn through the Earth, connecting Easter Island to Machupicchu, the Great Pyramid, Angkor Wat, and the Indus Valley (antipodal to Easter Island). The straight line distance, through the Earth, from Angkor Wat to Easter Island (7,574 miles), plus the straight line distance from Easter Island to Macchupicchu (2,522 miles), equals the great circle distance from Angkor Wat to Easter Island (10,096 miles). The straight line distance from the Great Pyramid to Easter Island (7,566 miles) is three times the straight line distance from Easter Island to Machupicchu (2,522 miles). The straight line distance from Easter Island to its antipodal point in the Indus Valley (7,924 miles), which is also the diameter of the Earth, is 3.1416 times the straight line distance from Easter Island to Machupicchu (2,522 miles), a precise expression of phi. Since the circumference of the Earth is also 3.1416 times the diameter of the Earth, the straight line distance from Easter Island to Machupicchu times pi² equals the circumference of the Earth. The angle formed by the lines from Easter Island to Machupicchu, and to the Indus Valley, is 72°. The angle formed by the lines from Easter Island to Machupicchu, and to the Great Pyramid, is 54°. Lines connecting Easter Island, the Great Pyramid, and the Angkor temples near Rolous, form an isosceles triangle with base angles of 72.9°. The base of this triangle (AG) is 4462 miles long. The height of this triangle (HE) is 7220 miles long. The length of the base of the triangle times phi equals the height of the triangle: 4,462 miles x 1.618 = 7,220 miles The length of the base of each face of the Great Pyramid is 755.6 feet. The slant height of each face is 611 feet. One half of the length of the base times phi equals the slant height of the Great Pyramid: 755.6 feet ÷ 2 = 377.8 feet 377.8 feet x 1.618 = 611 feet The ratio of the base to the slant height of the Great Pyramid is exactly two times the ratio of the base to the height of the triangle formed by through the earth straight lines connecting the Great Pyramid, Angkor and Easter Island. Lines connecting Easter Island with it's antipodal point in the Indus Valley, Nazca with it's antipodal point at Angkor, Easter Island with Nazca and Angkor with the Indus Valley, form two isosceles triangles with base angles of 72.9°. With the same angular dimensions as the triangle formed by Easter Island, Angkor and the Great Pyramid, the length of the bases of these triangles times phi also equals the height of these 2,337 miles x 1.618 = 3,782 miles The ratio of the base to the slant height of the Great Pyramid is also exactly two times the ratio of the base to the height of the triangles formed by through the earth straight lines connecting Easter Island, Nazca and the center of the Earth, and Angkor, the Indus Valley and the center of the Earth. Because the distance between the Great Pyramid and Angkor is very nearly 20% of the circumference, they are very nearly 72° apart, along the circle. Because the distance from the Great Pyramid to Easter Island is very nearly 40% of the circumference, and the distance from Angkor to Easter Island is very nearly 40% of the circumference, the Great Pyramid and Angkor are both very nearly 144° away from Easter Island, along the circle. The number 72, and to a lesser extent the numbers 54, 108, and 144, have been associated with the designs of these sites, particularly at the Great Pyramid and Angkor. The ratio of the height and the perimeter of the Great Pyramid, to the size of the Earth, is a multiple of 72. The number of temples built around Angkor is 72, and the number 54 is reflected in the numbers of statuary in the temples at Angkor. The use of these numbers is also prevalent in ancient writings and folklore surrounding these sites. The number 54 is itself a factor of 72, in that 72 plus ½ of 72, or 36, equals 108, which divided by two equals 54. The number 72 is also associated with the astronomical phenomenon known as precession, because 72 years is the length of time it takes for the constellations to move one degree due to precession. This has been offered as an explanation for the use of these numbers, suggesting that the builders of these sites were aware of the precession of the equinoxes. In the 2nd century B.C., the Greek mathematician, Archimedes, wrote an article entitled The Sand Reckoner, in which he cited earlier Greek mathematicians (like Archimedes, they had studied in Alexandria and Heliopolis) who had calculated that the Sun occupied 1/720 of the circle of the constellations. This may be an additional, or alternative, explanation for the prevalence of the number 72, and its multiples and factors, found in these sites. In any event, the existence of these numbers in the geometric relationships between these sites is complementary to the use of these numbers in their internal designs. Back to Contents PART 4 - THE GREAT PYRAMID AND THE 30th PARALLEL This circle has a different orientation than the previous diagrams and is two inches in diameter. The horizontal axis is the Equator, FC is the 30th parallel, D is 60° North latitude and E is the North Pole. The 30th parallel is exactly one-third of the great circle distance from the Equator to the North Pole, and it is located at exactly one-half of the height of the Northern Hemisphere. Like the Great Pyramid, the maximum latitude of the line of ancient sites is very close to the 30th parallel. This diagram illustrates that the relationship of the 30th parallel to the circumference of the Earth is the geometric relationship known as the Vesica Pisces. In relation to the lower circumference, DE is at 30° N latitude. In relation to the upper circumference, DE is at 30° S latitude. The ratio between the straight line distance of the 30th parallel and the radius of the Earth is 1.732 to one. 1.732 is the square root of three. Paul Michell and Charles Henry have noted the relationship between the Great Pyramid and the Vesica Pisces. The small circles in this diagram are one inch in diameter, and the large circles are three inches in diameter, forming a small Vesica Pisces circumscribed by a larger one. The triangle in this diagram has the same angular dimensions as the Great Pyramid. The circle in this diagram also represents the circumference of the Earth with the poles on the vertical axis. The radius of the circle is 1.00 inch. The exterior and interior equilateral triangles touch the circle only at the 30th parallels and the poles. The height of each equilateral triangle is 87% of the length of each of it’s sides: 3.00 ÷ 3.46 = .87 1.50 ÷ 1.73 = .87 1.00 ÷ 1.15 = .87 The length of each of the sides of the interior triangles, including the straight line distance through the earth at the 30th parallel, is also 87% of the diameter of the Earth: 1.73 ÷ 2.00 = .87 The radius of the Earth is also 87% of the distance from the center of the Earth to the point of the exterior triangles’ intersections (AP, AQ, AR, etc.): 1.00 ÷ 1.15 = .87 The Greek foot is thought to have been developed before the size and shape of the Earth was known and independently from the foot, which is also thought to have been developed before the size and shape of the Earth was known. The foot is 87% of the length of the Greek foot. The mile, which is thought to have been developed before the size and shape of the Earth was known, is 87% of the length of the nautical mile, which was developed specifically in relation to the size of the Earth. The nautical mile equals one minute of latitude, so 60 nautical miles equals one degree of latitude and 5,400 nautical miles equals the 90 degrees of latitude between the Equator and the poles. The distance from the Equator to the poles is 6,215 miles: 5,400 ÷ 6,215 = .87 One minute of latitude equals one nautical mile at any longitude. At the equator, one minute of longitude also equals one nautical mile, but at higher latitudes, the distances between each minute of longitude become shorter. Because the straight line distance through the Earth at the 30th parallel is 87% of the diameter of the Earth, the circumference around the Earth at the 30th parallel is 87% of the circumference of the Earth at the Equator, and each minute of longitude at the 30th parallel is 87% of the distance of each minute of longitude at the Equator. As a result, just as one nautical mile equals one minute of longitude at the Equator, one standard mile equals one minute of longitude at the 30th parallel. The currently accepted value for the Equatorial diameter of the Earth is 7,926 miles, with an Equatorial radius of 3,963 miles. The ratio of the radius of the Earth to the straight line distance through the Earth at the 30th parallel is 1:1.732. 3,963 miles x 1.732 = 6,864 miles (the straight line distance through the Earth at the 30th parallel). 6,864 miles x pi = 21,564 miles (the circumference of the Earth at the 30th parallel). 21,564 miles ÷ 360 degrees = 59.9 miles (one degree of longitude at the 30th parallel). 59.9 miles ÷ 60 minutes = .998 miles (one minute of longitude at the 30th parallel). As an alternative proof, the currently accepted value for the Equatorial circumference of the Earth is 24,902 miles, and the circumference of the 30th parallel is 87% of the circumference of the 24,902 miles x .87 = 21,664 miles (the circumference of the Earth at the 30th parallel). 21,664 miles ÷ 360 degrees = 60.1 miles (one degree of longitude at the 30th parallel). 60.1 miles ÷ 60 = 1.00 miles (one minute of longitude at the 30th parallel). Conclusion: One minute of longitude equals one mile at (or, like the Great Pyramid, just below) the 30th parallel. Back to Contents PART 5 - ALIGNMENTS OF THE NAZCA LINES & FIGURES The glyphs and lines at Nazca are oriented along the line of ancient sites. This image of the glyphs at Nazca, with a compass bearing, is available on the internet, but it is usually oriented away from the cardinal points so that the figures are roughly horizontal and Rotating this image so that the north-south axis is vertical, aligns the figures and geometric drawings to the line of ancient sites as it crosses Nazca. Nazca is marked by the yellow cross on the illustration [above]. The vertical line in the center of the picture is 75° West Longitude. The horizontal line is 15° South Latitude. The white dot in red circle touching the north side of the line on the upper right side of the illustration is Machupicchu. This below illustration of the Nazca lines has also been rotated so that the north-south axis is vertical, and shows the primary orientation of the lines is from Southwest to Northeast, along the line of ancient sites. Back to Contents PART 6 - THE AXIS POINTS Just as every point along the equator is 6,215 miles from both the North and South Poles, every point along the line of ancient sites is 6,215 miles from two axis points on Earth. The axis point in the Northern Hemisphere is near the Southeastern coast of Alaska, at 59° 42' N 139° 17' W, 25 miles Northeast of Yakutat, Alaska The North and South Poles have not always been in their present locations. Several theories have been offered to explain observed and suspected movements of the poles in relation to the surface of the Earth. Plate tectonics, the prevailing theory, suggests gradual movements of the surface of the Earth. This theory has been called into question by recent measurements of relative movements of the earth's surface, and by accumulating seismological data. Alternative theories include: Axial shifts; polar wander; and a catastrophic form of polar wander known as Earth crust displacement. Charles Hapgood advocated the Earth crust displacement theory in a book entitled The Path of the Poles. Hapgood supported this theory with geomagnetic and carbon dated evidence. In a book entitled When the Sky Fell, Rose and Rand Flem-Ath also advocate the Earth crust displacement theory, with additional geological and archeological evidence. Both of these works conclude that the North Pole was located in the Yukon, at 63° N 135° W, approximately 80,000 to100,000 years ago. This is about 250 miles Northeast of the axis point for the line of ancient sites at 59° 42' N 139° 17' W. It is interesting to note that some of the heaviest remaining glaciations in all of North America is on the Southeastern coast of Alaska, surrounding Yakutat. If 59° 42' N 139° 17' W was the location of the North Pole, then the line of ancient sites would have been the equator at that time. The concentric circles in the diagram represent lines of latitude from 59° 42' N 139° 17' W. The circle closest to the center of the diagram is 75°N, followed by 60°N, 45°N, 30°N and 15°N. The line of ancient sites is just beyond the horizon. Since many of the sites along the line are precisely oriented to the present North and South Poles, it is not suggested that they were constructed when the poles were in a prior location. However, if this line had previously been the equator, the placement of these sites on this line would be a remarkable coincidence. In a book entitled Atlantis Blueprint, Rand Flem-Ath and Colin Wilson have listed some of these sites, and a number of other sites, in relation to their calculation of the North Pole in the Yukon, including sites that would have been on the equator during this prior polar alignment. A line around the center of the earth, with the Yukon Pole as it’s axis point, approaches and crosses over the line of ancient sites at antipodal points in Peru and Cambodia. Along the line of ancient sites, the sites in these two areas are close to being equally distant from the Yukon Pole and from the Yakutat axis point. None of the theories offered to explain the motions of the surface of the Earth, relative to the poles, can pinpoint exact prior polar positions. The round number coordinates that are used by Hapgood and the Flem-Aths for the Yukon Pole indicate that they are approximations. If the line of ancient sites was originally selected because of its equatorial relationship with a prior polar alignment, the most accurate way to determine the location of the prior alignment is to simply calculate it from the location of the line of ancient sites. Back to Contents PART 7 - THE GREAT PYRAMID, PERU & PYTHAGORAS The Great Pyramid precisely expresses the 2pi relationship between the circumference and the radius of the Earth. □ The height of the Great Pyramid is 481.4 feet. □ The perimeter of the Great Pyramid (the length of all four sides at the base of the pyramid) is 3,023 feet. □ The height of the Great Pyramid times 2pi (6.28) is 3,023 feet. The relationship of the distances between the Great Pyramid, Nazca, and the axis point of the line of ancient sites, precisely expresses this same 2pi relationship. Inspired by Charles Hapgood's Earth crust displacement theory, Jim Bowles, a retired NASA engineer, wrote The Gods, Gemini, and the Great Pyramid. In his book, Bowles provides a scientific explanation for the causes of Earth crust displacements. He also discusses many similarities between the lines and figures at Nazca, the Great Pyramid and ancient Egyptian hieroglyphic texts. Bowles observes that the Great Pyramid and the Nazca lines and figures would have been on the equator if the North Pole had been in southeastern Alaska, and in a lengthy proof using coordinate derivations and spherical trigonometry he demonstrates the 2pi relationship between the three sites. Of course, this 2pi relationship exists between the Great Pyramid, Nazca and the axis point for the line of ancient sites, regardless of whether or not the axis point was once the North Pole. This relationship may also be demonstrated by diagramming the great circle distances between the three sites on a flat surface. Along the line of ancient sites, the distance from the Great Pyramid to the Nazca lines is 7,677.6 miles. The distance from the line of ancient sites to the axis point in southeastern Alaska is 6,215 miles. This triangle, with a base of 7,677.6 miles and sides of 6,215 miles, forms an isosceles triangle with base angles of 51° 51' and a height of 4,887.72 miles. The height of the triangle is calculated using Pythagoras' theory (a² + b² = c²). The height of the triangle times 2pi equals the base of the triangle times four. 3.1416 x 2 = 6.2832 4,887.72 miles x 6.2832 = 30,710.4 miles 7,677.6 miles x 4 = 30,710.4 miles Another special triangular relationship, found in the dimensions of the King's Chamber in the Great Pyramid, is the 3-4-5 right triangle that elegantly expresses Pythagoras' theory (3² + 4² = 5²). In the King's Chamber, the diagonal length of the east wall is 309", the length of the chamber is 412", and the long central diagonal is 515". The stone over the entrance to the King's Chamber is the only stone in the walls that is two courses high. This stone also expresses a 3-4-5 right triangle relationship by its measurements of 124"L x 93"H x 155" diagonal. The distances between the Great Pyramid, Machupicchu, and the axis point of the line of ancient sites, express this same 3-4-5 relationship. The distance from the Great Pyramid to Machupicchu (7,487 miles) is exactly 30.0% of the circumference of the Earth. The distance from the Great Pyramid and from Machupicchu to the axis point for the line of ancient sites is exactly 25% of the circumference of the Earth. Dividing this isosceles triangle by it's height, forms two 15%-20%-25% right triangles. Back to Contents
{"url":"http://www.bibliotecapleyades.net/esp_geographic_geometry_1.htm","timestamp":"2014-04-20T13:36:15Z","content_type":null,"content_length":"83775","record_id":"<urn:uuid:c72ba3e1-a6e2-4e2a-92e8-c92debba8ba7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: can u all help me to differentiate this, i try but still did not get it... find dy/dx and d^2y/dx^2question: y^2=ye^(x^2)+2x • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ced880e4b0031882dca376","timestamp":"2014-04-19T02:18:48Z","content_type":null,"content_length":"46874","record_id":"<urn:uuid:b0b7ed37-aea7-4af5-bed8-e4e36685846c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Infinity - Hashem Take a six-inch stick and divide it in half. You now have two sticks made of three inches each. Now divide those halves in half. Keep dividing them over and over again. How many pieces can you divide them into? Infinity, right? OK, now take a football field, and divide it into half, and then again in half, and again and again. How many pieces can you divide it into?Infinity, right? This means that both a six inch stick and a football field are comprised of the same amount of pieces: Infinity. And that makes no sense. If A/B=C, then CxB=A. So if six inches divided infinitely equals an infinite amount of pieces, that means if you take an infinite amount of pieces and line them up side by side, you’ll get a six-inch stick. Or maybe a football field?! The answer is, infinity is not a large number which you will reach if you count for a very long time. Infinity is unreachable. There is a never-ending (that is, an “infinite”) supply of finite numbers into which you can chop the stick – or the football field. Therefore, no matter how many times you chop up that football field, or that stick, the number of pieces will always be finite. You can keep chopping the pieces forever, but no matter how long you chop, you will never have an “infinite” amount of pieces. When we say you can keep chopping “for infinity” it doesn’t really mean you will ever chop the stick an infinite amount of times. Rather, it means you will never have to stop chopping - no matter how many times you have already chopped, you can always chop an additional time. You can go on like that forever. But because the amount of finite numbers never ends, the amount of slices you can chop that stick into never ends, and therefore, no matter how many times you chop that stick, and no matter how long you keep chopping, the amount of pieces that the stick – or football field – has been chopped into will always be a finite number. It will never reach “infinity.”When we say that there is an infinite amount of finite numbers, we mean you can keep counting finite numbers forever. But no matter how long you count, you will never reach infinity, ever. So if you are counting and counting and you have already reached a particular number, you can be sure that number is not infinity. Since infinity is not reachable, therefore, if you reached it, it is not infinity. TimeNow we are ready for our first question: The amount of time that has passed in all of history – if you were to add up every moment that has ever been, until now - will that amount of moments be finite or infinite? If someone was alive from the beginning of time and had been counting all the moments of his life, all throughout the past until now – would he be still counting finite numbers or would he have already reached “infinity”?Answer: He would still be counting finite numbers, since he could never reach infinity. If the past would consist of an infinite amount of time, it would never be over. At no point would you be able to say “we have reached infinity”, since that point is unreachable. The past, however, is over. Therefore, the amount of time that has already transpired in the past could never have reached infinity.The past cannot be an infinite amount of time because the past is over, and an infinite amount of time can never be over.As a syllogism: If the amount of moments in the past is infinity, those moments would never be finished. But the past has finished. Therefore, the amount of moments in the past is not infinity. This is based on the same idea as the answer to the stick-football field paradox, which appears problematic because it seems that even an inch can be divided up an infinite amount of times. This means that an inch and a mile - which also is divisible an infinite amount of times - are really the same length. But this is wrong, obviously, and the reason is because you can never divide up an inch, or a mile, an infinite amount of times. No matter how many times you divide up the distance, the resultant amount of parts will always be a finite number. So you will never, ever have an infinite amount of parts in any given line.Infinity cannot be reached in real life, ever. You can never count until infinity. You can never have an infinite amount of anything that has magnitude. Therefore, if we already had a certain amount of moments in time, since each moment does take up time, the total amount of moments cannot be infinity. And if the amount of time that has happened throughout history is finite, that means it had to have a beginning. There had to have been a first moment in time. If there was no first moment, then time would be infinite, and that would be impossible. And since there had to have been a first moment in time, then something must have caused it to begin, because nothing happens without a cause. Time could not have just popped into existence without something causing it to do so. It makes no sense that something should cause itself to begin (see this post). And the thing that caused time to begin must exist outside of time, because it was the cause of time. And if it exists outside of time, it must exist outside of space, because space can only exist within time. And that means that the entity that created time: 1. Cannot change, because change means there is a “before” and “after”, and without time, it is not possible to have before and after. 2. It also means that the entity that created time was “always” here and “always” will be here. The cause of time had no beginning and can have no end, because to begin or to end cannot happen if there is no change. 3. It also means that this entity cannot be affected by any stimuli. Nothing can impact on it at all. Because that would entail a change, which cannot happen if something is not subject to time.
{"url":"http://www.jewswithquestions.com/index.php?/topic/4-infinity/","timestamp":"2014-04-17T00:48:09Z","content_type":null,"content_length":"73936","record_id":"<urn:uuid:468cff52-9111-4606-ba81-6f4f8d772686>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Motion With Graphs Position vs. Time Graphs: 1. Graphs are commonly used in physics. They give us much information about the concepts and we can infer many things. Let’s talk about this position vs. time graph. As you see on the graph, X axis shows us time and Y axis shows position. We observe that velocity is constant. If it was not constant we would see a curved line in our graph. Now, we use this graph and make some calculations. From the given graph we calculate velocity; there is another way of this calculation. We just look at the slope of the graph and find the velocity. What we mean by slope is; which is the equation we use in calculation of the velocity. 2. Position is increasing positive direction. In this graph our velocity is changing. As a result of this change graph has curved line not linear. So position does not increase linearly. We can also find the velocity of the object from this graph. We should first find the slope of the curve and calculate the velocity. Example Using the given graph find the velocity of the object in intervals (1s – 3s) and (3s – 5s). In graph problems you should be careful while reading it. For example, in this example in the interval (3s-5s) position does not change. You can easily see it from the graph, but I want to show the calculation of this and it gives us same result. If there is no change in position then there is no velocity or vice versa. You can say more things about the motion of the object by just looking of the graph. The important thing is that you must know the relations, meaning of the slopes or area of the graphs. We will solve more examples using graph for deep understanding and analyzing the motion from the graphs. 3. This position time graph is an example of increasing position in negative direction. Red line shows nonlinear increasing and black line shows linear increasing. We say that linear increasing in position is the result of constant velocity which means zero acceleration. Moreover, nonlinear increase in the position is the result of changing velocity and it shows there is a nonzero 4. Until now we saw graphics including speeding up motion in positive and negative directions. Now we talk about a little bit s the position of the object which is slowing down in negative direction and black line shows the position of the object slowing down in positive direction. However, the blue and green lines show the linear decrease in position in positive and negative directions. We have seen various type of position vs. time graphs. I think they will help you in solving problems. It is really easy, you should just keep in mind that “slope shows the velocity”. Velocity vs. Time Graphs In velocity vs. time graphs, x axis is time as in the case of position vs. time graphs and y axis is velocity. We can benefit from this graph by two ways. One of them is area under the graph which gives the displacement and the slope which gives the acceleration. 1. We have talked about different kinds of motion such as, constant motion having constant velocity, accelerated motion like speeding up or slowing down. For instance, in this graph as it seen our velocity is constant, time passes but velocity does not change. What you see when you look at this graph? We see the relation of velocity and time, how velocity is changing with time. It can be said for this graph acceleration is zero because there is no change in velocity. Moreover, from velocity vs. time graphs we can calculate displacement of the object. How can we do this? Let’s think together. First, look at the definition of displacement; Since, the velocity times time gives us displacement the area under the velocity vs. time graph also gives us the displacement of the object. Look at the example given below to understand what we mean by the area under the graph. Example Using the given graph calculate the displacement of the object for the interval (0s – 4s). To solve this problem, I use the technique given above and classic formula. The area under the graph will give us the displacement. Then we compare the results of two techniques. As you can see, the results are the same, thus, we can say that in velocity vs. time graphs we can find the displacement by looking at the area under the graph. 2. In this graph there is a linear increase in velocity with respect to time so, the acceleration of the motion is constant. Moreover, we can also calculate the displacement by looking at under the area of the graph. Let’s solve another example for deep understanding. Example Calculate the displacement of the car from the given graph. We can calculate displacement by using the area under the graph, to do this we can first calculate the area of the triangle shown with blue lines and then rectangle shown with red lines. Finally the sum of these two areas gives us the total displacement of the car. 3. This graph shows the different accelerated motions. The lines are curved because acceleration is not constant. They represent the decreasing and increasing velocity in positive and negative directions. However, we do not deal with such problems now. slope for acceleration and area for displacement.
{"url":"http://www.physicstutorials.org/home/mechanics/1d-kinematics/free-fall/12-motion-with-graphs?showall=1","timestamp":"2014-04-21T14:41:13Z","content_type":null,"content_length":"31613","record_id":"<urn:uuid:3d8d65cc-0ab2-41ca-95af-084c4f82b35b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Fibonacci Words ACM ICPC 2012 Hi. I am new on this forum. I joined because I am preparing for a programming contest, and I tried to solve the fibonacci words ACM ICPC 2012 problem, but I was getting an OutOfMemoryError. Can anyone please help me? Here'e the code import java.util.Scanner; public class FibonacciWords { public static String fibSeq(int n){ String s = ""; return s += n; return fibSeq(n-1) + fibSeq(n-2); public static void main(String[] args){ Scanner s = new Scanner(fibSeq(40)); int c=0;
{"url":"http://www.dreamincode.net/forums/topic/293888-fibonacci-words-acm-icpc-2012-problem/page__pid__1719565__st__0","timestamp":"2014-04-18T07:45:14Z","content_type":null,"content_length":"116658","record_id":"<urn:uuid:b2014f47-041a-42d3-8e9f-5cb34ced78e3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Holliston Math Tutor Find a Holliston Math Tutor My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. 36 Subjects: including logic, ACT Math, reading, probability ...I am a Massachusetts licensed teacher in the elementary grades of 1 through 6. I have been teaching since 2005. I have taught language arts for grades 6-8 and currently teach technology to students in K0 through grade 8. 14 Subjects: including prealgebra, reading, ESL/ESOL, grammar ...I also utilize my own experiences to challenge students to think like psychologists and bring those skills to tackle future endeavors. I have extensive coursework and research experience in the area of Physiology and completed my Master's degree in Physiology in May 2013. I was also the TA for a graduate Physiology course and tutored groups of healthcare and graduate students 10 Subjects: including algebra 1, algebra 2, biology, chemistry I obtained my BS and PhD in Biomedical Engineering, focusing on applying mathematical and computational tools to solve biomedical problems. MATLAB is my main computer language. I have being tutoring undergraduate and graduate students in research labs on MATLAB programming. 16 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel ...As I was a graduate student myself at the US school and have been a teacher at the US veterinary school, I understand an individual learning process and can tailor to fit students’ needs. I have taught all age groups from kindergartner to graduate/professional students during my own teaching car... 11 Subjects: including algebra 2, calculus, geometry, precalculus
{"url":"http://www.purplemath.com/holliston_ma_math_tutors.php","timestamp":"2014-04-20T15:55:18Z","content_type":null,"content_length":"23638","record_id":"<urn:uuid:9f80b234-3c33-4dc8-afd7-627401a64a11>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Recent Homework Questions About Trigonometry Post a New Question | Current Questions Four wires support a 40-meter radio tower. Two wires are attached to the top and two wires are attached to the center of the tower. The wires are anchored to the ground 30-meters from the base of the Tuesday, February 26, 2013 at 10:33am b=a(tanB) where a is your altitude, B is your angle of depression and b is your distance to airport. b=30,000(tan(72)) b=92,330.51 Monday, February 25, 2013 at 8:32pm secθ = 1/cosθ If the very names of the 6 basic trig functions are unfamiliar to you, you have some major catching up to do. Monday, February 25, 2013 at 5:36pm There's probably a good geometric way to do it, but here's a trig way, using the law of cosines. AB^2 = 6^2+15^2 - 2 * 6 * 15 * (-1/2) = 351 AC^2 = PC^2 + 15^2 - 2*PC*15(-1/2) BC^2 = PC^2 + 6^2 - 2*PC*6(-1/2) AC^2-BC^2 = AB^2 = 351, so 351 = 189 + 9PC 9PC = 162 Monday, February 25, 2013 at 12:05pm Sunday, February 24, 2013 at 4:02am Trig identities:( ?? Im sorry i dont understand what u did but im sure tht this is an identity we have to prove tht the left side equals the right side:) Saturday, February 23, 2013 at 8:17pm Trig identities:( I don't think it is an identity. let A=90 deg sin(360)-cos(360)=0-1 sin(180)-cos(180)=0+1 The sides are not equal. Saturday, February 23, 2013 at 7:52pm Trig identities:( Sin4A - cos4A = sin2A - cos2A Can i just square root the left side??:O Saturday, February 23, 2013 at 7:43pm Tht will take awhile wont it? Do u think i can master it by monday:( i also have cast rule, ambiguous case, sine and cosine law, primary trig ratios to study tho:( do u think its possible:(? Saturday, February 23, 2013 at 7:28pm OK, but the effort is the same. cos/(1+sin) Now, you know that a^2-b^2 = (1+a)(1-a), and you knwo that cos^2 = 1-sin^2, so that should lead you to simplify the denominator by multiplying top and bottom by (1-sin) cos/(1+sin) * (1-sin)/(1-sin) = cos(1-sin)/(1-sin^2) = cos(1-sin... Saturday, February 23, 2013 at 7:19pm coem on. This is trig. You shoudl know algebra I a/b * b * 1/a = 1 Just multiply the fractions and cancel factors! Saturday, February 23, 2013 at 7:08pm Just cross-multiply to clear the fractions. You get cos*cos = (1-sin)(1+sin) If that doesn't look true, you do need to work some more of these. Dozens of them. sin^2 + cos^2 = 1 is one of the most useful identities you have for trig problems. Saturday, February 23, 2013 at 7:07pm trig sub question thank you Steve Saturday, February 23, 2013 at 8:57am trig sub question You need 3x = 2tanθ, so 9x^2 = 4tan^2 θ 9x^2+4 = 4tan^2 θ + 4 = 4sec^2 θ Saturday, February 23, 2013 at 5:07am sec^2(x)-1 = tan^2(x) csc^2(x)-1 = cot^2(x) since cot = 1/tan, it follows immediately Saturday, February 23, 2013 at 4:58am (sec^2x-1)(csc^2x-1)=1 prove the following identity ive been stuck on this for hours please help Friday, February 22, 2013 at 7:53pm trig sub question Hello, I have a question concerning trigonometric substitution. let's say we have integral of dx/sqrt(9x^2 + 4), so after doing a few steps we have: 2/3 integral of sec0/sqrt((2tan0)^2 + 4*) d0 (the * is for later on) the next step turns into: 2/3 integral of sec0/(sqrt(... Friday, February 22, 2013 at 6:30pm since A+B+C = 180, we can see that C = 78° Now, use the law of sines: c/sinC = a/sinA Friday, February 22, 2013 at 6:25pm What is the length of side c to the nearest whole number if side a 105 and angle A 65 degrees and angle B 37 degrees? Friday, February 22, 2013 at 5:49pm tan x = opposite/adjacent cot x = adjacent/opposite they are reciprocals of each other. tan x = 1/cot x Can you finish from here? Thursday, February 21, 2013 at 7:26pm If cot x = .78, what is tan x? Thursday, February 21, 2013 at 7:24pm use your product-to-sum formulas: sinAcosB = 1/2 (sin(A+B) + sin(A-B)) sin4x cos3x = 1/2 (sin7x + sin(x)) Thursday, February 21, 2013 at 5:48pm Express sin4xcos3x as a sum or differences of sines and cosines Thursday, February 21, 2013 at 4:04pm trig needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, February 21, 2013 at 11:36am trig needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, February 21, 2013 at 11:22am trig needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, February 21, 2013 at 10:39am We need to find the arclength of a central angle of 22° let's use a ratio arc/(2π(30)) = 22/360 arc = 22(60π)/360 = appr 11.5 inches/sec Wednesday, February 20, 2013 at 10:33am A pendulum swings through an angle of 22 degrees each second. If the pendulum is 30 inches long, how far does its tip move each second? Wednesday, February 20, 2013 at 10:25am No, they are not 11π/12 is in quadrant II =π/12 is in IV Tuesday, February 19, 2013 at 9:02pm Whoops nevermind. I was thinking if they were 11pi/6 and -pi/6. Tuesday, February 19, 2013 at 8:56pm Is the angle 11pi/12 on the unit circle the same as the angle -pi/12? I'm thinking it would be, but I'm not sure. Tuesday, February 19, 2013 at 8:51pm An object Is propelled upward at an angle θ, 45° < θ<90°, to the horizontal with an initial velocity of (Vo) feet per second from the base of a plane that makes an angle of 45° with the horizontal. If air resistance is ignored, the distance R it ... Tuesday, February 19, 2013 at 8:35pm To determine the distance to an oil platform in the Pacific Ocean, from both ends of a beach, a surveyor measures the angle to the platform from each end of the beach. The angle made with the shoreline from one end of the beach is 83 degrees, from the other end 78.6 degrees. ... Tuesday, February 19, 2013 at 7:50pm thank you! Monday, February 18, 2013 at 8:38pm solution sets? Is there an equals sign anywhere? If all this is equal to zero.. 4x(2x^2+x-1)=4x(2x-1)(x+1)=0 x=0, x=1/2, x=-1 check those Monday, February 18, 2013 at 8:31pm 8x^3+4x^2-4x solution sets Monday, February 18, 2013 at 8:26pm using the identity cos 2A = cos^2 A - sin^2 A = 1 - 2sin^2 x = 2cos^2 x -1 LS = cos 4x + cos 2x = cos^2 2x - sin^2 2x + cos 2x = (1 - sin^2 (2x) ) - sin^2 (2x) + (1 - 2sin^2 x) = 1 - 2sin^2 (2x) + 1 - 2sin^2 x = 2 - 2sin^2 (2x) - 2sin^2 x = RS Monday, February 18, 2013 at 12:32am cos^(2) 20 degrees + sin^(2) 20 degrees +pi/2 = (cos^(2) 20 degrees + sin^(2) 20 degrees) +pi/2 = 1 + π/2 Sunday, February 17, 2013 at 11:46pm Verify the identity. cos 4x + cos 2x = 2 - 2 sin^2(2x) - 2 sin^2 x Sunday, February 17, 2013 at 11:33pm sides of triangle are (2+2.5),(2+3),(2.5+3) Start with law of cosines c^2=a^2+b^2-2abCosC label the sides a,b, c solve for angle C Then, use the law of sines a/SinA=c/SinC solve for angle A then use the fact that the sum of the angles is 180 deg, find angle B. check angle B ... Sunday, February 17, 2013 at 9:40pm Three circles with radii of 4, 5, and 6 cm, respectively, are tangent to each other externally. Find the angles of the triangle whose vertexes are the centers of the circles. Sunday, February 17, 2013 at 9:36pm cos^(2) 20 degrees + sin^(2) 20 degrees +pi/2 Sunday, February 17, 2013 at 8:59pm if the sides have length a and b, a^2 = 6^2 + 3.5^2 - 2(6)(3.5)cos42 b^2 = 6^2 + 3.5^2 + 2(6)(3.5)cos42 Sunday, February 17, 2013 at 8:37pm The diagonals of a parallelogram intersect at a 42◦ angle and have lengths of 12 and 7 cm. Find the lengths of the sides of the parallelogram. (Hint: The diagonals bisect each other.) Sunday, February 17, 2013 at 8:17pm Trig Identities OHHH!!! Ok thankyou soooo much I greatly appreciate your help and one compliment...Your way better than my teacher...once again thankyou sooo much I think I get the jist of it now:) Sunday, February 17, 2013 at 12:48pm small correction - Trig Identities My first line should have been: There is NO one correct and foolproof way to prove identities. Sunday, February 17, 2013 at 10:25am Trig Identities There is on one correct and foolproof way to prove identities. There are some general rules you might follow 1. look for obvious relations , like sin^2 x + cos^2 x = 1 or 1 + tan^2 x = sec^2 x -- make yourself a summary of these collected from your text or notebooks 2. usually... Sunday, February 17, 2013 at 10:24am Trig Identities Proving identities: 1) 1+ 1/tan^2x = 1/sin^2x 2) 2sin^2 x-1 = sin^2x - cos^2x 3) 1/cosx - cosx = sin x tan x 4) sin x + tan x =tan x (1+cos x) 5) 1/1-sin^2x= 1+tan^2 x How in the world do I prove this...please help... I appreciateyour time thankyou soo much!! Sunday, February 17, 2013 at 10:06am first off, trig functions are just numbers. they have no units (such as meters). Maybe those little m's are meant to be ° symbols. If sin(x) = 8/32, x = 14.48° did you visit the trig site I provided? Friday, February 15, 2013 at 4:04pm Handy trig calculators can be found at http://www.rapidtables.com/calc/math Friday, February 15, 2013 at 2:31pm the question probably says tanØ = 38 and sinØ = 8/32 a statement like sin = 8/32 is a "mathematical sin" sin is a trig operator and needs an argument that is like saying √ = 12 , another meaningless statement. anyway..... if tanØ = 38 , we ... Friday, February 15, 2013 at 2:28pm You are just looking for the hypotenuse of a right-angled triangle with height 27 and base angle of 60° using fundamental trig .... sin60° = 27/h, where h is the hypotenuse h = 27/sin60 = 31.1769 ft = 31 ft, 2 inches time =distance/rate = 31.1769/75 = .416 minutes ... Friday, February 15, 2013 at 1:05pm Handy trig calculators can be found at http://www.rapidtables.com/calc/math just plug in your value for 22 degrees, get the sine, and then multiply by 11. Hard to believe that someone with access to a computer can't figure out how to get some calculations made. Heck, ... Friday, February 15, 2013 at 12:32pm This is just a straightforward law of cosines problem. We want to find a, given A,b,c. A=58°, c=7.5, b=8.6 a^2 = b^2+c^2 - 2bc cosA plug and chug. . . Thursday, February 14, 2013 at 4:34pm a lighthouse is located at point A. a ship travels from point B to point C. At point B,, the distance between the ship and the lighthouse is 7.5km. At point C the distance between the ship and the lighthouse is 8.6km. Angle BAC is 58 degrees. Determine the distance between B ... Thursday, February 14, 2013 at 2:16pm let's use a sine function .... amplitude = 20 period = 2π/k = 2 2k = 2π k = π so far we have height = 20 sin π(t + c) + d , assuming we have a phase shift and a vertical shift clearly, the whole sine curve has to shifted upwards by 24 units, so that the... Wednesday, February 13, 2013 at 9:41pm A Ferris wheel is 40 meters in diameter and boarded from a platform that is 4 meters above the ground. The six o'clock position on the Ferris wheel is level with the loading platform. The wheel completes 1 full revolution in 2 minutes. How many minutes of the ride are ... Wednesday, February 13, 2013 at 8:30pm Find the lengths of the missing sides in the triangle. Write your answers as integers or as decimals rounded to the nearest tenth. The diagram is not drawn to scale. triangle has sides 7, Y and X and 45 degree angle can someone show me how to do this? im supposed to use trig ... Wednesday, February 13, 2013 at 9:40am y = 3cos(3x + pi) y = 3cos 3(x + π/3) amplitude : 3 period = 2π/3 phase shift: π/3 to the left Wednesday, February 13, 2013 at 9:26am Given the function y = 3cos(3x + pi), identify: Amplitude (if applicable, give the answer in fractional form) Period (in radians as a multiple of pi - note: do not write "rad" or "radians" in your answer) Phase Shift (if the shift is right, enter + and if ... Wednesday, February 13, 2013 at 12:51am cos(x) cos(7pi/4) - sin(x) sin(7pi/4) + cos(x) cos(7pi/4) + sin(x) sin(7pi/4) = 1 2cos(x) cos(7pi/4) = 1 2cos(x) * 1/√2 = 1 cos(x) = 1/√2 x = pi/4 or 7pi/4 Tuesday, February 12, 2013 at 11:03pm Cos[x + (7 Pi)/4] + Cos[x - (7 Pi)/4] = 1 Tuesday, February 12, 2013 at 10:22pm Cos x Tuesday, February 12, 2013 at 5:47pm Perhaps you recognized the 30-6-90 triangle (sinØ=1/2 , so Ø=30°) so just use a simple ratio: h/2 = 8/1 h = 16 or sinØ - .5 Ø = 30° then sin30 = 8/h h = 8/sin30 = 8/(1/2) = 16 Tuesday, February 12, 2013 at 11:45am ABC- with angles AB, and C and sides AB,BC, and AC, angle B is right 90degree angle, if sin of angle A is 0.5, side BC 8in., what is length of AC Tuesday, February 12, 2013 at 11:01am For height at lower angle: tan 15°10' = height1/140 height1 = 140tan15°10' in the same way: height2 = 140tan29°30' rise of the balloon in that change of angle = 140tan29°30' - 140tan15°10' = 41.256 I am sure your units are not correct. ... Monday, February 11, 2013 at 10:36pm As a hot-air balloon rises vertically, its angle of elevation from a point P on level ground d = 140 kilometers from the point Q directly underneath the balloon changes from 15°10' to 29°30' (see the figure). Approximately how far does the balloon rise during ... Monday, February 11, 2013 at 9:37pm maths (trig) tangent of theta is opposite / adjacent. So your theta (angle) here is 50, your opposite is 27, and we don't know the adjacent. So do simple algebra & bring x to the other side by multiplying so you have tan50 * x = 27, now divide tan50 * x by tan50 to get x by itself. Now... Monday, February 11, 2013 at 2:05am maths (trig) how would you work out tan 50=27/x please help Monday, February 11, 2013 at 1:56am Did you mean solve cos(2Ø) = π/4 ? Sunday, February 10, 2013 at 9:34pm what is the exact value of this equation. cos(20) if it equals pi/4 Sunday, February 10, 2013 at 8:02pm Two people decide to estimate the height of a flagpole. One person positions himself due north of the pole and the other person stands due east of the pole. If the two people are the same distance from the pole and a = 30 feet from each other, find the height of the pole if ... Sunday, February 10, 2013 at 12:23pm You have no equations to solve, you probably meant "simplify the expressions" 1. sin^2 x (1/sin^2 x - 1) = 1 - sin^2 x = cos^2 x 2. (cosx/sinx) (1/cosx) = 1/sinx = cscx 3. recall the property of complementary trig ratios, that is... sin(π/2 - x) = cosx and cos... Sunday, February 10, 2013 at 7:28am Think of a right triangle with the bottom leg 1 and the side leg 73. It is very tall and skinny. The base angle is very close to 90 degrees. If that angle is x, then tan(x) = 73. Go back to basics and start looking at right triangles to see how the trig functions are defined. ... Friday, February 8, 2013 at 4:38pm so the answer is 89.22? sorry I hope I don't sound dumb but I don't understand trig AT ALL. Friday, February 8, 2013 at 4:34pm just use your calculator for the arctan function. Or, go to http://www.rapidtables.com/calc/math/Arctan_Calculator.htm and enter 73 and get arctan(73) = 89.22° so, tan(89.22°) = 73 just think of the trig functions as another set of functions and inverses If I ... Friday, February 8, 2013 at 4:30pm sec^2(t) = 1+tan^2(t) That help? If not, add some parentheses so we know just what is being discussed. Friday, February 8, 2013 at 2:31pm Friday, February 8, 2013 at 12:51pm Thursday, February 7, 2013 at 9:35pm Thursday, February 7, 2013 at 6:32am First observation point A second observation point B top of tower P bottom of tower Q In triangle ABP, angle A = 70 --- given angle PBA = 95 --- exterior angle to 85° angle APB = 15° AB = 55m --- given by the sine law BP/sin70 = 55/sin15 BP = 55sin70/sin15 in triangle ... Tuesday, February 5, 2013 at 9:21am a tower that is a 200 meters is leaning to one side. from a certain point on that side, the angle of elevation to the top of the tower is 70 degree. From a point 55 meters closer to the tower, the angle of elevation is 85 degree. Determine the acute angle from the horizontal ... Tuesday, February 5, 2013 at 8:33am Your lack of brackets make your equations totally ambiguous. Secondly your trig ratios have no argument, sin and cos by themselves are meaningless that's like saying 5 + √ I happen to know the second one , so (1+ sinØ)/(1 - sinØ) = (secØ + tan&... Monday, February 4, 2013 at 7:03pm 1/tanβ + tanβ (1+tan^2 β)/tanβ sec^2 β/tan since sec^2 β = 1+tan^2 β Monday, February 4, 2013 at 4:58pm 1/tan beta +tan beta=sec^2 beta/tan beta Monday, February 4, 2013 at 4:50pm trig Elev. & Depress part ii as usual, draw a diagram. If the tree has height h, (h-4.5)/55 = tan61° Monday, February 4, 2013 at 4:12pm trig Elev. & Depress part ii You are 55ft from a tree. The angle of elevation from your eyes, which are 4.5 ft off the ground,to the top of the tree is 61 degrees. To the nearest foot, how tall is the tree? Monday, February 4, 2013 at 4:09pm tan 8.4 = h/3.4 h =3.4 tan 8.4° = ....... Friday, February 1, 2013 at 2:26pm A mountain road makes an angle θ = 8.4° with the horizontal direction. If the road has a total length of 3.4 km, how much does it climb? That is, find h. Friday, February 1, 2013 at 1:57pm cos(π/2 - Ø) = sinØ , complementary angle property then cos(+pi/2-theta)/csctheta+cos^2theta = cos(π/2-Ø) * 1/cscØ + cos^2 Ø = sinØ ( sinØ) + cos^2 Ø = sin^2 Ø + cos^2 Ø = 1 Thursday, January 31, 2013 at 11:46pm Thursday, January 31, 2013 at 10:49pm 1. determine the exact value.. cot 7pi/6 and sec(-210degrees) 2. determine approximate measure of all angels that satisfy following, and draw a sketch to show the quadrants involved cosTHETA = -0.77, -pi lessOr same than theta less than pi and cscTHETA=9.5, -270Degrees less ... Wednesday, January 30, 2013 at 3:22pm basic trig ... tan 56° = height/7 height = 7tan56° = ...... Tuesday, January 29, 2013 at 11:29pm I don't know if you are supposed to do this by simply substiting, or actually solve the equation. I will solve it 2sinx + 4cos 2x = 3 2sinx + 4(1 - 2sin^2 x) -3=0 2sinx + 4 -8sin^2 x - 3 = 0 8sin^2 x - 2sinx -1 = 0 sinx = (2 ± √36)/16 = 1/2 or -1/4 x = 30&deg... Tuesday, January 29, 2013 at 10:41pm To the nearest degree, all of the following angles are solutions of the equation 2sin x + 4 cos 2x =3 except: (1) 40 degrees (2) 150 degrees (3) 166 degrees (4) 194 degrees Tuesday, January 29, 2013 at 9:52pm What is -6i in standard form? It is originally in trig form. Tuesday, January 29, 2013 at 12:13pm math (trig.) You will need a phase shift your amplitude is correct at 17.5 your vertical shift of 17.5 is also correct period = 2π/k = 8 8k = 2π k = 2π/8 = π/4 so your k for the period is correct so let's adjust: height = 17.5 sin (π/4)(t + d) + 17.5 , where d ... Monday, January 28, 2013 at 8:25pm math (trig.) I answered 17.5*sin((pi/4)*t)+17.5 but it's wrong I'm not sure what I'm doing incorrectly Monday, January 28, 2013 at 7:30pm math (trig.) A ferris wheel is 35 meters in diameter and boarded at ground level. The wheel makes one full rotation every 8 minutes, and at time (t=0) you are at the 3 o'clock position and descending. Let f(t) denote your height (in meters) above ground at t minutes. Find a formula for... Monday, January 28, 2013 at 7:24pm not true, if you meant: tan(x/2) =( plus/minus) sqrt(1-cos(x/(1+cos(x))) let x = 2 radians LS = tan 1 = appr 1.557 RS = ± √(1 - cos( 2/(1 + cos 2) let's look at cos (2/(1+ cos2) = cos(2/.58385) = cos (3.4255) = -.95996 so RS = ± √( 1 - (-.95996... Sunday, January 27, 2013 at 4:00pm Pages: <<Prev | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | Next>> Post a New Question | Current Questions
{"url":"http://www.jiskha.com/math/trigonometry/?page=10","timestamp":"2014-04-18T13:49:24Z","content_type":null,"content_length":"34475","record_id":"<urn:uuid:fa625e8e-6de8-4318-b235-4734553226ad>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Carl Yerger Here is a page with links to preliminary copies of papers submitted to various mathematical journals and conference proceedings. 1. A. Benjamin and C. Yerger, A combinatorial interpretation of the wheel graph, Bull. Inst. Combin. Appl. 48 (2006), 53--62. 47 (2006), 37--42. pdf 2. C. Yerger, Monochromatic and zero-sum sets of nondecreasing diameter, in preparation. 3. A. Benjamin, N. Cameron, J. Quinn, C. Yerger, Catalan determinants – A combinatorial approach, to appear, Proceedings of the 12th International Fibonacci Conference, (2006). pdf 4. A. Shiu and C. Yerger, Rabbits Redux: The cube root of four and the Fibonacci sequence, to appear, Mathematical Spectrum (2006). pdf 5. N. Watson and C. Yerger, Cover pebbling numbers and bounds for certain families of graphs, Bull. Inst. Combin. Appl. 48 (2006), 53--62. pdf 6. J. Gardner, A. Teguia, A. Vuong, N. Watson and C. Yerger, Domination cover pebbling: Graph Families, J. Combin. Math. Combin. Comput. 64 (2008), 255--271. pdf 7. N. Watson and C. Yerger, Domination Cover Pebbling: Structural Results, to appear, JCMCC, (2005). pdf 8. A. Godbole, N. Watson and C. Yerger, Threshold and complexity results for the cover pebbling game, to appear, Discrete Mathematics, (2006). pdf 9. A. Godbole, N. Watson, and C. Yerger, Cover Pebbling Thresholds for the Complete Graph, Electronic Notes in Discrete Mathematics, 22 (2005)301--304 . pdf 10. D. Grynkiewicz and C. Yerger, On three sets with nondecreasing diameter, in preparation. 11. N. Chenette, K. Kawarabayashi, D. Kral, J. Kyncl, B. Lidicky, L. Postle, N. Streib, R. Thomas and C. Yerger. Six-critical graphs on the Klein bottle, Electronic Notes in Discrete Mathematics 31 (2008), 235--240. pdf 12. N. Chenette, L. Postle, N. Streib, R. Thomas, C. Yerger, Five coloring graphs on the Klein Bottle, in preparation. 13. L. Postle, N. Streib, C. Yerger, Pebbling graphs of diameter three and four, in preparation.
{"url":"http://www.carlyerger.com/publications.html","timestamp":"2014-04-19T15:08:23Z","content_type":null,"content_length":"5266","record_id":"<urn:uuid:b4cd4f2f-df12-415d-9578-c1be284aef16>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
South Lawrence, MA Geometry Tutor Find a South Lawrence, MA Geometry Tutor ...I use my love of math to find the best approach for each student. That being said, I try to make math fun. Math is a challenging subject for many people out there! 8 Subjects: including geometry, calculus, algebra 1, algebra 2 ...There are numerous tools and facts that you need to understand and then you build on them throughout the year. At the end of this course, a student should be able to make and critique logical arguments and calculate missing parts of a geometric diagram. Pre-calculus is the study of function families and their behavior. 23 Subjects: including geometry, physics, calculus, statistics ...Taught Algebra II as a separate course, and also as part of the pre-calculus courses taught in long term substitute assignments. Taught this to freshmen and sophomores at Reading High School in long term temp assignments (usually maternity leaves). This subject is taught at the middle school le... 8 Subjects: including geometry, algebra 1, algebra 2, trigonometry ...I have experience teaching, lecturing, and tutoring undergraduate level math and physics courses for both scientists and non-scientists, and am enthusiastic about tutoring at the high school level. I am currently a research associate in materials physics at Harvard, have completed a postdoc in g... 16 Subjects: including geometry, calculus, physics, biology ...If you or your child would like to learn how to play chess, please contact me. I've been involved in a before school chess club at an elementary school in the Merrimack Valley area for more than 6 yrs. I'm an intermediate player, and have an accounting/math teaching background. 24 Subjects: including geometry, reading, accounting, algebra 1 Related South Lawrence, MA Tutors South Lawrence, MA Accounting Tutors South Lawrence, MA ACT Tutors South Lawrence, MA Algebra Tutors South Lawrence, MA Algebra 2 Tutors South Lawrence, MA Calculus Tutors South Lawrence, MA Geometry Tutors South Lawrence, MA Math Tutors South Lawrence, MA Prealgebra Tutors South Lawrence, MA Precalculus Tutors South Lawrence, MA SAT Tutors South Lawrence, MA SAT Math Tutors South Lawrence, MA Science Tutors South Lawrence, MA Statistics Tutors South Lawrence, MA Trigonometry Tutors Nearby Cities With geometry Tutor Bradford, MA geometry Tutors Cherry Brook, MA geometry Tutors Hampton Beach, NH geometry Tutors Hastings, MA geometry Tutors Kendal Green, MA geometry Tutors Lawrence, MA geometry Tutors North Andover geometry Tutors North Sudbury, MA geometry Tutors Pingryville, MA geometry Tutors Plum Island, MA geometry Tutors Riverside, MA geometry Tutors Salisbury Beach, MA geometry Tutors Silver Hill, MA geometry Tutors Stony Brook, MA geometry Tutors Ward Hill, MA geometry Tutors
{"url":"http://www.purplemath.com/South_Lawrence_MA_Geometry_tutors.php","timestamp":"2014-04-20T09:03:58Z","content_type":null,"content_length":"24354","record_id":"<urn:uuid:c03a8df5-5ec1-4d96-b237-c2e72ce183df>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
The Boat Moves Alonga Path Defined By , Where Is ... | Chegg.com determine the radial component of the boat's velocity at the instant t = 1 {\rm s}. The boat moves alonga path defined by , where is in radians. , where is in seconds, determine theradial component of the boat's velocity at the instant = 1 Answers (1) • The boat moves alonga path defined by , where is in radians. , where is in seconds, determine theradial component of the boat's velocity at the instant = 1 Rating:4 stars 4 stars 1 Anonymous answered 13 hours later After free trial, membership will automatically continue, but you can cancel at any time. Start Your Free Trial
{"url":"http://www.chegg.com/homework-help/questions-and-answers/boat-moves-alonga-path-defined-radians-seconds-determine-theradial-component-boat-s-veloci-q750706","timestamp":"2014-04-17T14:02:43Z","content_type":null,"content_length":"24731","record_id":"<urn:uuid:b54ade4e-25b7-4896-b383-9410622d0379>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Plandome, NY Math Tutor Find a Plandome, NY Math Tutor ...Statistics is pretty much the same regardless of the content area. I’ve been tutoring and teaching since 1985 when I received my master's degree in biostatistics at the University of Texas School of Public Health. I received a master's in epidemiology from Columbia University in 1992. 4 Subjects: including statistics, SPSS, SAS, biostatistics ...I have taught Elementary Math, Algebra, Finite Mathematics, PreCalculus, Introductory and Intermediate Statistics both online and offline at other CUNY and non-CUNY colleges, including The City College of CUNY, La Guardia Community College of CUNY, St. Francis College and Berkeley College, overa... 21 Subjects: including geometry, physics, economics, econometrics I am an Chemical Engineering Graduate (B. Tech - Honours) from Indian Institute Of Technology with a career spanning over 30 years. I have strong background in Physics and Mathematics/Statistics and I enjoy teaching and tutoring. 11 Subjects: including calculus, discrete math, differential equations, linear algebra ...I am well equipped to be Math, Science and Business Tutor. I have both Bachelor and Master in chemical engineering from City College of New York (CCNY) and MBA with project management concentration from DeVry University. The chemical engineering degree provided strong graduate foundation in mat... 21 Subjects: including calculus, chemistry, elementary (k-6th), Microsoft Excel I love math/science and love to share my enthusiasm for these subjects with my students. I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. 11 Subjects: including geometry, physical science, algebra 1, algebra 2 Related Plandome, NY Tutors Plandome, NY Accounting Tutors Plandome, NY ACT Tutors Plandome, NY Algebra Tutors Plandome, NY Algebra 2 Tutors Plandome, NY Calculus Tutors Plandome, NY Geometry Tutors Plandome, NY Math Tutors Plandome, NY Prealgebra Tutors Plandome, NY Precalculus Tutors Plandome, NY SAT Tutors Plandome, NY SAT Math Tutors Plandome, NY Science Tutors Plandome, NY Statistics Tutors Plandome, NY Trigonometry Tutors Nearby Cities With Math Tutor Albertson, NY Math Tutors Baxter Estates, NY Math Tutors Glen Head Math Tutors Glenwood Landing Math Tutors Great Nck Plz, NY Math Tutors Great Neck Math Tutors Great Neck Estates, NY Math Tutors Great Neck Plaza, NY Math Tutors Greenvale Math Tutors Kensington, NY Math Tutors Manhasset Math Tutors Manorhaven, NY Math Tutors Port Washington, NY Math Tutors Russell Gardens, NY Math Tutors Thomaston, NY Math Tutors
{"url":"http://www.purplemath.com/plandome_ny_math_tutors.php","timestamp":"2014-04-20T01:47:31Z","content_type":null,"content_length":"23973","record_id":"<urn:uuid:de23cb26-4654-4e01-82e6-209e0ecbe96d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
differential equation- wordy question 2- September 1st 2006, 07:16 PM differential equation- wordy question 2- A battery is being charged. The charging rate is modelled by dq/dt=k(Q-q), where q is the charge in the battery( measured in ampere hours) at time t( measured in hours), Q is the maximum charge the battery can store and k is at constant of proportionality. The model is valid for q> and equal to o.4Q. a. It is given that q=xQ where x is a constant such that x is between 1 and 0.4- inclusive of 1 and 0.4. Solve the differential equation to find q in terms of t. b. It is noticed that the charging rate halves every 40 minutes. Show that k=3/2 in2- notice 3/2 in2 is different from 3/2in2. c. Charging is always stopped when q=0.95Q. If T is the time until charging is stopped, show that T=2in(20(1-x) over 3in2 for the values between 0.4 and 0.95 inclusive of these two values. Please explain if you can as it is a word question. Explanation has equal value to working, probably more- September 2nd 2006, 06:34 AM Originally Posted by kingkaisai2 A battery is being charged. The charging rate is modelled by dq/dt=k(Q-q), where q is the charge in the battery( measured in ampere hours) at time t( measured in hours), Q is the maximum charge the battery can store and k is at constant of proportionality. The model is valid for q> and equal to o.4Q. a. It is given that q=xQ where x is a constant such that x is between 1 and 0.4- inclusive of 1 and 0.4. Solve the differential equation to find q in terms of t. I don't have much time today, but I can whip out a solution for a). Since $q = xQ$ thus $Q = \frac{q}{x}$ where x is a constant. So ${dq}{dt} = k(Q - q) = k \left ( \frac{q}{x} - q} \right ) = k \left (\frac{1}{x} - 1 \right )q$ where k and x are constant. The general solution of such a linear differential equation is of the form $q(t) = Ae^{bt}$ So $\frac{dq}{dt} = Abe^{bt}$ Plugging these into your differential equation gives: $Abe^{bt} = k \left (\frac{1}{x} - 1 \right ) Ae^{bt}$ Comparison shows that $b = k \left (\frac{1}{x} - 1 \right )$. So far we have: $q(t) = A \cdot exp \left [ k \left (\frac{1}{x} - 1 \right ) t \right ]$ Now, we know that the maximum charge on the battery is Q. Thus the limit of q(t) as t goes to infinity is Q. Note that b is a negative number (given the range of acceptable x values.) Thus the limit of the exponential function as t gets very large is 1. ie. $\lim_{t \to \infty} exp \left [ k \left (\frac{1}{x} - 1 \right ) t \right ] = 1$ Thus $\lim_{t \to \infty} q(t) = A$ and we know $q( \infty ) = Q$ according to the problem statement. Thus A = Q and your final solution is: $q(t) = Q \cdot exp \left [ k \left (\frac{1}{x} - 1 \right ) t \right ]$
{"url":"http://mathhelpforum.com/calculus/5293-differential-equation-wordy-question-2-a-print.html","timestamp":"2014-04-18T11:50:30Z","content_type":null,"content_length":"8713","record_id":"<urn:uuid:2d834b23-68d1-4edb-85ff-2707fa317471>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Peter Jossen ( CEU) Subgroups of Mordell--Weil groups and reduction mod p Let a_1, ..., a_n and b be nonzero rational numbers. A theorem of Schinzel states that if b can be written as a product of powers of the a_i modulo all but finitely many prime numbers, then b is a product of powers of the a_i. In my presentation I will show that the analogous statement for rational points on an elliptic curve holds as well. Let E be an elliptic curve over the field of rational numbers k, given, say, by an equation E: y^2 = x^3 + Ax + B with A, B in k. The Mordell--Weil theorem states that the rational points of E, i.e. the rational solutions of the equation E form a finitely generated, commutative group- the Mordell-Weil group. Let X be a subgroup of this group. If a rational point- i.e. solution P of E belongs to X modulo all but finitely many primes, does then P belong to X? This is indeed the case. In the presentation I will try to explain the analogy between this and Schinzel's theorem, and how one can attack this kind of problems in general. The talk is aimed at a general mathematical audience.
{"url":"http://www.renyi.hu/~seminar/FIKUSZ/pjossen.html","timestamp":"2014-04-17T00:49:23Z","content_type":null,"content_length":"2997","record_id":"<urn:uuid:b2969740-3bdf-47c1-bbd7-b1fdd0f8a0c9>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
"Derivative" is an easy to use free and open-source (LGPL v3) application to calculate mathematical derivatives with many features : • ordinary and partial derivatives from 1 to 3 variables; • multiple derivatives; • calculation of gradient, divergent, curl and laplacian in 3 dimensions and different coordinate systems (cartesian, cylindrical and spherical); • 5 simplification methods for non-numerical results; • option to show the calculation time; • option to show the not calculated derivative before the result; • 7 output types : simple, bidimensional, typesetting, LaTeX, MathML, C and Fortran; • copy&paste available in all entry fields and also in the result, allowing copying the result to anywhere; • history and line completion editing in the input expressions. "Derivative" runs on Maemo 4 (Nokia N800/N810), Maemo 5 (Nokia N900) or any smartphone/tablet/computer where Python, SymPy and PyQt are available. Future versions will use PySide and QML. SymPy is a computer algebra system (CAS) written in pure Python, it is free and open source project, see . SymPy last version (0.7.1, July 2011) is available for Maemo 4 (Diablo) and Maemo 5 (Fremantle), see the SymPy thread in Talk Maemo.org
{"url":"http://www.robertocolistete.net/Derivative/","timestamp":"2014-04-17T18:23:49Z","content_type":null,"content_length":"11932","record_id":"<urn:uuid:10c852e6-76a2-4fcd-93ab-f7b2749dc305>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Series Problem March 23rd 2010, 08:43 PM Series Problem Consider the following infinite series. $3 - 3x + 3x^2 - 3x^3 + 3x^4 - ...$ (a) For what values of x will the sum of the series be a finite value? (b) Find the value of the infinite series for x in the interval in part (a). Could somebody explain how this is done? I really don't understand where to even start... March 23rd 2010, 08:46 PM Consider the following infinite series. $3 - 3x + 3x^2 - 3x^3 + 3x^4 - ...$ (a) For what values of x will the sum of the series be a finite value? (b) Find the value of the infinite series for x in the interval in part (a). Could somebody explain how this is done? I really don't understand where to even start... It is a geometric series. March 23rd 2010, 09:35 PM I understand that, but I don't understand how to find the sum when x has to be in an interval from -1 to 1 (the answer to part a). How do I solve when x is an interval and not a set number? March 23rd 2010, 11:37 PM mr fantastic You should know then that a = 3 and r = -x. Now, what is the condition on r for an inifinite geometric series to have a finite value ....?
{"url":"http://mathhelpforum.com/calculus/135343-series-problem-print.html","timestamp":"2014-04-19T00:37:19Z","content_type":null,"content_length":"6340","record_id":"<urn:uuid:b34a23a7-9243-4ff3-be6c-afcbf6331afc>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Are there infinitely many additive prime numbers? up vote -5 down vote favorite Notation: For any $n\geq 1$ let $p_{n}$ be $n$-th prime number. Definition: A prime number $p_{n}$ is "additive" iff $p_{n}=\sum_{i<n} p_{i}$ Example: $5$ is an additive prime number. Question (1): What is the least additive prime number larger than $5$? Question (2): Are there infinitely many additive prime numbers? Question (3): If $p_{n}$ be an additive prime number, is $n$ a prime number too? nt.number-theory prime-numbers 5 This is more appropriate for Math StackExchange (i.e., it is not a research-level question, but rather seems to be something at the level of idle curiosity, though I have to confess that the motivation for wondering about it escapes me). – Marguax Oct 27 '13 at 23:14 4 Perhaps it's natural that when one asks five questions per day, they won't all be great. – Michael Zieve Oct 27 '13 at 23:50 6 How did this question manage to receive 4 upvotes? – Joseph Van Name Oct 28 '13 at 13:15 @JosephVanName: I think its really interesting. I introduced a very simple and natural property for a prime number which determines the number 5 "uniquely". It is rather unusual because in this kind of questions in number theory one can find no solution or infinitely many. Note that the answer is simple but not immediate, trivial or natural for researchers who are not professional number theorist. – Ali Sadegh Daghighi Oct 28 '13 at 15:34 For future reference, let me suggest how you can try to answer such questions on your own. Since your question is about prime numbers, the first thing to do is to see whether basic facts about 5 prime numbers provide an approach. There are many good references, for instance the wikipedia article on the topic, or anything you find via searching the web. Any such reference will point you to the Prime Number Theorem, which answers your question. It is better to try to find an answer on your own before posing a question to the world. – Michael Zieve Oct 28 '13 at 22:49 show 1 more comment closed as off-topic by Joseph Van Name, Andres Caicedo, Igor Pak, David White, Qiaochu Yuan Oct 28 '13 at 2:26 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question does not appear to be about research level mathematics within the scope defined in the help center." – Joseph Van Name, Andres Caicedo, Igor Pak, David White, Qiaochu Yuan If this question can be reworded to fit the rules in the help center, please edit the question. 1 Answer active oldest votes As $p_n$ is asymptotically $n\log n$, the inequality $p_n\geq p_{n-1}+p_{n-2}$ only has finitely many solutions. In particular, there are only finitely many additive prime numbers, and these are easy to find by more precise (standard) estimates for $p_n$. up vote 12 down Added. Here are more details. By the classical paper of Rosser-Schoenfeld (Illinois J. Math. 6 (1962), 64-94.) we have for $n\geq 6$ the bound $$ n\log n < p_n < n(\log n+\log\log n). vote accepted $$ From here it is easy to see that that the inequality $p_n\geq p_{n-1}+p_{n-2}$ implies $n\leq 7$, hence there are no additive prime numbers beyond the seventh prime, which is $17$. From here it follows by a manual check that $5$ is the only additive prime. What are those finite additive prime numbers? – Ali Sadegh Daghighi Oct 27 '13 at 23:09 2 What effort have you put into finding it? – Mariano Suárez-Alvarez♦ Oct 27 '13 at 23:17 13 I put some effort in this answer, so I would rather not delete it. – GH from MO Oct 27 '13 at 23:24 2 @MarianoSuárez-Alvarez: I wrote a simple program which checked the numbers less than $10^{6}$ but didn't find anything. – Ali Sadegh Daghighi Oct 27 '13 at 23:27 7 @AliSadeghDaghighi, it is generally best to include such information in the question, at the very least so that people do not waste time redoing the computation. – Mariano Suárez-Alvarez♦ Oct 27 '13 at 23:32 show 7 more comments Not the answer you're looking for? Browse other questions tagged nt.number-theory prime-numbers or ask your own question.
{"url":"http://mathoverflow.net/questions/146095/are-there-infinitely-many-additive-prime-numbers","timestamp":"2014-04-19T04:41:52Z","content_type":null,"content_length":"58488","record_id":"<urn:uuid:f84b8913-4a7a-47b5-b1fd-130da96fd20e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Westford, MA Statistics Tutor Find a Westford, MA Statistics Tutor ...Now I speak with my two young children in Arabic predominantly. First I would say that I've taught many, many hours of elementary school education. I now have two eighth graders that I tutor to and I received high marks for that. 47 Subjects: including statistics, chemistry, reading, calculus ...Seasonally I work with students on SAT preparation, which I love and excel at. I have worked successfully with students of all abilities, from Honors to Summer School. I work in Acton and Concord and surrounding towns, (Stow, Boxborough, Harvard, Sudbury, Maynard, Littleton) and along the Route 2 corridor, including Harvard, Lancaster, Ayer, Leominster, Fitchburg, Gardner. 15 Subjects: including statistics, physics, calculus, geometry ...During my MBA studies, I worked as a Research and Teaching Assistant for Marketing and Strategy courses. I researched material for class lectures and exams, and I graded exams. I have taken and excelled at numerous sociology classes during my years of schooling. 67 Subjects: including statistics, reading, English, calculus ...More than any other discipline, academic physics emphasizes the use of diagrams, cartoons, and "before and after" drawings, but eventually when the drawing is done, you still have to answer the question! I'll be able to use my experience in multiple tutoring disciplines to tailor our physics ses... 23 Subjects: including statistics, chemistry, calculus, writing I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level. 14 Subjects: including statistics, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Westford_MA_statistics_tutors.php","timestamp":"2014-04-18T04:02:11Z","content_type":null,"content_length":"24057","record_id":"<urn:uuid:3c4d1b6c-748a-41a0-a905-09d0fc225f9c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Tricky induction problem with trig June 21st 2013, 08:04 PM #1 Apr 2010 Tricky induction problem with trig Okay, I'm working on a really tricky induction problem where I have to show the following (assuming that $\sin{(x/2)}eq0$): $\sin{(x)}+2\sin{(2x)}+\ldots +n\sin{(nx)}=\frac{\sin{[(n+1)x]}}{4\sin^2{(x/2)}}-\frac{(n+1)\cos{[(2n+1)(x/2)]}}{2\sin{(x/2)}}$ for all natural numbers $n$. I've already verified the base case (with the help of ebaines), so now I'm trying to show that the above equation implies this equation: $oindent{ \sin{(x)}+2\sin{(2x)}+\ldots +(n+1)\sin{[(n+1)x]}= \frac{\sin{[(n+2)x]}}{4\sin^2{(x/2)}}-\frac{(n+2)\cos{[(2n+3)(x/2)]}}{2\sin{(x/2)}}}$ I almost hesitate to ask this as to me it seems either impossible or a huge amount of work, but perhaps that's not so for those with more experience. Using the first equation we can write the second equation as $oindent{\sin{(x)}+2\sin{(2x)}+\ldots +(n+1)\sin{[(n+1)x]}=\frac{\sin{[(n+1)x]}}{4\sin^2{(x/2)}}-\frac{(n+1)\cos{[(2n+1)(x/2)]}}{2\sin{(x/2)}}+(n+1)\sin{[(n+1)x]}}$ so what we desire to show is that So I'm unsure where to go from here. We already have the denominators set up neatly, so I don't want to mess with that. I thought about distributing the $n+1$ in the third term and then adding the resulting two terms to the first two terms in some manner, but I'm not sure how the arguments will work out. Can anyone help? Re: Tricky induction problem with trig In order to see if there's some kind of strategy with this, I'm trying the desired identity with $n=3$, giving How would you go about showing this sort of thing? For example, is there a way of writing $\sin{(5x)}$ in terms of functions with an argument of $4x$? Last edited by Ragnarok; June 23rd 2013 at 09:21 PM. Re: Tricky induction problem with trig Okay before anyone wastes time doing this I just found the answer on Chegg. Thanks anyway! Re: Tricky induction problem with trig Hi Ragnorak I have a solution to this problem, took me an hour or so ! What method of solution did Chegg use ? If different, I'll post mine. Re: Tricky induction problem with trig Nice! I am far too lazy to type up Chegg's whole solution but the main steps were (starting from the LHS of the desired identity, last one in my first post): Got a common denominator. Rewrote $4(n+1)\sin{[(n+1)x]}\sin^2{(x/2)}$ in the numerator as $2(n+1)\sin{[(n+1)x]}2\sin^2{(x/2)}=$ $=2(n+1)\sin{[(n+1)x]}(1-\cos{(x)})$ (half-angle formula). Distributed $1-\cos{(x)}$. Added $\sin{[(n+1)x]}$ terms. Factored $-2(n+1)$ out of remaining terms. Used product identies on $\sin{(x/2)}\cos{[(2n+1)(x/2)]}$ and $\sin{[(n+1)x]}\cos{(x)}$ terms. Used a difference identity on $\sin{[(n+1)x]}-\sin{[(n+2)x]}$. Simplified and divided by denominator. Re: Tricky induction problem with trig Okay, I used the same identities but attacked it in a different way, (having failed to find my way through the induction approach). Start by letting $S=\sin x + 2\sin 2x + 3\sin 3x + \dots + k\sin kx.$ Then, multiplying by $2\cos x,$ $2S\cos x = 2\sin x\cos x + 2(2\sin 2x \cos x)+3(2\sin 3x \cos x) +\dots + k(2\sin kx \cos x)$ $=\sin 2x + 2(\sin 3x+\sin x)+3(\sin 4x+\sin 2x) + 4(\sin 5x + \sin 3x)+\dots + k(\sin(k+1)x + \sin(k-1)x),$ $=2\sin x + 4\sin 2x+ 6\sin 3x +\dots + (2k-2)\sin (k-1)x +(k-1)\sin kx + k\sin (k+1)x.$ Now subtract the original series, $2S\cos x - S = \sin x + 2\sin 2x + 3\sin 3x +\dots +(k-1)\sin (k-1)x + (k-1)\sin kx + k\sin (k+1)x - k\sin kx,$ $= S+ (k-1)\sin kx + k\sin (k+1)x -2k\sin kx,$ $= S - \sin (k+1)x +(k+1)\sin (k+1)x -(k+1)\sin kx.$ $2S\cos x - 2S = - \sin (k+1)x +(k+1)\sin (k+1)x -(k+1)\sin kx,$ or, multiplying both sides by a negative sign, $2S(1-\cos x)= \sin (k+1)x -(k+1)\sin (k+1)x +(k+1)\sin kx,$ $4S\sin^{2}(x/2)=\sin (k+1)x -(k+1)(\sin (k+1)x - \sin kx),$ $=\sin(k+1)x-(k+1)2\cos((2k+1)(x/2))\sin (x/2).$ So, finally $S = \frac{\sin(k+1)x}{4\sin^{2}(x/2)}-(k+1)\frac{\cos((2k+1)(x/2))}{2\sin(x/2)}.$ Hope there aren't any misprints ! Re: Tricky induction problem with trig Very interesting! I'm going to give it a week and then try this problem again, see if I remember anything. June 23rd 2013, 09:17 PM #2 Apr 2010 June 23rd 2013, 10:00 PM #3 Apr 2010 June 26th 2013, 06:27 AM #4 Super Member Jun 2009 June 26th 2013, 10:39 AM #5 Apr 2010 June 26th 2013, 03:01 PM #6 Super Member Jun 2009 June 26th 2013, 03:35 PM #7 Apr 2010
{"url":"http://mathhelpforum.com/trigonometry/220062-tricky-induction-problem-trig.html","timestamp":"2014-04-18T01:59:54Z","content_type":null,"content_length":"56437","record_id":"<urn:uuid:3a12edf9-e5ab-43d0-8b75-a0f64916e46a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH Publications of (and about) Paul Erdös Zbl.No: 223.10005 Autor: Erdös, Paul; Turán, P. Title: On some problems of a statistical group theory. V. (In English) Source: Period. Math. Hung. 1, 5-13 (1971). Review: [Part IV, Acta math. Acad. Sci. Hungar. 19, 413-435 (1968; Zbl 235.20004).] Let S[n] be the symmetric group of n elements. It is well known that the number of conjugacy classes of S[n] is p(n) the number of partitions of n. Let H be an element of S[n] O(H) its orders which only depends on the conjugacy class of H. P(H) denotes the greatest prime factor of O(H). The authors prove the following theorem: For almost all H (i.e. for all H except for o(p(n)) of them) we have |P(H)-({\sqrt{6n} \over 2 \pi} log n-{\sqrt{6n} \over \pi} log log n )| < \omega (n) \sqrt n where \omega(n) tends to infinity as slowly as we please. [See also the authors, Acta. Math. Acad. Sci. Hung. 18, 151-163 (1967; Zbl 189.31302).] Classif.: * 11P82 Analytic theory of partitions 20P05 Probability methods in group theory 05A17 Partitions of integres (combinatorics) 20B35 Subgroups of symmetric groups 20B30 General theory of symmetric groups 00A07 Problem books Citations: Zbl 235.20004; Zbl 235.20003 © European Mathematical Society & FIZ Karlsruhe & Springer-Verlag │Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │ │Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │ │Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
{"url":"http://www.emis.de/classics/Erdos/cit/22310005.htm","timestamp":"2014-04-20T05:46:09Z","content_type":null,"content_length":"4653","record_id":"<urn:uuid:2efe6ead-67a2-411f-ad0d-fdafb031fa79>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2011 [00508] [Date Index] [Thread Index] [Author Index] Re: Loop problem • To: mathgroup at smc.vnet.net • Subject: [mg123064] Re: Loop problem • From: Bill Rowe <readnews at sbcglobal.net> • Date: Tue, 22 Nov 2011 05:35:15 -0500 (EST) • Delivered-to: l-mathgroup@mail-archive0.wolfram.com On 11/21/11 at 4:27 AM, puya.sharif at live.com (P Shar) wrote: >Hey guys, i need a loop (that does the following) and can't figure >out what to do.. >I need a set of lists {i,j,k,l} i=E2=89 j=E2=89 k=E2=89 l, i=E2=89= j=k=E2=89 l, i=E2=89 j=E2=89 k=l, i=E2=89 k=E2=89 j=l >where 0 < i,j,k,l < 3. >So basically all the cases where the first condition holds and all >the cases there the second holds etc.. >Easiest would be to get the output as a matrix with the {i,j,k,l}'s >as rows. >Any ideas (or at least where to start)? First, I assume the range for i,j,k,l is {0,1,2,3} i.e., you meant less than or equal rather than strictly less than. If you did mean strictly less than, then your conditions lead to a null set for all cases. There are only two integers between 0 and 3 and you require a minimum of 3 distinct integers. So with that, the first case (all four distinct) is done by The remaining conditions can all be achieve with variations of: Cases[Tuples[{0, 1, 2, 3}, 3], _?(Length@Union[#] == 3 &)] /. {a_, b_, c_} -> {a, b, b, c} Here, I use Tuples to generate all possible {i,j,k} with each >=0 and <=3 Cases to select only those with distinct values and a replacement rule to insert a copy of one in the right position. No explicit loops needed.
{"url":"http://forums.wolfram.com/mathgroup/archive/2011/Nov/msg00508.html","timestamp":"2014-04-17T18:38:05Z","content_type":null,"content_length":"26261","record_id":"<urn:uuid:a8c4eccd-7fec-4efb-85d1-1e95377f02ac>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Another trigonometric equation July 6th 2009, 04:49 PM #1 Oct 2008 Another trigonometric equation If I had the equation 2cosx + sinx = 1 then I could express it as √5 cos (x-26.6) = 1. I have no problem when it's like this However, if it was 2sinx + cosx = 1, how could I do it? would it be √5 sin (x - 63.4) = 1? If I go further to find the values of x here, I calculate sin^(-1) of 1/√5 which is 26.6, and then for x I get 90 and 396.8, but because it's not under 360 I take that value away and get 36.8. So, I said x was 36.8 and 90. Whenever I put these back into the equation though, it doesn't result in 1, it results in two. Does anyone know why this is and can point out where I'm going wrong? Thanks if you can help If I had the equation 2cosx + sinx = 1 then I could express it as √5 cos (x-26.6) = 1. I have no problem when it's like this However, if it was 2sinx + cosx = 1, how could I do it? would it be √5 sin (x - 63.4) = 1? If I go further to find the values of x here, I calculate sin^(-1) of 1/√5 which is 26.6, and then for x I get 90 and 396.8, but because it's not under 360 I take that value away and get 36.8. So, I said x was 36.8 and 90. Whenever I put these back into the equation though, it doesn't result in 1, it results in two. Does anyone know why this is and can point out where I'm going wrong? Thanks if you can help sinA*cosB + cosA*sinB = sin(A+B) $2\sin x+ \cos x =1$ Let $2 = r \cos y$ and $1=r \sin y$ $r^2 = \sqrt{2^2+1^2}=\sqrt{5}$ $\tan y = \frac{1}{2} \Rightarrow y = 26.6^{\circ}$ so it becomes, $r[\sin x \cos y+\cos x \sin y]=1$ $\sin (x+y)=\frac{1}{r}$ $\sin (x+26.6)=\frac{1}{\sqrt 5}$ Did you get your mistake now?? July 6th 2009, 06:09 PM #2 Super Member Jun 2009 July 6th 2009, 06:25 PM #3 Aug 2008
{"url":"http://mathhelpforum.com/trigonometry/94551-another-trigonometric-equation.html","timestamp":"2014-04-16T10:52:02Z","content_type":null,"content_length":"38819","record_id":"<urn:uuid:471ed5f0-f767-49b3-bb8d-9f5cff509b6b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
Toss a Coin Six Times Date: 02/07/98 at 16:59:43 From: Ruth Beldon Subject: Coin tossing probabilities A. Suppose a coin is tossed 6 times. What is the probability that 6 heads will occur? (Answer: 1/64) B. What is the probablity that 3 heads will occur? (Book answer: 5/16) 6/3 x 1/2 to 3rd power x 1/2 to 3rd power = 20x1/8x1/8 = 5/16 C. X = 2 6/2x 1/2 squared x 1/2 to 4th = 15x1/4x1/16 = 15/64 My question is: where did the 20 come from in part B and the 15 in part C? How was this answer arrived at? Thank you, R. Beldon Date: 02/07/98 at 18:29:05 From: Doctor Mitteldorf Subject: Re: Coin tossing probabilities Dear Ruth, The way you calculate probabilities for n coin tosses is to count the different ways (different combinations) that the event you're looking at could happen. Say there are 6 tosses. The first toss can be either heads or tails. The second can be either heads or tails. 2*2 = 4. The third can be either heads or tails... so you end up with 2^6 = 64 possibilities. Only one of these has all heads. But there are more ways that you could get 3 heads. It could be the first, second, and third, or the first, second and fourth that are heads. Or maybe the first, second and fifth. Here's a complete list: That's 20 possibilities out of 64, or 20/64 = 5/16. The answer is related to Pascal's triangle. The 6th row is The numbers add up to 64, and the middle one is 20. There is a formula for these numbers, which your book is referring to: rth number in nth row of Pascal Triangle (counting from zero): (n-r)! r! In your case, 6*5*4*3*2*1 in the numerator, 3*2*1 and 3*2*1 again in the denominator. -Doctor Mitteldorf, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/56589.html","timestamp":"2014-04-19T08:40:12Z","content_type":null,"content_length":"6855","record_id":"<urn:uuid:6214d5c3-95a1-49a5-bc0a-263e4e15a8c3>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Complete Meaning and Complete ^ Dictionary | Wikipedia | Synonyms | Quotation | News Complete Meaning and Definition WordNet (r) 2.0 complete adj 1. having every necessary or normal part or component or step; "a complete meal"; "a complete wardrobe"; "a complete set pf the Britannica"; "a complete set of china"; "a complete defeat"; "a complete accounting" [ant: incomplete, incomplete] 2. perfect and complete in every respect; having all necessary qualities; "a complete gentleman"; "consummate happiness"; "a consummate performance" [syn: consummate] 3. having all four whorls or principal parts--sepals and petals and stamens and carpels (or pistils); "complete flowers" [ant: incomplete] 4. highly skilled; "an accomplished pianist"; "a complete musician" [syn: accomplished] 5. without qualification; used informally as (often pejorative) intensifiers; "an arrant fool"; "a complete coward"; "a consummate fool"; "a double-dyed villain"; "gross negligence"; "a perfect idiot"; "pure folly"; "what a sodding mess"; "stark staring mad"; "a thoroughgoing villain"; "utter nonsense" [syn: arrant(a), complete(a), consummate(a), double-dyed(a), everlasting(a), gross (a), perfect(a), pure(a), sodding(a), stark(a), staring(a), thoroughgoing(a), utter(a)] 6. having come or been brought to a conclusion; "the harvesting was complete"; "the affair is over, ended, finished"; "the abruptly terminated interview" [syn: concluded, ended, over(p), all over, terminated] v 7. come or bring to a finish or an end; "He finished the dishes"; "She completed the requirements for her Master's Degree"; "The fastest runner finished the race in just over 2 hours; others finished in over 4 hours" [syn: finish] 8. bring to a whole, with all the necessary parts or elements; "A child would complete the family" 9. complete or carry out; "discharge one's duties" [syn: dispatch, discharge] 10. complete a pass [syn: nail] 11. write all the required information onto a form; "fill out this questionnaire, please!"; "make out a form" [syn: fill out, fill in, make out] Complete Meaning and Definition Webster's Revised Unabridged Dictionary (1913) Would you like to add your own explaination to this word 'Complete'? • Complete: To be complete is to be in the state of requiring nothing else to be added. Complete may also refer to: Complete (Lila McCann album) ... • Complete metric space: In mathematical analysis , a metric space M is said to be complete (or Cauchy) if every Cauchy sequence of points in M has a limit that ... • Complete game: In baseball , a complete game (denoted by CG) is the act of a pitcher pitching an entire game himself, without the benefit of a relief ... • Complete graph: In the mathematical field of graph theory , a complete graph is a simple graph in which every pair of distinct vertices is connected by ... • Turing completeness: language , or cellular automaton ) is said to be Turing complete if and only if such system can simulate any single-taped Turing machine . ... • Completeness: In general, an object is complete if nothing needs to be added to it. This notion is made more specific in various fields. Logical completeness ... * There are many methods for predicting the future. For example, you can read horoscopes, tea leaves, tarot cards, or crystal balls. Collectively, these methods are known as 'nutty methods.' Or you can put well-researched facts into sophisticated computer models, more commonly referred to as a waste of time. - Scott Adams * There is no revenge so as forgiveness. - Josh Billings * Internet is so big, so powerful and pointless that for some people it is a substitute for life. - Andrew Brown Click here for more related quotations on 'Complete' • Complete Nutrition Brand Advocate Richard Moore Wins First Professional Long Drive Tournament OMAHA, Neb. -- Professional long-drive golfer and Complete Nutrition brand advocate Richard Moore won the Endless Summer Invitational Long Drive Event on March 31 in Costa Mesa, Calif., his first victory ... Read more on this news related to 'Complete' • Near-Complete T. Rex Skeleton Arrives at Smithsonian Joining a diverse roster of iconic American objects from Judy Garland's ruby slippers to the space shuttle Discovery, a nearly complete T. rex skeleton was welcomed to the Smithsonian this morning (April 15). The dinosaur is on loan to the Smithsonian's National Museum of Natural History for at least the next 50 years. Split up into many crates, the bones Read more on this news related to 'Complete' • ‘Complete the streets’ When Great Bend was first chartered, most people walked from point A to point B. It was the norm for home builders to install wide sidewalks and buffer strips between the walk and the road in most parts of the city. Read more on this news related to 'Complete'
{"url":"http://www.dictionary30.com/meaning/Complete","timestamp":"2014-04-20T13:19:18Z","content_type":null,"content_length":"27185","record_id":"<urn:uuid:b9fa6513-ae16-4714-ab22-48205d43ed97>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
question - My Math Forum December 14th, #3 2011, 09:17 PM Joined: Apr Re: question For the $10,000 you can just take it as is, using the 5% and 2 periods. The rate doesn't change during this time. From: USA For the $30,000, you need to first find the present value of where it will be in 4 years when the rate changes. That is, you have to "back into" stuff like this. We know it's worth Posts: 782 $30,000 in 7 years (FV). 4 years from now is a present value relative to the 7 years. So how many years will it be at 8%? First find the present value it will be at the 4-year mark, at 8%. Then take that value and find the present value of it for the first 4 years at the 5%. Does that make sense? Thanks: 1 You can't add the original numbers together as they are two individual things. Whether you'd want to add the answers together I don't know. The wording isn't clear on that point. Unless you're using a template that only has space for one number, I would list them individually.
{"url":"http://mymathforum.com/economics/23281-question.html","timestamp":"2014-04-20T05:42:11Z","content_type":null,"content_length":"30050","record_id":"<urn:uuid:fd77168e-2d87-4470-9fbc-68ea3ad82f19>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Double class I regularily deal with a huge number of elementary functions as rounding, transcendentals, exponential and logarithm and was unsatisfied with the average performance. So, I started to provide my own assembler subroutines executing with full floating-point performance on any Intel Pentium II+ or compatible system. On recent Pentium 4 computers, I was able to improve the average performance by two to five times. This archive is dedicated to any number cruncher using MATLAB 6.0 or newer. • An Intel-based computer architecture with an Intel Pentium II or better processor. • MATLAB 6.0 or newer. Unpack the folder '@double' from the archive. 1. The subfolder '@double' contains libraries that directly overload the built-in functions. This has the advantage, that existing MATLAB code automatically benefits from the increased performance. If '@double' is in a folder in your MATLAB path, all MATLAB functions benefit. If it is in a 'private' project folder, the project functions benefit. It is save to use only a subset of the class functions. Just say builtin(func,arguments) to call the original version of func. Please note that this works only for built-in functions - this means that which func reports "func is a built-in 2. You may rename the functions func into ffunc. Existing MATLAB code will work with the built-in functions unless you call ffunc. If you work with MATLAB R13 or newer, you may replace 'xor.dll' by 'xur.dll' for matching the representation of logical values (uint8). The libraries should always reside in a subfolder called '@double' to make sure they are not called for any data except of type double. They do not check the data type of passed arguments. Due to the overhead for calling external functions, the built-in functions work faster if called for matrices with less than about 4 to 8 elements. Therefore, if you know the size of your matrix, you can choose the appropriate function. The functions angle, mod and xor are not built-in but MATLAB scripts. Therefore, the functions within this package work much faster in any case as shown in the following figure. The function xur is an even faster version of xor. It can be used in any situation where its output, a logical (boolean) uint8 matrix, will not be transformed to double for computation. It serves in particular as a value selector in a statement like values=matrix(xur(a,b)). Figure 1: Benchmark on a 2GHz Pentium 4 Mobile system The results of the external functions slightly differ from the built-in functions. The accuracy of the external functions benefits from the floating-point registers with a 64bit mantissa on Intel processors. Intermediate values are kept in floating-point registers such that rounding takes place mostly once - when writing the result into the output matrix with a 53bit mantissa. In general, a statement of inverseFunction(function(value)) produces value with a relative error of less than 1.0E-13. The roundoff error of about 1.1E-16 leads to relatively important deviations in exponentiation. Note also that in particular addition/subtraction in the argument of a logarithm are critical operations due to a relative amplification of the rounding error. The logarithm itself works accurate over the full complex plane R x iR. For the sake of performance, the inverse transcendental functions are currently not implemented in that way. See also the summary about complex Transcendental functions: pi An exception is made for every periodic transcendental function, where the constant 2pi is truncated to a 53bit mantissa. This guarantees that a statement of the form sin(x) with real x always produces the expected value sin(rem(x,2*pi)) at the same accuracy. The reminder is computed explicitly and prescaled to a 64bit mantissa. Example 1: Copyright © Marcel Leutenegger, 2003-2007, École Polytechnique Fédérale de Lausanne (EPFL), Laboratoire d'Optique Biomédicale (LOB), BM - Station 17, 1015 Lausanne, Switzerland. This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; version 2.1 of the License. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. See '/FILES' in the source archive for a list of the original package contents. Any warranty is strictly refused. Don't rely on any financial or technical support in case of malfunction or damage. Comments are welcome. I will try to track reported problems and fix bugs. January 18, 2004 Initial release February 28, 2004 Bug fixed in rem(matrix,value): the result was stored back to the value causing an assertion failure. June 20, 2004 Service release thanks to a bug reported by Tom Minka in exp(matrix): the output was NaN for infinite input. This bug fix made me think about affine inputs. They are now all handled as particular values for two reasons: 1. The output is well defined. In cases with more than one possible solution, the function limit towards that value has been used. 2. The performance does not degreade but increases considerably (table look-up instead of calculation). Any floating-point operation producing an affine result tries to throw an exception. Even if the exception is masked as within MATLAB, the processor calls up an internal assist slowing down the computation to about 10%-20% of normal performance. May 2, 2005 Service release. Bux fixed in [c,s]=cis(matrix): the second output was set to an invalid dimension. Version information included. June 28, 2007 Source code released under GNU Lesser General Public License (LGPL) version 2.1. May 12, 2008 Optimized routine for calculating the exponential according to Agner Fog, "Optimizing subroutines in assembly language," at Copenhagen University College of Engineering. September 17, 2008 Dimensions of (empty matrix*scalar) matched on MATLAB's behaviour. Thanks to Paolo Bardella for reporting the issue. Known bugs May 23, 2005 Bug report by Tom Minka: with MATLAB 7, the double functions are also called for sparse input returning a broken output. As a workaround, you may rename the functions or include/exclude them in/from the MATLAB path. September 17, 2008 Dimensions of function(empty matrix,scalar) do not match with MATLAB's behaviour. Downloading these files, you accept the copyright terms. MATLAB is a registered trademark of The MathWorks, Inc. Pentium II is a registered trademark of Intel Corporation. Other product or brand names are trademarks or registered trademarks of their respective holders.
{"url":"http://documents.epfl.ch/users/l/le/leuteneg/www/MATLABToolbox/DoubleClass.html","timestamp":"2014-04-17T07:05:52Z","content_type":null,"content_length":"11959","record_id":"<urn:uuid:05c5a276-01d4-4f6b-ba17-6a7076a4f9b5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: using linux instead of osf Scott Locklin (locklin@lonsdale.lbl.gov) Thu, 28 Nov 1996 00:25:12 -0600 (CST) On Wed, 27 Nov 1996, Michal Jaegermann wrote: > > > > Don't quote me on the "Taylor" part. Actually, upon closer inspection > > there is a comment that simply says "polynomial of degree 13" Well, it's "taylor" to first order anyway; first dozen odd significant digits are plain ole Taylor Series around zero. > One cannot be sure without running tests with an actual code in a > "real life" situations, but I strongly suspect that a rational > approximation of a degree 3 could fare better. Digital doesn't seem to think so; there are no divisions in an objdump of trigonometric portions of DPML (if this is against the rules; somebody please tell me, because otherwise it's a nice way of getting hints on fast AXP code). I didn't look closely enough to see if they're using the same polynomial coefficients as the fdlibm code, but my guess is that they probably are (if the fdlibm guy did his homework correctly & got the "minimax" polynomial right). > > ---it doesn't say how they arrived at that polynomial. > There are some ways. :-) Things like Chebyshev polynomials and Pade > approximations should ring a bell. Manuals for Maple and Mathematica > are likely not a bad place to look for hints of a use in practice. > A venerable "Computer Approximations", by Hart, probably can also > be consulted. There's also Numerical Recipes, which is now online. The relevant section (in Fortran; sorry) section 5.11 onwards To unsubscribe: send e-mail to axp-list-request@redhat.com with 'unsubscribe' as the subject. Do not send it to axp-list@redhat.com Feedback | Store | News | Support | Product Errata | About Us | Linux Info | Search | JumpWords No Frames | Show Frames Copyright © 1995-1997 Red Hat Software. Legal notices
{"url":"http://www.alphalinux.org/archives/axp-list/1996/November1996/1291.html","timestamp":"2014-04-19T08:07:47Z","content_type":null,"content_length":"5656","record_id":"<urn:uuid:2b652783-b8f1-4d56-aef8-6d76bb0701d7>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Noncommutative projective surfaces Seminar Room 1, Newton Institute We discuss recent work, joint with Toby Stafford, which describes a large class of noncommutative surfaces in terms of blowing up. Specifically, let A be a connected graded noetherian algebra, generated in degree 1, and suppose that the graded quotient ring Q(A) is of the form k(X)[t, t^-1; sigma] for some projective surface X with automorphism sigma. Then we prove that A can be written as a naive blowup of a projective surface Y birational to X. This enables one to obtain a deep understanding of the structure of such algebras.
{"url":"http://www.newton.ac.uk/programmes/NCG/seminars/2006122014001.html","timestamp":"2014-04-17T00:52:29Z","content_type":null,"content_length":"4396","record_id":"<urn:uuid:828ae2bf-f56c-4e46-9671-4d949ea59ebb>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Orwell Temperaments Commas and Generators In terms of octave and generator, Orwell is defined by a generator which is a somewhat sharp (four or five cents worth) subminor third. While of some interest as a five-limit temperament, with a comma of 2109375/2097152, it really comes into its own as a seven-limit temperament, where it joins that important class of temperaments (including Meantone, Magic, and Kleismic) whose generators (in this case the approximate 7/6) are consonances of the system. As a seven-limit temperament, it is defined by the commas <1728/1715, 225/224>. The 1728/1715 comma is of particular significance for Orwell, since it tells us that three Orwell generators are an approximate 8/5, and so Orwell is closely allied to the planar temperament this comma defines. It also does well as an eleven-limit temperament, where in its best incarnation it is defined by the commas <99/98,121/120,176/175>. Here 99/98 is of particular significance, telling us that two generators give us our approximate 11/8. Melodic Properties of Orwell Orwell has a nine-note MOS defined by a chain of eight generators. It may be compared to the diatonic scale of Meantone in the sense that it strikes a happy medium between the blandness of the ten-note Miracle MOS and the unevenness of the ten-note Magic MOS, and its good melodic properties are one of the best features of Orwell.If we call the large step of this MOS L, and the small step s, then the MOS has step sizes LsLsLsLss, where s is a flat secor (in the sense that it serves as both a 16/15 and a 15/14) and the L, in the eleven-limit version, can be regarded as an 11/10; in any case Ls gives us an Orwell generator of about 7/6. The seven and nine limit harmonic resources of this scale may be considerably improved by permuting its steps; of particular interest here are the variant scales LssLLLsss, LssLsssLsL and LsLssLssL. Mapping to Primes Here's the matrix defining the mapping to primes for Orwell: (1 0) (0 7) (3 -3) (1 8) (3 2) To get a complete 7- or 11-limit chord, you need a string of at least eleven generators. Equal Temperaments covering Orwell Orwell in its five- and seven-limit versions is done very well by the generator 19/84 (and hence the name.) For the eleven-limit, 12/53 is preferable. The sequence of generators 7/31 < 19/84 < 12/53 < 17/75 < 5/22 shows the range of Orwell equal temperament generators worth considering. An Example for Orwell For an audible example of Orwell, you may listen to the Trio for Clarinet, English Horn and Banjo Except for a brief excursion into chromaticism, it is in the nine-note Orwell MOS, in the 12/53 version.
{"url":"http://lumma.org/tuning/gws/orwell.html","timestamp":"2014-04-18T15:39:39Z","content_type":null,"content_length":"4669","record_id":"<urn:uuid:5ddc8521-ccd1-44e3-9827-e68b35c33573>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f0f416be4b04f0f8a918df2","timestamp":"2014-04-19T04:55:19Z","content_type":null,"content_length":"36867","record_id":"<urn:uuid:f9eff572-5a5a-41e3-9f6e-7382372fe2c7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Narrow Search Earth and space science Sort by: Per page: Now showing results 1-10 of 30 During the last sunspot cycle between 1996-2008, over 21,000 flares and 13,000 clouds of plasma exploded from the Sun's magnetically active surface. These events create space weather. Students will learn more about space weather and how it affects... (View More) Earth through reading a NASA press release and viewing a NASA eClips video segment. Then students will explore the statistics of various types of space weather storms by determining the mean, median and mode of a sample of storm events. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence. (View Less) During the last sunspot cycle between 1996-2008, over 21,000 flares and 13,000 clouds of plasma exploded from the Sun's magnetically active surface. Students will learn more about space weather through reading a NASA press release and viewing a NASA... (View More) eClips video segment. Then students will explore the statistics of various types of space weather storms by determining the mean, median and mode of different samples of storm events. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence. (View Less) Students will learn about the Transit of Venus through reading a NASA press release and viewing a NASA eClips video that describes several ways to observe transits. Then students will study angular measurement by learning about parallax and how... (View More) astronomers use this geometric effect to determine the distance to Venus during a Transit of Venus. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence. (View Less) In this problem set, learners will analyze an image of carbon dioxide emissions in the continental US in a given year to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science... (View More) and Climate Change. (View Less) In this problem set, learners will become familiar with two measures of electricity: watts and kilowatt-Hours. They will calculate the electrical consumption of several household items, such as appliances, as well as its cost. Answer key is... (View More) provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change. (View Less) In this problem set, learners will calculate the parts-per-thousand measure for different scenarios, including ocean salinity as depicted in the image included. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth... (View More) Science and Climate Change. (View Less) In this problem set, learners will use a diagram of carbon fluxes, which shows the sources that contribute to current atmospheric carbon dioxide levels, to answer a series of questions. Answer key is provided. This problem is part of Earth Math: A... (View More) Brief Mathematical Guide to Earth Science and Climate Change. (View Less) In this problem set, learners will analyze a table of the length of day (hours) and the number of days per year on Earth in past eras. They will calculate future values, plot some of the data and identify the rate of increase. Answer key is... (View More) provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change. (View Less) In this problem set, learners will analyze a table of electrical consumption of appliances when not in use and consider the total consumption in kilowatt-hours (kWh), associated cost and their own consumption when appliances are in "instant-on" or... (View More) "stand-by" mode. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change. (View Less) This is a booklet containing 37 space science mathematical problems, several of which use authentic science data. The problems involve math skills such as unit conversions, geometry, trigonometry, algebra, graph analysis, vectors, scientific... (View More) notation, and many others. Learners will use mathematics to explore science topics related to Earth's magnetic field, space weather, the Sun, and other related concepts. This booklet can be found on the Space Math@NASA website. (View Less) «Previous Page123 Next Page»
{"url":"http://nasawavelength.org/resource-search?facetSort=1&topicsSubjects=Mathematics%3ANumber+and+operations&educationalLevel=Middle+school&resourceType=Instructional+materials%3AProblem+set","timestamp":"2014-04-18T18:00:04Z","content_type":null,"content_length":"71639","record_id":"<urn:uuid:1ef6705a-ce30-4fb3-9507-9977e2d14eac>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Independent and Dependent Variables Examples Page 3 Fred and John are brothers. John, who is the older of the two and was largely deprived of attention as a young boy, is constantly trying to one - up Fred. Both brothers enjoy clothes shopping. However, because of John's competitiveness, every time Fred buys a new pair of jeans, John will go out and buy a pair that is $10 more expensive. Both pairs still look exactly the same, and all John is actually doing is demonstrating a lack of fiscal responsibility. Express in symbols the relationship between the amount of money Fred spends on a pair of jeans and the amount of money John spends on a pair of jeans. Let F be the amount of money Fred spends, and J the amount of money John spends.
{"url":"http://www.shmoop.com/algebraic-expressions/independent-dependent-variables-examples-3.html","timestamp":"2014-04-16T08:25:14Z","content_type":null,"content_length":"37484","record_id":"<urn:uuid:d9de0a46-6f98-405b-96e6-4942d6c142a1>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Independent Study (SpSt 997) Draft Implications of Magnitude Distribution Comparisons between Trans-Neptunian Objects and Comets Dr. C. A. Wood, Advisor December 1, 1995 Implications of Magnitude Distribution Comparisons between Trans-Neptunian Objects and Comets The population of observed trans-neptunian objects has a fairly well-defined magnitude distribution, however, the population of observed short-period comets does not. This analysis of the population distributions of observed trans-neptunian objects (T NOs) and short-period comets (SPCs) indicates that the observed number of TNOs and SPCs is insufficient to judge conclusively whether the trans-neptunian objects are related to the short-period comets or whether the TNOs are part of the Kuiper belt from w hich the SPCs are believed to be derived. Differences in the population distributions of TNOs and SPCs indicate that the TNOs are not representative of the Kuiper belt as a whole, even if they are part of the Kuiper belt. Further analysis of the populat ion distributions of comets and the TNOs has provided additional information and some predictions about the populations’ characteristics. This derived information includes the facts that: the six brightest SPCs for which H[10] magnitudes have be en calculated likely belong to the Oort cloud population (long-period) of comets instead of the Kuiper belt population of comets; Pluto and Charon are likely to belong to the TNO population instead of the major planet population; there are likely to be ~1 0^9 TNOs in the Kuiper belt, including many Pluto-sized objects, massing a total of ~10^25 kg in all. Several unusual objects have been discovered orbiting the Sun beyond the orbit of Neptune. Prior to these discoveries, a disk of cometary bodies, called the Kuiper belt, had been hypothesized to exist beyond the orbit of Neptune. This disk of com etary bodies is believed to be the immediate source of all short-period comets. The recently discovered trans-neptunian objects are now believed to be the first few members of the hypothesized Kuiper belt of comets to be discovered. I have attempted to judge whether the magnitude distributions of the observed trans-neptunian obje cts and the short-period comets are similar or different enough to state whether the observed trans-neptunian objects are indeed members of the Kuiper belt and whether the observed Kuiper belt is the source of short-period comets. Descriptions and judgments of the type made herein are important for several reasons: 1) descriptions of observed phenomena are important in and of themselves as information about the universe (and particularly the solar system) in which we live; 2) identification of the observed trans-neptunian objects as members of the Kuiper belt would confirm the Kuiper belt hypothesis; or negative identification could help reject the Kuiper belt hypothesis; 3) FONT> descriptions of the TNOs, comets, and the Kuiper belt would help establish their relationship to one another; and 4) information about portions of our solar system, particularly the outermost, least-changed portions, helps to ref ine and develop ideas about how our solar system formed and how it has evolved. Analysis of the orbital characteristics of comets reveals some interesting features. In particular, the distributions of cometary osculating orbital elements:^1 the semimajor axis distribution, orbital eccentricity distribution, and orbi tal inclination distribution of comets, all contain a common feature that is of particular interest; they display an asymmetry that seems to indicate two distinct populations of comets. Figure 1a shows the semimajor axis distribution of comets as it is often shown; the number of comets within each incremental semimajor axis range is plotted on a semi-log graph versus the inverse of the semimajor axis, 1/a, rather that the semim ajor axis, a, itself. Comets with 1/a values close to zero have long orbital periods, whereas comets with larger 1/a values have shorter orbital periods. Those comets with negative 1/a values appear to have unbounded, hyperbo lic orbits, and those with 1/a = 0 exactly appear to have marginally unbounded, parabolic orbits.^2 Comets with 1/a £ 0 have been observed rather poorly and have equally poorly determined orbits; these c omets most likely have highly elliptical, nearly parabolic orbits with 1/a ³» 0.^2 The semimajor axis distribution of comets appears to be divided into two populations: those that have very small 1/a val ues, and those that have 1/a values more evenly distributed up to 1/a » 1. Figure 1b shows the orbital eccentricity distribution of comets plotted throughout its allowable range. Comets that have orbital eccentricities, e, close to zero have nearly circular orbits, whereas comets with higher eccentricities have ellip tical (or even nearly-parabolic or hyperbolic) orbits. Here again, the orbital inclination distribution of comets appears to be divided into two populations: those that have very low i, and those with i more evenly distributed from 0 to 1.< /P> Figure 1c shows the orbital inclination distribution of comets plotted throughout its allowable range. Comets that have orbital inclinations, i, close to zero orbit close to the orbital plane of, and in the same direction as, the major bodies i n the solar system, whereas comets with higher inclinations orbit in all directions about the Sun. Yet again, the orbital inclination distribution of comets appears to be divided into two populations: those that have very low i, and those with i more evenly distributed from 0° to 180°. The nature of the bimodality of the semimajor axis, orbital eccentricity, and orbital inclination distributions of comets becomes apparent when the orbital elements of comets belonging to the short or long, a, i, or e groups are co mpared with one another. It has been found that those comets with short a are also those which have small e and small i, and those comets with long a are also those which have larger e and larger i. Some example s of individual comets and their orbital elements are given in Table 1. No such bimodality appears in the distributions of the remaining spatial orbital elements; this lack of bimodality would be expected from the azimuthal symmetry of the solar system. The orbital element distributions of comets for the remaining orbital elements: the longitude of the ascending node, W, and the argument of perihelion, w, are plotted in Figures 1d and 1e, respectivel y. Table 1: Comet Orbital Elements The bimodal distribution in the semimajor axes, orbital eccentricities, and orbital inclinations of comets is evidence for two separate populations of comets.^2 One population consists of those with short a, small e, and small i. These short-a comets necessarily have short orbital periods as well. Members of the short-a, small-e, small-i comet population are therefore referred to as short-period comets. The other, long a, larger e , and larger i population of comets necessarily have long orbital periods. Members of the long-a, larger-e, larger-i comet population are referred to as long-period comets. A somewhat arbitrary division point of P = 200 y ears has been chosen with which to classify comets; those comets with P < 200 years are referred to as short-period comets, whereas those comets with P ³ 200 years are referred to as long-period comets.^2 The division of comets into two distinct populations can be understood as a consequence of the origin hypothesis of comets and the solar system as a whole. The solar system is believed^3 to have originated from a large cloud of gas and dust in interstellar space. Triggered by some as-yet-unknown event, this cloud of mostly hydrogen and some helium gas began to collapse under the influence of its own gravity. This collapsing cloud had a small, random amount of angular momentum which prevent ed it from collapsing uniformly but instead allowed it to collapse more along its axis of rotation than perpendicular to its axis of rotation. This form of collapse produced a thin rotating disk of gas and dust with a much larger concentration of matter at the center of the disk. The central, high-density portion of the disk collapsed and ignited to form the Sun. Immediately surrounding the infant Sun, smaller concentrations of gas and dust collapsed and swept up matter surrounding them to form the pla nets, moons, and asteroids. Matter in the outermost, colder portions of the disk condensed into a multitude of cometary bodies. The hypothesized disk of cometary bodies left over from the original condensation of the solar system is named the Kuiper belt, after Gerard Kuiper who first postulated its existence in 1951.^4 The Kuiper belt is believed to extend from outs ide the orbit of Neptune to 100 AU or more outwards from the Sun (see Figure 2). Occasional interactions of the Kuiper belt objects with the outer planets Neptune, Uranus, Saturn, and Jupiter, are believed^5 to have ejected a relatively small proportion of the Kuiper belt objects into highly eccentric orbits. These eject ions would occur randomly in all directions, and would boost the semimajor axes of the ejected objects to very large values. The ejected bodies are believed^ 5 to have formed a spherical cloud of cometary bodies extending to tens of thousands of AU outwards from the Sun (see Figure 2). The hypothesized spherical cloud of cometary bodies is named the Oort cloud, after Jan Oort who first postulated its existence.^5 The existence of the Oort cloud was first hypothesized by Oort in 1950^5 based upon an analysis of 1/a values, as updated here in Figure 1a. Oort postulated that the spherical cloud of comets (which would bear his name) was the immedi ate source of all long-period comets. The large semimajor axes, random inclinations, and random arguments of perihelion of the long-period comets strongly indicate an origin in a large spheroidal cloud, many thousands of AU across. Similar arguments based upon the orbital element distribution of comets were made by Kuiper^5 to postulate the existence of the Kuiper belt. The Kuiper belt is believed to be the immediate source of all short-period comets. The relatively s mall semimajor axes and orbital inclinations of short-period comets strongly indicate an origin in a thin disk just beyond the orbit of Neptune. The existence of the Kuiper belt was simply a hypothesis constructed to explain the origins of comets until a series of discoveries began in 1977 when an object was discovered by Kowal^6 orbiting the Sun beyond the orbit of Saturn. The objec t orbited well beyond where any asteroid should be, in an orbit somewhat similar to that of a comet, but it didn't appear to be a comet. The object, temporarily named 1977 UB, was considered to be an unusual minor planet;^7 it was eventually gi ven the official designation (2060) and the name Chiron. This large, dark-red object would remain a lone anomaly until a second object, 1992 AD, was discovered 15 years later. 1992 AD, soon designated (5145) and named Pholus, also orbited in a way simil ar to a comet where no asteroids were to be found, and it didn't appear to be cometary. Pholus' orbit carried it from just within Saturn's orbit to just beyond Neptune's orbit. A new category of objects in the solar system had been found: the Centaurs. Since the discovery of Pholus, four more Centaurs have been discovered (see Table 2).^8 The Centaurs are believed^9 to be large comet progenitors which have been perturbed by other objects and injected into the in ner solar system from the Kuiper belt. The hypothesis that the Centaurs are cometary bodies has been partially confirmed by the recent discovery of gaseous CO emissions (a uniquely cometary trait) from Chiron as it approaches perihelion.^10 Si nce this discovery, (2060) Chiron has been given the additional designation of Comet 95P/Chiron.^10 Table 2: Kuiper Belt Candidate Objects The first object actually residing in the Kuiper belt was discovered in 1992. Object 1992 QB1 was found to orbit more than 40 AU from the Sun, more than 1.3 times as far from the Sun as Neptune. Five more trans-neptunian objects were discovered in 19 93; twelve more were discovered in 1994; and twelve trans-neptunians have been discovered so far in 1995. A total of 30 trans-neptunian, Kuiper belt candidate objects with at least approximate orbits have been discovered so far (see Table 2) .^8 The Hubble Space Telescope was recently used in an attempt to detect some of the fainter members of the Kuiper belt.^11 In a series of images covering a small region of the sky, the Hubble detected 29 trans-neptunian objec ts. Although these objects were found to be orbiting well beyond Neptune, the images were not sufficient to calculate orbits for the objects or to warrant assigning them designations. An illustration of the orbits of the outer planets, the Centaurs, and the TNOs is shown in Figure 3. Observational evidence, including the TNOs and Centaurs which have been discovered, tends to support the existence of the Kuiper belt. The hypothesis that the observed trans-neptunian objects are indeed the first few members of the Kuiper belt to be o bserved is examined in this study. Evidence to test this hypothesis has been obtained by comparing the observed magnitude distribution of the TNOs with the observed magnitude distribution of short-period comets. Procedure & Results I expect that the magnitude distributions of populations of objects will be similar if the populations are related to one another. For this reason, I have calculated magnitude, size, and mass distributions for the trans-neptunian objects and magni tude distributions for the short-period and long-period comets. Comparisons between the magnitude distributions of these populations have provided some insight into the relationships among these objects. I have derived extrapolations of these distributi ons based upon the magnitude and size distributions. The theoretical magnitude and mass distributions were derived from the theoretical size distribution as described below. The size distribution of any particular population of solar system bodies of one particular type should be of the form: where: c[>](r) is the cumulative number of objects with radius ³ r, c[>0] is a positive real constant, and s is a positive real constant. This distribution should form a straight line of negative slope when plotted on a log-log graph. The theoretical size distribution c[>](r) can be fitted to the observed size distribution data to provide an extrapolation into size regimes that are underrepresented or totally absent in the observational data. Care must be taken to fit the ideal distribution curve to those portions of the data sets which are not significantly underrepresented. In choosing a fit to the distributions throughout this study, I have used the least-squares fit to those data points, in the well-represented portion, which result in the highest correlation coefficient. Size measurements must be obtained for the population of objects under consideration before their size distribution can be calculated. Size measurements are not available for the trans-neptunian objects (TNOs) or for most comets, h owever. Size estimates must therefore be made based upon the objects' brightness’ and distances from the Sun and the Earth. Albedo values must be assumed for the TNOs in order to estimate their sizes. I have established the relationship between the magnitude and radius of an object by making several assumptions. The objects under consideration have been assumed to be spherical with a constant (average) albedo over their surfaces. The intensity of light reflected to the observer has been assumed to fall off as the square of both the Sun-to-object and object-to-Earth distances. I have derived the resulting magnitude-radius relationship to be: where: r is the object radius, R is the Sun-to-object distance, D is the object-to-Earth distance, a is the albedo, f is the fractional projected illuminated area that is visible from Earth, m[1] is the apparent magnitude, and g is a positive constant for the entire solar system. I have determined g for the solar system by averaging the values obtained for g for each of several solar system bodies. The solar system bodies used and each of their g values are given in Table 3. The average value of g (which was used hereinafter in this analysis) was found to be g = 656 km/AU^2. The standard deviation for this value of g is s = 40.5 km/AU^2. Three standard deviations were added to or subtracted from the nominal value of g as well as using the minimum or maximum albedos to obtain estimates for the minimum and maximum sizes of objects. The TNO size distributions for each of the minimum, nominal, and maximum albedo values: 0.01, 0.03, and 0.10, respectively, are shown in Figure 4a. These albedo values were chosen to reflect the observed albedos of Chiron, Pholus, Pluto, Charon, and Triton. Table 3: g Values of Solar System Bodies Extrapolations of the TNO size distribution to sizes not represented in the observed population may be made by extending the fitted curve described by Equation 1. Extrapolations to smaller sizes in particular can be used to estimate the total number o f TNOs that may be present. The extrapolated size distributions for the albedo values previously used are shown in Figure 4b. Extrapolations based upon the TNOs which have been observed would only indicate the number of TNOs within the distance range that has been observed; TNOs which may be more distant from the Sun and the Earth may not be detected. Extrapolations based up on the observed TNOs alone may therefore misrepresent the population of TNOs in the hypothesized Kuiper belt. This misrepresentation may be corrected by calculating and similarly extrapolating the magnitude distribution of the TNO population. I have derived the expected magnitude distribution of a population of objects, which conform to the size distribution described by Equation 1 and whose magnitude-radius relationship is described by Equation 2, as outlined in Appendix A. The population of objects is assumed to be uniformly distributed (the number of objects of a given size per unit volume is constant) throughout a (negligibly) thin disk which extends from an inner radius of R[i] to an outer radius of R[o]. The point of view of the observer is also assumed to be sufficiently close to the Sun so that D » R. The derived magnitude distribution of such a population of objects is given as: where: c[<](m[h]) is the cumulative number of objects with "heliocentric" magnitude £ m[h] (the "heliocentric" magnitude, m< SUB>h, is the apparent magnitude which would be observed from the viewpoint at the Sun), b is a positive constant parameter which is based upon the size of the disk, and the other parameters are as defined previously. < /DIR> The magnitude distribution described by Equation 3, including appropriate values for a, R[i], and R[o], has been fitted to the observed magnitude distribution of the TNOs to obtain corrected va lues of c[>0] and s. The observed and extrapolated TNO absolute magnitude and heliocentric magnitude distributions are shown in Figures 5a, b, c, and d. The corrected values of c[>0] and s have been used in Equation 1 to recalculate the extrapolated size distribution of TNOs so as to reflect the contribution from unseen TNOs throughout the hypothetical Kuiper belt and to corre ctly reflect the magnitude distribution. The corrected value of s obtained from the observed heliocentric magnitude distribution is s = 2.97; for R[i] = 30 AU and R[o] = 200 AU, the corrected values of c [>0] obtained from the observed heliocentric magnitude distribution are c[>0] = 2.87´10^9, 3.13´10^10, 2.77´10^11, for a = 0.10, 0.03, 0.01, respectively. Extrapolations of the corrected size distribution of TNOs have been used to reestimate the cumulative number of trans-neptunian objects larger than a given size. The corrected extrap olations of the TNO size distribution is shown in Figure 6. Estimates of the mass distribution of TNOs have also been made based upon the corrected size distribution of TNOs. The expected mass distribution of objects with the properties previously assumed and the size distribution described by Equation 1 has b een derived as described in Appendix B. The mass density distribution of individual objects of any size throughout the disk, r(r,R), is assumed to be a constant, r, a nd the value of the parameter s is assumed to be less than 3 (a slightly different equation would be obtained for other values of s). The derived mass distribution of such a population of objects is given as: where: M[<](r) is the cumulative mass of all objects with radius £ r, r is a the (constant) mass density of each object, and the other paramet ers are as defined previously. The derived mass distribution of the TNOs is shown in Figure 7. The magnitude distributions of short-period comets should follow the same distribution as described by Equation 3, but with different values for the parameters c[>0], s, and b, as should all other di stinct populations of solar system objects, with the H[10] magnitudes of comets being used instead of (and equivalent to) their absolute magnitudes. The H[10] magnitudes of many comets have been obtained from the Houston Comet Catalo gue.^12 The H[10] magnitude distributions of short-period and long-period comets and fitted extrapolations to the distributions is shown in Figure The (uncorrected) trans-neptunian object (TNO) size distribution shown in Figure 4a displays some interesting features. In the well-represented, larger-radius portion, the distribution contains two distinct (log-log) linear portions, perhaps indic ating two distinct populations of objects. The three largest objects seem to fall along one population line, and the third- through the eighteenth-largest objects fall very well along another population line. I believe that the true population line for the TNOs (if there is indeed only one) probably lies somewhere in-between the two population lines apparent in the TNO size data because the larger-radius population line is due to only three data points; not very much upon which to make a case for multip le TNO The underrepresented portion of the TNO size distribution, where the distribution deviates from an ideal population distribution, contains a step-like effect in which small groups of data points seem to follow their own (log-log) linear population line s, with discontinuities between each of these data point groups. This step-effect could be due to the low precision of the TNO apparent magnitudes from which their sizes are estimated; quantization of the apparent magnitude data would lead to quantizatio n in the estimated sizes. I believe that when more accurate measurements of the TNOs are available, and when more individual TNOs have been observed, this quantization effect should vanish. The fitted extrapolations of the TNO size data, as shown in Figure 4b, provide an estimate for the total number of TNOs which may exist. Assuming the TNOs are relatively highly reflective (a = .10) and using three standard d eviations of error in my g estimate gives at least a billion (10^9) TNOs that have radii of at least 1 km. Assuming the TNOs are poorly reflective (a = .01) and again using three standard de viations of error in my g estimate gives up to a trillion (10^12) or more TNOs that have radii of at least 1 km. These estimates, while generally indicative of the great number of Kuiper belt members (far outnumbering any other population of objects in the Solar System), are not very precise and only give an order of magnitude estimate of the number of bodies populating the Kuiper belt and the Solar System. The TNO magnitude distribution, as shown in Figure 5, is somewhat better behaved than the TNO size distribution. The TNO heliocentric magnitude distribution (see Figures 5b and 5d) fits an extrapolated population line fairly well. The only anomaly th at is apparent is a series of four or five TNOs which all have roughly the same heliocentric magnitudes (23.0 to 23.1) instead of gradually getting brighter. It seems as if a portion of the population line were 'broken' off from the rest and 'bent' downw ards towards dimmer magnitudes. The adjustment in heliocentric magnitude which would be necessary to 'straighten-out' the TNO heliocentric magnitude distribution is well within the typical observational errors associated with the TNOs. Such a correction adjustment may occur when more precise measurements of the TNOs have been made. The TNO absolute magnitude distribution is shown in Figure 5a. Although the TNO absolute magnitude distribution is not expected to follow the magnitude distribution described by Equation 3, it is expected to follow a similar, (semilog) linear distribu tion. An extrapolation of the TNO absolute magnitude distribution is therefore shown in Figure 5c as well. The TNO absolute magnitude distribution shows the same features as the TNO size distribution in Figure 4a, as would be expected. The TNO size distribution extrapolation, corrected to reflect the observed TNO heliocentric magnitude distribution and to include the full extent of the Kuiper belt, is shown in Figure 6. This size distribution is based upon an assumed Kuiper disk ext ending uniformly from an inner radius, R[i], of 30 AU to an outer radius, R[o], of 200 AU. Adjusting the inner or outer boundaries of this hypothesized Kuiper disk would not affect the slope of the population distribution lines, but would only shift the population lines to greater or lesser cumulative numbers of objects. For these values of R[i] and R[o] (this value of R[o] is rather speculative), the estimated number of obj ects at least 1 km in radius is very close to that obtained from the uncorrected size distribution in Figure 4b; there are many billions (10^9) of TNOs at least 1 km in radius. One notable difference between the corrected and uncorrected size distribution extrapolations is apparent; the corrected size distribution extrapolation predicts substantially more TNOs of greater sizes. The corrected size distribution extrapolation i ndicates that several Pluto-sized objects (perhaps hundreds) exist throughout the Kuiper belt. This extrapolation bolsters the idea that Pluto and Charon are really trans-neptunian, Kuiper belt objects that were captured into their present, Neptune-reson ant orbit by indicating that there should be many other similar-sized objects as well; if so, one would expect a few of them to be occasionally captured into Neptune-resonant orbits, or by Neptune itself. The TNO mass distribution extrapolation is shown in Figure 7. This TNO mass distribution is based upon the same Kuiper disk used to generate the corrected size distribution extrapolation in Figure 6, with the constant mass density of individual TNOs e qual to 0.5 g/cm^3. I chose this particular density to be the same as the most likely density of comet Shoemaker-Levy 9 as determined by Asphaug^13 because short-period comets are believed to originate in the Kuiper belt, as described earlier. The TNO mass distribution extrapolation is remarkably flat; the slope of the TNO mass distribution is sufficiently close to 1 that, within the error of measurement, it could be greater than 1 or identically equal to 1. This corresponds to the value of s used in Equations 1, 3, and 4 being greater than or equal to 3, contrary to the assumption s = 3 used to derive Equation 4 in Appendix B. The alternate versions of Equation 4 which would result for s ³ 3 are similar enough to that resulting from s < 3 so as to not significantly change the cumulative mass distribution within the Kuiper belt. As for the total mass present in the Kuiper belt, the calculated TNO mass distribution extrapola tion indicates that there is approximately "a large terrestrial planet's worth" of mass (~10^25 kg) in the Kuiper belt. The H[10] magnitude distributions of short-period and long-period comets for which H[10] magnitudes have been calculated is shown in Figure 8a along with the extrapolated fits to their data. The long-period comets fit very well to an ideal population distribution line, with a sharp drop-off corresponding to the underrepresented and relatively underobserved data portion. The short-period comets however, do not fit to any particular population distribution line very well. There is a nearly (semilog) linear data portion within the main body of the data, shown along the solid population line in Figure 8b. This population line is far different from the population line obtained by fitting to the entire (supposedly) well-represented port ion of the short-period comet data. The population line fitted to the entire well-represented data portion is shown as the dotted line in Figure 8b. The six brightest short-period comets: P/Schwassmann-Wachmann 1, P/Olbers, P /Pons-Brooks, P/Halley, P/Sw ift-Tuttle, and P/Holmes, at H[10] magnitudes: 5.6, 5.5, 5.1, 4.6, 4.0, and 0.5, respectively, are much brighter than, and don’t follow the magnitude distribution trend of, the other short-period comets. Aside from the actual values of their per iods, these six comets fit very well into the long-period comet population; I suspect that they actually belong in the long-period comet distribution. Perhaps these six comets are not part of the same population of comets which are thought to originate i n the Kuiper belt, the short-period comets, but instead have been misclassified because of their orbital periods and they actually belong to the population of comets which are thought to originate in the Oort cloud, the long-period comets. Further analys is of the orbital elements and possible evolutionary history of certain short-period comets may indicate that some short-period comets evolved into their present orbits from long-period, Oort cloud-originating, orbits; I suspect that such short-period com ets may have once been long-period comets. The best-fit population lines for the trans-neptunian objects, long-period comets (LPCs), short-period comets (SPCs), the best-fit population line and maximum slope population line for the short-period comets (without the six brightest comets), and the (base-10 semilog) slopes of these lines are shown in Figure 9. The slope of the population line for the long-period comets is nearly identical to that of the short-period comets with the six brightest SPCs removed from the SPC population and added to the LPC population. This not only supports the generally accept ed idea that the SPCs and LPCs are related, but also supports the idea that those six brightest SPCs discussed previously are indeed misclassified as part of the Kuiper belt (short-period) population of comets instead of the Oort cloud (long-period) popul ation of comets. I expected that the slope of the cumulative population distribution line of the TNOs would be similar to that of the short-period comets, thereby supporting the hypothesized relationship between TNOs, the Kuiper belt, and short-period comets. The popu lation lines for these groups of objects are significantly different however. The SPCs do not seem to have any clearly-defined population line (hence the three different population lines given), which makes it difficult to compare with the TNO population line. The TNO population line does not fall along a slope near to any of the three possible SPC population lines. Although many more H[10] magnitude measurements of many more short-period comets could refine the SPC population line to be close r to that observed for the TNOs, I interpret the population difference as follows: while the observed trans-neptunian objects are likely to be the first of many in the Kuiper belt, the trans-neptunian objects that have been observed may not be representat ive of the Kuiper belt as a whole. I expect that the trans-neptunian object population distribution will more closely approach that of the short-period comets once many more TNOs farther out in the Kuiper belt have been observed. I expect that the TNO p opulation distribution is shallower than what we have observed as yet; the small portion of the Kuiper belt that we have observed is more heavily populated than the main, outer portion of the Kuiper belt. I have determined the TNO incremental semimajor axis distribution, as shown in Figure 10a, to help support this hypothesis. The TNOs are strongly clustered around those semimajor axes which have resonant orbits with Neptune’s orbit. In fact, a third of the TNOs are clustered around the 2:3-resonance at 39.45 AU, as is the Pluto-Charon pair. This clustering at the 2:3-resonance in turn supports the notion that Pluto and Charon are Kuiper belt/trans-neptunian objects that were captured into resonance with Neptune. I expect that more TNOs will be found clustered around, or totally absent from, the strongest Neptune-resonant orbits. Outside of these resonant orbits, I expect the TNO population distribution the better reflect the short-period comet dis tribution. I have also plotted the TNO absolute magnitudes vs. their semimajor axes in Figure 10b to support the proposed segregation and/or clustering of TNOs. Not only is the resonant clustering apparent in Figure 10b, but the observed TNOs appear to be mostly in the 7.0 to 7.7 magnitude range, with the exception of eight TNOs of brighter or dimmer magnitudes clustered around the Neptune-resonant orbits. Five dimmer TNOs are clustered around the 2:3-resonance, two brighter TNOs are just beyond the 3:5-resonan ce, and one dimmer TNO seems out-of-place at the 3:4-resonance. As a result of this study, I conclude the following: • The population of observed trans-neptunian objects is not numerous enough or distributed far enough in distance from the Sun and Neptune, and the short-period comet population doesn’t have a well-enough-defined population distribution line, to judge w hether the observed trans-neptunian objects are related to the short-period comets. • Additional observations of trans-neptunian/Kuiper belt candidate objects and of short-period comets are needed in order to more conclusively establish their relationship, and the existence of the Kuiper belt. • The observed trans-neptunian objects may not be representative of the Kuiper belt as a whole, (assuming the Kuiper belt exists and they are part of it), probably because of resonance-effects with • The six brightest short-period comets for which H[10] magnitudes have been calculated, P/Schwassmann-Wachmann 1, P/Olbers, P/Pons-Brooks, P/Halley, P/Swift-Tuttle, and P/Holmes, likely belong to the Oort cloud (long-period) population of come ts instead of the Kuiper belt (short-period) population of comets because of their greater-than-expected magnitudes, their poor fit with the rest of the short-period comet cumulative magnitude distribution, and the similarity of the short-period and long- period comet cumulative magnitude distributions when these six comets are included in the long-period comet population instead of the short-period comet population. • Assuming the population distributions calculated for the observed trans-neptunian objects are approximately valid when applied to the Kuiper belt as a whole: □ There are many billions (10^9) of trans-neptunian objects throughout the Kuiper belt. □ There are several, perhaps hundreds, of Pluto-sized trans-neptunian objects throughout the Kuiper belt. □ The total mass of trans-neptunian, Kuiper belt objects is ~10^25 kg, about the same as a large terrestrial planet. □ The relatively large concentration of trans-neptunian objects around the 2:3 Neptune resonance, the prediction of several Pluto-sized objects throughout the Kuiper belt, and the similarity of Pluto’s orbit and the trans-neptunian object’s orbits, all support the idea that Pluto and Charon are simply the largest yet known trans-neptunian, Kuiper belt objects. 1 Orbital element data for comets obtained from: B. G. Marsden & G. V. Williams, Catalogue of Cometary Orbits 1995, Minor Planet Center, Smithsonian Astrophysical Observatory , Cambridge (1995) 2 L. Kresak, "Discoveries, Statistics, Observational Selection," L. L. Wilkening, ed., Comets, Arizona (1982), pp. 56-82 3 M. L. Kutner, Astronomy: A Physical Perspective, Wiley, New York (1987), pp. 507-512 4 R. Cowen, "Frozen Relics of the Early Solar System," Science News, vol. 137, pp. 248-250 (1990) 5 P. R. Weissmann, "Comets at the Solar System’s Edge," Sky & Telescope, pp. 26-29 (January 1993) 6 G. Hahn & M. E. Bailey, "The Changing Face of Chiron," Astronomy, pp. 45-48 (August 1990) 7 A. Stern, "Chiron: Interloper from the Kuiper Disk?," Astronomy, pp. 28-33 (August 1994) 8 Orbital and observational data for Centaurs and Trans-Neptunian objects obtained from: Minor Planet Center Computer Service, Minor Planet Circulars, and Minor Planet Electronic Circulars, Smithsonian Astrophysical Observatory, Cambridge (19 95) 9 W. A. Arnett, Internet URL: http://seds.lpl.arizona.edu/nineplanets/nineplanets/kboc.html, Lunar and Planetary Laboratory, University of Arizona (1995) 10 B. G. Marsden & G. V. Williams, International Astronomical Union Circular Number 6193 (28 July 1995) 11 R. A. Kerr, "Home of Planetary Wanderers is Sized Up for the First Time," Science, vol. 268, p. 1704 (23 June 1995) 12 J. R. Bollinger & C. A. Wood, Houston Comet Catalogue, Lunar and Planetary Institute, Houston (1984) 13 E. Asphaug & W. Benz, "Density of Comet Shoemaker-Levy 9 deduced by modelling breakup of the parent ‘rubble pile’," Nature, vol. 370, pp. 120-124 (14 July 1994) Appendix A Derivation of Cumulative Magnitude Distribution c[<](m[h]) Given a number density (number of objects of a particular radius, r, at a particular distance from the Sun, R, per unit volume): n(r,R) º n(r); (i.e. - independent of R) the total number of objects, n(r),of a given radius, r, throughout a thin disk of inner radius R[i] and outer radius R[o] is: and given a cumulative size distribution (cumulative number of objects greater than radius r): c[>0] and s are positive constants; Integrating this expression gives the following: Assuming the radius-magnitude relation: the expression for n(r) can be rewritten as the number density of objects of a particular apparent magnitude, m[1], at a particular distance from the Sun, R, per unit volume: Appendix A Assuming that D » R, f º 1 ((m[h]),of a given heliocentric magnitude, m[h], throughout a (negligibly) thin disk of inner radius R[i] and outer radius R[o] is: The cumulative heliocentric magnitude distribution (cumulative number of objects with heliocentric magnitude less than m[h]) is: Appendix B Derivation of Cumulative Mass Distribution M[<](r) Given a number density (as derived in Appendix A), n(r), of spherical objects of uniform individual mass density, r, distributed uniformly throughout a thin disk, the disk mass d ensity, m(r), of objects of a given radius, r, is: M[<](r), is: Recalling (from Appendix A) that e is an arbitra rily small constant. Integrating by parts to get M[<](r) gives: º 3, and ¹ 3. Appendix B For s º 3 then, the cumulative mass, M(r[1], r[2]), of objects with radii greater than r[1] and radii less than r[2] is: ; for 0 < s < 3, the cumulative mass, M[<](r), of objects with radii less than r is: and similarly, for s > 3, the cumulative mass, M[>](r), of objects with radii greater than r is: Copyright © 1995, Alexander J. Willman, Jr. All rights reserved. MS Word and Postscript versions of this paper are available. The URL of this page is: http://www.princeton.edu/~willman/tno-comet This page was last updated: 1997 May 1 You may contact Alex at: willman@princeton.edu
{"url":"http://www.princeton.edu/~willman/tno-comet/","timestamp":"2014-04-20T08:02:50Z","content_type":null,"content_length":"51787","record_id":"<urn:uuid:b857186e-1faf-4f4d-be5c-dc128e133bfb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Ready? Here we go. Start with a solenoid. Run current through it and you've got yourself an electromagnet. The field inside is given by the formula … At the same time, a solenoid is also a device for capturing flux. Φ[B] = NBA The static situation is certainly interesting enough, but when it comes to flux, what we really care about is the time rate of change. This is what gives us electromagnetic induction, or an induced electromotive force, or whatever you want to call it. This situation is described by Faraday's law. Let's walk through these equations again, but with a time-varying twist. A solenoid with a changing current running through it will generate a changing magnetic field. This changing magnetic field is then captured by the very solenoid that created it. A captured field is called flux and a changing flux generates an emf — in this case, a self-induced or back emf. ℰ = − dΦ[B] = − N ⎛ μ[0] N dI ⎞ A dt ⎝ ℓ dt ⎠ Rearranging things a bit gives us this equation … which may not look like much, until you realize that the terms in the first fraction are largely determined by the geometry of the solenoid. Had we chosen a different configuration of wires, the same basic thing would have happened. The self-induced emf in a circuit is directly proportional to the time rate of change of the current (dI/dt) multiplied by a constant (L). This constant is called the inductance (or more precisely, the self inductance) and is determined by the geometry of the circuit (or more commonly, by the geometry of individual circuit elements). For example, the inductance of a solenoid (as determined above) is given by the formula … The symbol L for inductance was chosen to honor Heinrich Lenz (1804–1865), whose pioneering work in electromagnetic induction was instrumental in the development of the final theory. If you recall, Lenz' law states that the induced current in a circuit always acts in a manner that opposes the change that created it in the first place. This observation is why there's a minus sign in all the different versions of Faraday's law. Lenz' gave us the minus sign and we honor him with the symbol L. Inductance is best defined by its role in the equation derived from Faraday's law of induction. Some people don't like this and prefer definitions written in the subject-verb-object form of a simple In English, we would read this as "self inductance (L) is the ratio of the back emf (ℰ) to the time rate of change of the current producing it (dI/dt)." As I already said, I don't particularly like this kind of definition, but it does help us to determine the appropriate units. ⎡ H = V = J/C = (kg m^2/s^2)/(A s) = kg m ^2 ⎤ ⎣ A/s A/s A/s A^2 s^2 ⎦ The unit of inductance is the henry, named after Joseph Henry (1797–1878), the American scientist who discovered electromagnetic induction independently of and at about the same time as Michael Faraday (1791–1867) did in England. Faraday published his findings first and so gets most of the credit. Henry also discovered self inductance and mutual inductance (which will be described later in this section) and invented the electromechanical relay (which was the basis for the telegraph). A circuit with a self inductance of one henry will experience a back emf of one volt when the current changes at a rate of one ampère per second. Inductance is something. Inductance is the resistance of a circuit element to changes in current. Inductance in a circuit is the analog of mass in a mechanical system. ℰ = − L dI ⇔ cause of = resistance × rate of ⇔ F = m dv dt change to change change dt inductive loop detector Traffic at some intersections is controlled with the aid of inductive loop detectors (ILD). An ILD is a loop of conducting wire embedded just a few centimeters below the pavement. When a vehicle passes through the field, it acts as a conductor, changing the inductance of the loop. A change in the loop's inductance indicates the presence of a car above. This information can then be used to activate traffic signals, monitor traffic flow, or issue automated citations. inductance is a function of geometry solenoid (A cross sectional area, N number of turns, ℓ length, n number of turns per length) Φ[B] = N B A Φ[B] = N µ[0]NI A Φ[B] = µ[0]AN^2 I dΦ[B] = µ[0]AN^2 dI dt ℓ dt L = μ[0]AN^2 = μ[0]Aℓn^2 coaxial conductors (a inner radius, b outer radius , ℓ length) Φ[B] = ⌠ B · dA b b Φ[B] = ⌠ µ[0]I ℓ dr = µ[0]Iℓ ⌠ dr ⌡ 2πr 2π ⌡ r a a Φ[B] = µ[0]ℓ ln ⎛ a ⎞ I 2π ⎝ b ⎠ dΦ[B] = µ[0]ℓ ln ⎛ a ⎞ dI dt 2π ⎝ b ⎠ dt L = µ[0]ℓ ln ⎛ a ⎞ 2π ⎝ b ⎠ toroid (A cross sectional area, R radius of revolution, N number of turns) Φ[B] = N B A Φ[B] ≈ N µ[0]NI A Φ[B] ≈ N µ[0]NA I dΦ[B] ≈ µ[0]AN^2 dI dt 2πR dt L ≈ μ[0]AN^2 rectangular loop (w width, h height, a wire radius) Φ[B] = N x w x w µ[0] ⎡ ⌠ y dr ⌠ x dr ⌠ y dr ⌠ x dr ⎤ Φ[B] = N NI ⎣ ⌡ + ⌡ + ⌡ + ⌡ ⎦ 2π r r r r a a a a Φ[B] = 2 µ[0]N^2 ⎡ y ln ⎛ x ⎞ + x ln ⎛ y ⎞ ⎤ I 2π ⎣ ⎝ a ⎠ ⎝ a ⎠ ⎦ dΦ[B] = µ[0]N^2 ⎡ y ln ⎛ x ⎞ + x ln ⎛ y ⎞ ⎤ dI dt π ⎣ ⎝ a ⎠ ⎝ a ⎠ ⎦ dt L = µ[0]N^2 ⎡ y ln ⎛ x ⎞ + x ln ⎛ y ⎞ ⎤ π ⎣ ⎝ a ⎠ ⎝ a ⎠ ⎦ This formula doesn't quite work since it ignores edge effects. You can find the exact formula (as well as scripts that will calculate inductance for you) online at several electrical engineering
{"url":"http://physics.info/inductance/","timestamp":"2014-04-24T08:51:47Z","content_type":null,"content_length":"44893","record_id":"<urn:uuid:2589e5c3-61f1-45fc-aecd-e600b1c6a9dd>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/supatsarah/answered","timestamp":"2014-04-20T11:05:30Z","content_type":null,"content_length":"108789","record_id":"<urn:uuid:93087437-d085-4093-b56c-d728f49c2f12>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Conversion To Metric (Metric Conversion) - Your Online Conversion To Metric Calculator Welcome to Conversion To Metric. Our website is all about fast and convenience. Whether you're researching or doing your assignments at work or in your school, you always want to get the quick and easy results for converting units from English to Metric or vice versa. Conversion to Metric is your online conversion calculator we are committed to give you simple ways to convert the units, of Area, Length, Weight, Volume, Pressure and Temperature just a click of your mouse. We are trying our best to be the solution of anyone need of converting different units online. Conversion of units refers to conversion factors between different units of measurement for the same quantity. The process of making a conversion cannot produce a more precise result than the original quoted figure. Appropriate rounding of results is normally performed after conversion. Metric systems A number of metric systems of units have evolved since the adoption of the original metric system in France in 1791. The current international standard metric system is the International system of units. An important feature of modern systems is standardization. Each unit has a universally recognized size. Natural systems While the above systems of units are based on arbitrary unit values, formalised as standards, some unit values occur naturally in science. Systems of units based on these are called natural units. Similar to natural units, atomic units (au) are a convenient system of units of measurement used in atomic physics. Conversion of units involves comparison of different standard physical values, either of a single physical quantity or of a physical quantity and a combination of other physical quantities Starting with: just replace the original unit [Z]i with its meaning in terms of the desired unit [Z]j, e.g. if , then: Now ni and cij are both numerical values, so just calculate their product.
{"url":"http://conversiontometric.com/","timestamp":"2014-04-19T17:02:11Z","content_type":null,"content_length":"5876","record_id":"<urn:uuid:b608c0d9-c611-489c-bfa0-97d8cbbfc989>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
How do I check if a functor has a (left/right) adjoint? up vote 53 down vote favorite Because adjoint functors are just cool, and knowing that a pair of functors is an adjoint pair gives you a bunch of information from generalized abstract nonsense, I often find myself saying, "Hey, cool functor. Wonder if it has an adjoint?" The problem is, I don't know enough category theory to be able to check this for myself, which means I can either run and ask MO or someone else who might know, or give up. I know a couple of necessary conditions for a functor to have a left/right adjoint. If it doesn't preserve either limits or colimits, for example, I know I can give up. Is there an easy-to-check sufficient condition to know when a functor's half of an adjoint pair? Are there at least heuristics, other than "this looks like it might work?" ct.category-theory adjoint-functors add comment 9 Answers active oldest votes The adjoint functor theorem as stated here and the special adjoint functor theorem (which can also both be found in Mac Lane) are both very handy for showing the existence of adjoint First here is the statement of the special adjoint functor theorem: Theorem Let $G\colon D\to C$ be a functor and suppose that the following conditions are satisfied: (i) $D$ and $C$ have small hom-sets (ii) $D$ has small limits (iii) $D$ is well-powered i.e., every object has a set of subobjects (where by a subobject we mean an equivalence class of monics) (iv) $D$ has a small cogenerating set $S$ (v) $G$ preserves limits Then $G$ has a left adjoint. Example I think this is a pretty standard example. Consider the inclusion CHaus$\to$Top of the category of compact Hausdorff spaces into the category of all topological spaces. Both of these categories have small hom-sets, it follows from Tychonoff's Theorem that CHaus has all small products and it is not so hard to check it has equalizers so it has all small limits and the the inclusion preserves these. CHaus is well-powered since monics are just injective continuous maps and there are only a small collection of topologies making any subspace compact up vote 54 and Hausdorff. Finally, one can check that $[0,1]$ is a cogenerator for CHaus. So $G$ has a left adjoint $F$ and we have just proved that the Stone-Čech compactification exists. down vote accepted If you have a candidate for an adjoint (say the pair $(F,G)$) and you want to check directly it is often easiest to try and cook up a unit and/or a counit and verify that there is an adjunction that way - either by using them to give an explicit bijection of hom-sets or by checking that the composites $$G \stackrel{\eta G}{\to} GFG \stackrel{G \epsilon}{\to} G$$ and $$F \stackrel{F \eta}{\to} FGF \stackrel{\epsilon F}{\to} F$$ are identities of $G$ and $F$ respectively. I thought (although I am at the risk of this getting excessively long) that I would add another approach. One can often use existing formalism to produce adjoints (although this is secretly using one of the adjoint functor theorems in most cases so in some sense is only psychologically different). For instance as in Reid Barton's nice answer if one can interpret the situation in terms of categories of presheaves or sheaves it is immediate that certain pairs of adjoints exist. Andrew's great answer gives another large class of examples where the content of the special adjoint functor theorem is working behind the scenes to make verifying the existence of adjoints very easy. Another class of examples is given by torsion theories where one can produce adjoints to the inclusions of certain subcategories of abelian (more generally pre-triangulated) categories by checking that certain orthogonality/decomposition properties hold. I can't help remarking that one instance where it is very easy to produce adjoints is in the setting of compactly generated (and well generated) triangulated categories. In the land of compactly generated triangulated categories one can wave the magic wand of Brown representability and (provided the target has small hom-sets) the only obstruction for a triangulated functor to have a right/left adjoint is preserving coproducts/products (and the adjoint is automatically triangulated). I wish I could double-vote! I think this is a very nice expansion of your original answer. – Andrew Stacey Nov 17 '09 at 8:27 2 Cool! Stone-Čech from abstract nonsense! – Harrison Brown Nov 17 '09 at 16:32 What a lovely answer. – Alex Collins Nov 17 '09 at 19:39 Well, in my limited experience Brown representability is very useful in pratice when you work with homotopy categories of model categories (which are complete cocomplete). The relevant functors on those big categories are sometimes hard to describe explicitely, although it may be possible to compute them on (some subclass of) compact objects. – Simon Pepin Lehalleur Aug 9 '10 at 18:49 add comment Lots of people-who-are-fond-of-adjoint-functor-theorems have responded to this post saying "adjoint functor theorems". Let me give a more mundane and rather different answer which fits much better into my world view. In my experience (which may differ from others), the true answer is that category theorists have these adjoint functor theorems which work well in some cases, but whose problem, as I see it, is simply that they are quite general. I am certainly not a category theorist but I am a "working mathematician" and my experience with these general theorems has been quite negative. For example take the notion of free groups. I was talking to another staff member here once and they said they'd just set an UG project on constructing free groups and I said "can't you just say 'done by adjoint functor theorem'?" and we both laughed because we knew it was true. And then I actually went and looked up SAFT and checked that it could construct free groups---and it can't, because the category of groups (hardly an exotic or esoteric category!) does not have a small cogenerating set. As I write, the top answer here has 15 votes and a lovely statement of SAFT but if you can't get free groups with it then you surely have to question its usefulness. In fact, although this sort of goes against the grain of what most people are saying here, in my up vote experience you would be crazy trying to invoke adjoint functor theorems to construct free groups: you're much better off making them yourself, not least because making them yourself will 43 down teach you much more about how the objects work. My experience is that things like SAFT are almost always justified with the statement "Stone-Cech compactification!". I have heard this justification, and only this, so often now that the excuse is wearing thin. So here is my answer: Yes, there are some general theorems. But if you're not a category theorist then in my experience they have limited applicability. You're better off thinking about things on your own, saying "hmm, here's an object with structure X, how might one naturally build an object with structure Y from it?". If you can go both ways you might well have constructed a pair of adjoint functors, and you could then try to check this by doing mathematics rather than waving general category-theory theorems which have specifically beein designed with a one-size-fits-all purpose in mind, and which don't apply to such exotic categories as the category of groups. 1 I agree with much of this. SAFT, in particular, seems to have few uses - though there are significant uses other than Stone-Cech. Regarding free groups, SAFT doesn't guarantee existence, but GAFT (General AFT = "the" AFT) does, as per my answer. – Tom Leinster Nov 18 '09 at 8:49 I agree that SAFT has somewhat limited applicability and that it is often a good idea to try to actually build adjoints (as one often needs to understand them not just know they exist) and that this can be very enlightening. But I think that knowing the general categorical machinery exists is worthwhile in case one wants to use it. I also think that the feel of the proof of AFT is not unlike how one often goes about building these things by hand at least in some sense. In my opinion there are lots of general machines to which this comment and your answer applies. – Greg Stevenson Nov 18 '09 at 10:06 1 Greg: I disagree with your statement "the proof of AFT is not unlike how one often goes about building these things by hand". But that might well be because we're sampling from very different sample spaces. Here's an example of adjoints that has been fundamental in my mathematical career: pushforward and pullback of sheaves of abelian groups on schemes. Here pushforward and pullback are adjoints but neither construction, it seems to me, looks (to me) at all like what goes into AFT. [continued in a sec] – Kevin Buzzard Nov 18 '09 at 18:52 Here's what this example looks like (to me). Given f:X-->Y a map of schemes, then for a sheaf F on X one does ones best to define a sheaf f_*F on Y. Conversely given a sheaf G on Y one does one's best to define a sheaf f^*G on X. Now, after making the constructions, one does some mathematics and proves that the constructions are adjoint. The point I'm trying to make is that AFT ideas do not, it seems to me, go into the constructions. The work is in checking adjointness and so in some sense the mathematics seems to be elsewhere and not AFTish. But perhaps you are thinking of other examples! – Kevin Buzzard Nov 18 '09 at 18:55 @buzzard: It is probably true that our sample spaces are somewhat different. I mostly wanted to make it clear that I didn't mean the exact proof; it is probably the first place a lot of people run across the idea that "if you need to build something, take everything close enough and then beat it into submission with limits/colimits" which I think is useful. Also your example looks different to me. The first step is to build the inverse image which I think is best done in general via left Kan extension. Then one can play around and find out what one wants for sheaves of modules. – Greg Stevenson Nov 18 '09 at 20:41 show 1 more comment Other people have mentioned the Adjoint Functor Theorems. Here's a different perspective. There's a famous Cambridge exam question set by Peter Johnstone: Write an essay on (a) the usefulness, or (b) the uselessness, of the Adjoint Functor Theorems. I agree with the undertone of the question: the Adjoint Functor Theorems (AFTs) aren't as useful as you might think when you first meet them. They're not useless: but my own experience is that the range of situations in which I've had no easy way of constructing the adjoint, yet have been able to verify the hypotheses of an AFT, has been very limited. Perhaps more useful than knowing the AFTs is knowing some large classes of situation where an adjoint is guaranteed to exist. Here are two such classes. ${}$1. Forgetful functors between categories of algebras. Any time you have a category $\mathcal{A}$ of algebras, such as Group, Ring, Vect, ..., the forgetful functor $\mathcal{A} \to \ mathbf{Set}$ has a left adjoint. What's not quite so well-known is that you don't have to forget all the structure; that is, the codomain doesn't have to be Set. For example, the functor $\mathbf{AbGp} \to \mathbf{Group}$ forgetting that a group is abelian automatically has a left adjoint. The functor $\mathbf{Ring} \to \mathbf{Monoid}$ forgetting the additive structure of a ring automatically has a left adjoint. The forgetful functor $\mathbf{Assoc} \to \mathbf{Lie}$, sending an associative algebra to its underlying Lie algebra (with bracket $[a, b] = a\cdot b - b \cdot a$) automatically has a left adjoint. (That might not look so much like a forgetful functor, but that's only because the bracket on an associative algebra isn't given as a primitive operation in the usual definition of associative algebra: it has to be derived from the other operations.) up vote The same can be said if you talk about topological groups, rings, etc, basically because Top has all small limits and colimits. 35 down vote All that is a consequence of the General AFT (= 'the' AFT in some people's usage). To my mind it's the principal reason why it's worth learning or teaching the General AFT. ${}$2. Kan extensions. Let $F: \mathbf{A} \to \mathbf{B}$ be any functor between small categories. Then there's an induced functor $$ F^{*}: {[\mathbf{B}, \mathbf{Set}]} \to {[\mathbf{A}, \ mathbf{Set}]} $$ defined by composition with $F$. (Here ${[\mathbf{B}, \mathbf{Set}]}$ means the category of functors from $\mathbf{B}$ to $\mathbf{Set}$, sometimes denoted ${\mathbf{Set}}^ The fact is that $F^{*}$ always has both a left and a right adjoint. These are called left and right Kan extension along $F$. The same is true if you replace $\mathbf{Set}$ by any category with small limits and colimits. This is really useful, though that might not be obvious. For example, suppose we're interested in representations of groups. A group can be regarded as a one-object category, and the category of representations of a group $G$ is just the functor category $[G, \mathbf{Vect}]$. Now take a group homomorphism $f: G \to H$. The induced functor $$ f^{*}: [H, \mathbf{Vect}] \to [G, \mathbf{Vect}] $$ sends a representation of $H$ to a representation of $G$ in the obvious way. And it's guaranteed to have both left and right adjoints. These adjoints turn a representation of $G$ into a representation of $H$, in a canonical way. I believe representation theorists call these the 'induced' and 'coinduced' representations, at least in the case that $G$ is a subgroup of $H$ and $f$ is the inclusion. Exercise: let $G$ be a group. There are unique homomorphisms $G \to 1$ and $1 \to G$, where $1$ is the trivial group. Each of these two homomorphisms induces a functor "$f^{*}$" between the category $[G, \mathbf{Set}]$ of $G$-sets and the category $[1, \mathbf{Set}] = \mathbf{Set}$ of sets. These two functors each have adjoints on both sides. So we end up with six functors and four adjunctions. What are they? The existence of Kan extensions is best derived from the theory of ends. In fact, ends allow you to describe them explicitly. 4 Is there a good place to learn about ends and coends with good (and not too sophisticated) examples? The page on the n-lab didn't help me that much because it goes very quickly into the enriched context and I think that your wonderful notes on category theory don't cover it. – Gonçalo Marques Aug 20 '10 at 13:52 add comment Obligatory n-lab reference: adjoint functor theorems Figuring out when functors had adjoints or not was something I did a lot of in Comparative Smootheology (section 8). Edit: Thought I'd expand on my comment to Andrew Critch's answer. A simple application of the Special Adjoint Functor Theorem is to universal algebra where it becomes: Theorem Let $D$ be a category that has finite products, is co-complete, is an $(E, M)$ category where $E$ is closed under finite products, is $E$-co-well-powered, and its finite products commute with filtered co-limits. Let $V$ be a variety of algebras. Let $F$ be a category with co-equalisers. Let $G : F \to DV$ (here, $DV$ is the category of $V$-algebra objects in $D$) be a covariant functor. Then the following statements are equivalent. 1. $G$ has a left adjoint. 2. The composition $|G| : F \to D$ of $G$ with the forgetful functor $DV \to D$ has a left adjoint. up vote 13 In particular, if we take $D$ to be $Set$, the category of sets, then we obtain the following (which can be found in any text book on universal algebra), in which the variety of algebras down vote $V$ is identified with its category of models in $Set$: Corollary Let $F$ be a co-complete category, $V$ a variety of algebras. For a covariant functor $G : F \to V$, the following statements are equivalent. 1. $G$ has a left adjoint. 2. $G$ is representable by a co-$V$-algebra object in $F$. 3. $|G|$ is representable by an object in $F$. (And, of course, all of this can be turned round for adjoint pairs of contravariant functors) In further particular, if $G : F \to V$ preserves underlying sets then $|G|$ is representable (by the initial $F$-object) and so $G$ has a left adjoint. Thanks Andrew. I would like to be able to double-vote your expansion; it is the other example I had in mind and I'm glad you added this since you've given a much cleaner and more complete statement than I would have. – Greg Stevenson Nov 17 '09 at 9:14 Thanks for this; I wish I could accept both your answer and Greg's, since they're both clear and useful! You have my +1, anyway. – Harrison Brown Nov 17 '09 at 16:35 I have to admit that I'm not overly happy with the "can only accept one answer" imposition by the software - I'd like to accept more than one on some of my questions. However, I think you made the right choice here. Stone-Cech outweighs anything else in my view. – Andrew Stacey Nov 17 '09 at 18:38 add comment I think frequently the easiest to check sufficient conditions are the following, which also make precise why "this looks like it might work" is so often successful: Theorem: A functor $G: C\to D$ is a right adjoint functor (i.e. has a left adjoint) if and only if for each object $Y$ in $D$, there exists an initial morphism $\phi_Y:Y\to G(I_Y)$ from $Y$ to $G$. Moreover, once you find such an initial morphism from each $Y$ to $G$, the association $Y\mapsto I_Y$ extends in a unique way to act on morphisms defining a functor $F: D\to C$, which moreover is left adjoint to the original functor $G$. This is well-known and easy to prove (well, depending on who you ask), but is non-trivial and involves many steps, which are explained relatively well here. (Essentially one is recovering the entire adjoint situation from just one functor and a unit transformation.) Once you know it, you can really take confidence in "follow your nose"-style adjoint construction. It doesn't involve having an "initial guess" for the left adjoint (as a functor), but actually constructs it for you in a way that is uniquely determined by the limited data of the initial morphisms --- really unique, not just up to natural isomorphism. up vote 10 down vote As an example of how this can be useful, think of the inclusion functor $U$ from $AbGrp$ to $Grp$. It's easy to see that any group $H$ has an abelianization $Ab(H) = H/[H,H]$ in $AbGrp$ with a map $H\to Ab(H)$ satisfying an initial (universal) property. But then by the above theorem, we can automatically extend this association in a unique way to act on morphisms as well, defining an abelianization functor $Ab$ which is left adjoint to the inclusion $U$. This same trick expedites the construction of adjoints in pretty much any situation you can think of. Edit: Sometimes this theorem is used as an alternative definition for adjoint functors in terms of universal morphisms. However you look at it, the real utility is knowing that this "weak", and in fact asymetric, condition actually implies the "stronger", symmetric definitions of adjoints via hom-sets or units/counits. I think it's really worthwhile to sift through the three different characterizations of adjoints given on Wikipedia. 2 To me, this result says "A functor has an adjoint if you can construct the adjoint." Freyd's adjoint functor theorem is more like "A functor has an adjoint if it looks like it has an adjoint" which seems more useful. – Andrew Stacey Nov 17 '09 at 7:39 No! It says "if you can build just a tiny bit of an adjoint, than the rest of it falls into place." I edited my answer to elaborate on this, because I think this fact doesn't get enough attention in general :) – Andrew Critch Nov 17 '09 at 8:04 1 Isn't this "just" the characterization in terms of initial objects in comma categories? The work lies in showing uniquness of various morphisms, I'd have thought. In the course I took (waves at TL) it wasn't clear that this was a faster way to construct the free group functor than, say, applying GAFT. It all depends how (over)confident one is that there really is an adjoint pair – Yemon Choi Nov 17 '09 at 8:09 Well secretly you are building the unit and showing it gives bijections on hom-sets. It is a nice fact, and I am all for using the various definitions of adjunction as the situation warrants, but I feel like this still boils down to a sufficient condition for G to have an adjoint is that you can build one. Am I missing something? (By the way - I don't want this to sound snide - it is a good answer) – Greg Stevenson Nov 17 '09 at 8:12 @Yemen, it is "just" that :) @Greg, not quite: you don't build the unit, you start with it. Then you build a functor to make the unit an actual natural transformation. – Andrew Critch Nov 17 '09 at 8:40 show 2 more comments There are two parts to this answer. 1. First, a functor must be continuous (cocontinuous) to have a left (right) adjoint. Most of the times, it is easy to check that a functor does not preserve (co)limits and thus it cannot have a a left (right) adjoint. 2. (co)continuity is not enough to actually prove that a functor has the required adjoint, but it is almost good enough. Let me elaborate on this. If you have a functor $F:P\to Q$ between complete partial orders (and thus cocomplete) then it is an easy exercise to construct a left adjoint by taking a $\sup$ of an appropriate subset. This can be generalized in a straightforward way to any functor by taking an appropriate (co)limit. The bad news is that this (co)limit is in general over a large category so it may not exist. This is where the so-called solution-set conditions come in; they are way to trim down this large category to a small one. As many people already said there are various variations of this type of conditions, from the more general but also very cumbersome to check solution-set condition to easier conditions which up vote combine some form of well-poweredness (each object has only a set of subobjects -- or quotients, whatever the case may be) with the existence of a small separating (or generating) set. One 10 down that guarantees the existence of a right adjoint and that sticks out particularly in my memory is the existence of a small dense subcategory -- check chapter V of Kelly's book on enriched vote category for the precise details. It is particularly memorable, because many categories come with god-given small dense categories like presheaf categories (courtesy of Yoneda) and sheaf categories (because dense composed with left adjoint is dense). Later edit: many people have complained about the limited usefulness of the adjoint functor theorems in that in many cases there is a direct, and thus much more enlightening, construction. But there are situations where such a direct construction is not available. One that I came across recently is when studying P. Johnstone's book Stone spaces, more precisely chapter III and the section on Manes' theorem about the monadicity of the category of compact Hausdorff spaces. In the sequel, P. Johnstone proves another result due to Manes, the fact that category of algebras (in the sense of universal algebra) in the category of compact Hausdorff spaces is also monadic. He remarks that one has to use the GAFT (and Beck's monadicity theorem) in this case, because there is no easy direct description of the left adjoint. Later in the book (somewhere, I am quoting from memory and do not have the book by me), he argues why there is no simple recipe for the left adjoint. add comment There's the Freyd Adjoint Functor Theorem. A right adjoint functor is continuous (commutes with limits) and a left adjoint functor is cocontinuous (commutes with colimits). So, if a functor has a left adjoint then it is continuous up vote 5 because it is a right adjoint. The adjoint functor theorem is a partial converse to this fact in the case that the domain category is complete (has all small limits) and the functor down vote satisfies a "smallness condition". I did say "easy-to-check..." :) – Harrison Brown Nov 17 '09 at 6:30 It isn't always "easy" although this depends on your definition ;) but they are surprisingly checkable in some situations. I'll try and think of some good examples where this is used and edit my answer to include them if that would help. – Greg Stevenson Nov 17 '09 at 6:35 I think by "easy" I mean "quickly verifiable?" Analogously to checking that some limit really doesn't commute, to check that it's not. But if you could illustrate an example, yeah, that'd be fantastic. – Harrison Brown Nov 17 '09 at 6:54 1 This is usually very easy to check. It's at least as easy as finding the initial morphism in Andrew Critch's answer because it's often easier to prove a general condition than find a specific instance. For example, for the adjoint of the inclusion functor you don't even need to know that abelianisation exists! It's enough that the inclusion AbGrp to Grp preserves the underlying sets and the adjunction follows for free. – Andrew Stacey Nov 17 '09 at 7:35 It's not all that hard to verify the definition directly in a lot of cases. Do you mean "not tedious"-to-check? – marc Nov 17 '09 at 18:16 show 1 more comment Here is one situation where left and right adjoints always exist. Let $\mathcal{C}$ and $\mathcal{D}$ be semisimple $k$-linear categories such that • $\mathcal{C}$ has only finitely many simple objects, and • for any simple object $X$ of $\mathcal{C}$ or $\mathcal{D}$, $\operatorname{End}(X) \cong k$. up vote 1 down The second condition is automatic if $k$ is algebraically closed. Under these circumstances, any $k$-linear functor $\mathcal{C} \to \mathcal{D}$ has both a left and a right adjoint, vote and their constructions are quite explicit. This fact is useful for working with fusion categories over an algebraically closed field (which automatically satisfy the conditions on $\mathcal{C}$). add comment Certainly the fundamental theorem of the addition and the existence theorem of Freyd. However, this theorem is the base of a whole literature (eg categories reflective), to investigate whether and under what circumstances the conditions of the theorem of Freyd occurred (See, for example, "Abstract and Concrete Categories - The Joy of Cats" in http://www.tac.mta.ca/tac/reprints/index.html, or the old H. Herrlich, Topologische Reflexionen und Coreflexionen. Lecture Notes in Mathematics 78). up vote 0 down vote But there Is another smart and strategic way to the existence or nature of a functor as added, because the theory of monads and tripleability, we can lift an adjoint to a functor posted a diagram with suitable assumptions (there are several theorems respect). See "Triples, algebras and cohomology Jonathan Mock Beck "in http://www.tac.mta.ca/tac/reprints/index.html add comment Not the answer you're looking for? Browse other questions tagged ct.category-theory adjoint-functors or ask your own question.
{"url":"http://mathoverflow.net/questions/5786/how-do-i-check-if-a-functor-has-a-left-right-adjoint?sort=oldest","timestamp":"2014-04-18T18:39:14Z","content_type":null,"content_length":"126033","record_id":"<urn:uuid:a5b15401-d5d2-4314-8ebc-2e95aca998fe>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
A strange exercise with matrix norm! March 22nd 2009, 01:15 AM #1 Mar 2009 A strange exercise with matrix norm! Let the maximum norm $||x||_{\infty}=max|x_i|$ (example: $||(2,-4,1)||_{\infty}=4)$ How can we compute the corresponding matrix norm defined as: $||A||_{\infty}=max\frac{\|Ax||_{\infty}}{\|x||_{\i nfty}}$ where x is not equal to 0 if $A=\begin{pmatrix}1 & 2\\3 & -4 \end{pmatrix}$ I would be grateful if someone show me a step-by-step solution. Thanks in Advance An equivalent way of writing the norm is that $\|A\|_\infty = \max\{\|Ax\|_\infty:\|x\|_\infty\leqslant1\}$. Using that definition, let $x = \begin{bmatrix}a\\b\end{bmatrix}$, with $\max\{|a|,|b| \}\leqslant1$. Then $Ax = \begin{bmatrix}1&2\\3&-4\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix} = \begin{bmatrix}a+2b\\3a-4b\end{bmatrix}$, and $\|Ax\|_\infty = \max\{|a+2b|,|3a-4b|\}$. Now think about how to maximise that expression subject to the conditions $|a|\leqslant1$, $|b|\leqslant1$, and see if you get the answer 7. March 22nd 2009, 10:41 AM #2
{"url":"http://mathhelpforum.com/advanced-algebra/79886-strange-exercise-matrix-norm.html","timestamp":"2014-04-18T15:11:23Z","content_type":null,"content_length":"35805","record_id":"<urn:uuid:fee0b95b-8d91-4b96-8714-82af6a18268e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
parametric equation December 6th 2006, 09:20 PM parametric equation If Lsub1 has parametric equation x=1+3t, y=1+t, z=4-t, and Lsub2 has parametric equation x=7-6t, y=-2t, z=3+2t, then Lsub1 and Lsub2 are parallel. The answer is true. Please show me why it is true. Thank you very much. December 6th 2006, 10:19 PM Hello Jenny, your equations describe two straight lines in $\mathbb{R}^3$ $L_1: [x,y,z]=\underbrace{[1,1,4]}_{\text{fixed point}}+t\cdot \underbrace{[3,1,-1]}_{\text{direction}}$ $L_2: [x,y,z]=\underbrace{[7,0,3]}_{\text{fixed point}}+t\cdot \underbrace{[-6, -2, 2]}_{\text{direction}}$ By comparison you can see, that $[-6, -2, 2]=(-2) \cdot [3,1,-1]$. That means the direction vectors are collinear: They have the same direction but different length. Therefore $L_1$ and $L_2$ are at least parallel. To proof if they are actually the same you have to show that the fixed point of $L_1$ belongs to $L_2$ too. December 6th 2006, 11:43 PM Thank you very much , earboth! :) December 7th 2006, 04:54 AM Hello, Jenny! If $L_1$ has parametric equations: . $\begin{Bmatrix}x\:= & 1+3t\\ y\:= & 1+t \\ z\:= & 4-t\end{Bmatrix}$ and $L_2$ has parametric equations: . $\begin{Bmatrix} x\:= & 7-6t \\ y\:= & -2t \\ z\:= & 3+2t\end{Bmatrix}$ then $L_1$ and $L_2$ are parallel. Two lines are parallel if their direction vectors are parallel. . . (They do not have to be collinear.) $L_1$ has direction vector: $\vec{u}\:=\:\langle 3,1,\text{-}1\rangle$ $L_2$ has direction vector: $\vec{v}\:=\:\langle\text{-}6,\text{-}2,2\rangle \:=\:\text{-}2\langle3,1,\text{-}1\rangle$ Since $\vec{v} = -2\vec{u}\!:\;\;\vec{u} \parallel \vec{v}$ . . . . Q.E.D. December 7th 2006, 08:51 AM Two lines are parallel if their direction vectors are parallel. . . (They do not have to be collinear.) L_1 has direction vector: \vec{u}\:=\:\langle 3,1,\text{-}1\rangle L_2 has direction vector: \vec{v}\:=\:\langle\text{-}6,\text{-}2,2\rangle \:=\:\text{-}2\langle3,1,\text{-}1\rangle Since \vec{v} = -2\vec{u}\!:\;\;\vec{u} \parallel \vec{v} . . . . Q.E.D. Hi Soroban, Thank you very much! :)
{"url":"http://mathhelpforum.com/calculus/8526-parametric-equation-print.html","timestamp":"2014-04-19T10:13:36Z","content_type":null,"content_length":"10796","record_id":"<urn:uuid:f915aec1-5125-49ab-ac43-348e20c387a0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Golden Angle Illustration showing the golden angle. The golden angle is the smaller of two angles created by dividing the circumference of a circle according to the golden section. The ratio of the length of the larger arc to the smaller arc is equal to the ratio of the entire circumference to the larger arc. The golden angle is approximately 137.51°.
{"url":"http://etc.usf.edu/clipart/42900/42979/golden-angle_42979.htm","timestamp":"2014-04-20T01:08:45Z","content_type":null,"content_length":"12399","record_id":"<urn:uuid:c0b66f92-ac2a-4b81-80da-da915982c1ab>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Decimals to Fractions Worksheet Change Decimals to Fractions Objective: I can change decimals to fractions. Decimals are special fractions. The place of the decimal point tells us the denominator of these fractions. The first place to the right of the decimal point has a denominator of 10. The second place to the right of the decimal point has a denominator of 100. The third place to the right of the decimal point has a denominator of 1000. Read this lesson on converting decimals to fractions if you need to learn more about changing decimals to fractions. Related Topics: More Math Worksheets Fill in all the gaps, then press "Check" to check your answers. Use the "Hint" button to get a free letter if an answer is giving you trouble. You can also click on the "[?]" button to get a clue. Note that you will lose points if you ask for hints or clues! We hope that the free math worksheets have been helpful. We encourage parents and teachers to select the topics according to the needs of the child. For more difficult questions, the child may be encouraged to work out the problem on a piece of paper before entering the solution. We hope that the kids will also love the fun stuff and puzzles. We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
{"url":"http://www.onlinemathlearning.com/decimals-to-fractions.html","timestamp":"2014-04-18T08:04:06Z","content_type":null,"content_length":"68611","record_id":"<urn:uuid:c5a27f20-82c2-4280-bfec-8af11b268e8c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: When a ball is thrown straight up, is there any point at which the ball has zero acceleration? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f515ebde4b019d0ebafc11e","timestamp":"2014-04-17T09:46:06Z","content_type":null,"content_length":"39496","record_id":"<urn:uuid:e3725fc9-7428-4202-883e-009753f98ce2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] Q: Provide .H for numpy arrays? Dag Sverre Seljebotn dagss@student.matnat.uio... Thu Oct 14 12:05:28 CDT 2010 On 10/14/2010 04:30 PM, Nico Schlömer wrote: >> In any case, I think that you need to raise this issue on the list for discussion. > Raise! > Now here's what for discussion: > I noticed that one difference between numpy arrays and matrices is > that ".H" (transpose + conjugation) is only implemented for matrices. > ".T", however, being structurally completely equivalent, is > implemented for both. > While an actual use case for ".H" would be mass dot-products for > multivectors. Right now, I guess what most people go with is > ".T.conjugate()" where it's needed. > Something that may play a role here is the fact that .vdot() does -- > as opposed to .dot() -- not allow for dot-products with multivectors. Does this belong on the numpy-discuss list? I think the proposal needs further details. ".T" is NOT completely equivalent because no copying takes place. Modifying "arr.T" modifies "arr" as well, while the ".H" of the matrix class makes a copy of the data. There is an alternative. Each array view could have a flag saying whether it is conjugated or not, and then "arr.H" would return a "conjugated view". This would be much more useful. Any routines actually accessing the data (item assignment, storing to disk, ufuncs...) would have special cases added to do the conjugation in the operations instead of having to copy the data. This would play very nice with constructs such as np.dot(arr.H, arr), because the underlying BLAS can take flag to conjugate the data (which to my knowledge is not available from NumPy currently). Of course, it is likely a lot of work. But the existance of the possibility of this path in the long run makes me negative towards the proposal of just implementing "arr.H" the easy way (making a copy) in the short run because it would make it impossible to introduce something much more useful later on. A naive implementation of "arr.H" would not work well with gigabyte-sized arrays on most computers, and is always available as "arr.T.conjugate()" anyway, which is more explicit about making a copy. Just my cents, Dag Sverre More information about the SciPy-User mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-October/026985.html","timestamp":"2014-04-17T04:16:54Z","content_type":null,"content_length":"5015","record_id":"<urn:uuid:c4bd8048-2060-4b85-9539-ab5c691ca448>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
all.equal {base} Test if Two Objects are (Nearly) Equal all.equal(x, y) is a utility to compare R objects x and y testing ‘near equality’. If they are different, comparison is still made to some extent, and a report of the differences is returned. Don't use all.equal directly in if expressions---either use isTRUE(all.equal(....)) or identical if appropriate. all.equal(target, current, ...) ## S3 method for class 'numeric': all.equal((target, current, tolerance = .Machine$double.eps ^ 0.5, scale = NULL, check.attributes = TRUE, ...) attr.all.equal(target, current, check.attributes = TRUE, check.names = TRUE, ...)) R object. other R object, to be compared with target. Further arguments for different methods, notably the following two, for numerical comparison: numeric ≥ 0. Differences smaller than tolerance are not considered. numeric scalar > 0 (or NULL). See ‘Details’. logical indicating if the attributes of target and current (other than the names) should be compared. logical indicating if the names(.) of target and current should be compared. all.equal is a generic function, dispatching methods on the target argument. To see the available methods, use methods("all.equal"), but note that the default method also does some dispatching, e.g. using the raw method for logical targets. Numerical comparisons for scale = NULL (the default) are done by first computing the mean absolute difference of the two numerical vectors. If this is smaller than tolerance or not finite, absolute differences are used, otherwise relative differences scaled by the mean absolute difference. If scale is positive, absolute comparisons are made after scaling (dividing) by scale. For complex target, the modulus (Mod) of the difference is used: all.equal.numeric is called so arguments tolerance and scale are available. The method for the date-time class "POSIXct" by default allows a tolerance of tolerance = 0.001 seconds. attr.all.equal is used for comparing attributes, returning NULL or a character vector. Either TRUE (NULL for attr.all.equal) or a vector of mode "character" describing the differences between target and current. Chambers, J. M. (1998) Programming with Data. A Guide to the S Language. Springer (for =). all.equal(pi, 355/113) # not precise enough (default tol) > relative error d45 <- pi*(1/4 + 1:10) all.equal(tan(d45), rep(1, 10))) # TRUE, but all (tan(d45) == rep(1, 10)) # FALSE, since not exactly all.equal(tan(d45), rep(1, 10), tol = 0) # to see difference Documentation reproduced from R 3.0.2. License: GPL-2.
{"url":"http://www.inside-r.org/r-doc/base/all.equal","timestamp":"2014-04-19T05:25:51Z","content_type":null,"content_length":"26019","record_id":"<urn:uuid:b2deb9b2-c83f-4b13-9dc3-690288609277>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Indispensability of the natural numbers Timothy Y. Chow tchow at alum.mit.edu Tue May 18 09:15:44 EDT 2004 On Tue, 18 May 2004, Vladimir Sazonov wrote: > Whatever are our beliefs, the mental concept of the natural numbers > is something vague, anyway. It is only illusion of something solid. Do you agree that our mental concepts of symbols and rules are also vague illusions of something solid? We cannot directly observe schoolchildren applying *rules*; we only observe them doing specific things like making marks on paper, and it requires an act of mental abstraction to interpret our observations of children by saying, "Ah! These children are applying *rules*!" A rule cannot be weighed on a scale or poked with a thermometer. I see nothing more solid about this mental abstraction than about the mental abstraction from "1, 2, 3, ..." to the natural numbers. > Your further considerations (which I will not quote) are actually > about some AWFUL WORLD DISASTER if a contradiction would appear. No, if you think this, then you have not read my article carefully enough, but have merely assumed on a superficial reading that it is just like many other articles on the topic that are superficially similar. > The problem would be only how to change our intuition on natural > numbers. The same problem on sets was resolved quite efficiently (even > if only temporarily) after Russel's paradox. I see no essential > difference. My remark about nonstandard models of arithmetic was an attempt to illustrate an essential difference. In the case of set theory, there are many candidates for replacing any particular version of set theory that we might use temporarily. In the case of the natural numbers, there is no candidate in sight. If you disagree, name one. More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-May/008177.html","timestamp":"2014-04-16T22:14:28Z","content_type":null,"content_length":"4209","record_id":"<urn:uuid:2ed1d167-a994-46b8-9ed3-9463eefb604d>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
H Interpolation and interpolation hash searching Res Rep 76-02 - J. Assoc. Comput. Mach , 1978 "... ABSTRACT When open addressing IS used to resolve collisions in a hash table, a given set of keys may be arranged in many ways, typically this depends on the order in which the keys are inserted It is shown that arrangements minimizing either the average or worst-case number of probes required to ret ..." Cited by 3 (0 self) Add to MetaCart ABSTRACT When open addressing IS used to resolve collisions in a hash table, a given set of keys may be arranged in many ways, typically this depends on the order in which the keys are inserted It is shown that arrangements minimizing either the average or worst-case number of probes required to retrieve any key in the table can be found using an algorithm for the assignment problem. The worst-case retrieval time can be reduced to O(log2(M)) with probablhty 1- e(M) when storing Mkeys In a table of size M, where ~(M) ~ 0 as M ~ ~ We also examine insertion algorithms to see how to apply these ideas for a dynamically changing set of keys
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3780975","timestamp":"2014-04-17T15:35:26Z","content_type":null,"content_length":"12318","record_id":"<urn:uuid:2eda3563-810c-4390-a72d-2e5124fe1ca9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Melrose, MA Precalculus Tutor Find a Melrose, MA Precalculus Tutor ...My schedule is flexible as I am a part time graduate student. I am new to Wyzant but very experienced in tutoring, so if you would like to meet first before a real lesson to see if we are a good fit, I am willing to arrange that.I was a swim teacher for 8 years at Swim facilities and summer camps. I also coached. 19 Subjects: including precalculus, Spanish, chemistry, calculus ...I have currently been teaching this subject for many years and am well versed in the changes to the subject requirements due to the Common Core. Over the past 8 years of teaching I have assisted students in becoming more organized. I have many students that are on IEP with many of them having organization as an area of weakness. 5 Subjects: including precalculus, algebra 1, algebra 2, study skills ...Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level. I know the programs of high and middle school math, as well as the preparation for the SAT process. 14 Subjects: including precalculus, geometry, statistics, SAT math ...I also offer strategies to overcome anxiety and boost confidence regarding testing. I recommend starting to prepare at least 3-6 months prior to testing so there is less last-minute stress. My math background includes honors and AP courses through semesters of college Calculus. 15 Subjects: including precalculus, geometry, algebra 1, algebra 2 ...I have passed all the tests in these subjects and have been approved in each of them. I have a BS and an MS both in mathematics. I have used many topics from all of these subjects while getting my Bachelor's Degree and my Master's Degree. 10 Subjects: including precalculus, geometry, algebra 1, algebra 2 Related Melrose, MA Tutors Melrose, MA Accounting Tutors Melrose, MA ACT Tutors Melrose, MA Algebra Tutors Melrose, MA Algebra 2 Tutors Melrose, MA Calculus Tutors Melrose, MA Geometry Tutors Melrose, MA Math Tutors Melrose, MA Prealgebra Tutors Melrose, MA Precalculus Tutors Melrose, MA SAT Tutors Melrose, MA SAT Math Tutors Melrose, MA Science Tutors Melrose, MA Statistics Tutors Melrose, MA Trigonometry Tutors Nearby Cities With precalculus Tutor Belmont, MA precalculus Tutors Burlington, MA precalculus Tutors Chelsea, MA precalculus Tutors Danvers, MA precalculus Tutors East Boston precalculus Tutors Everett, MA precalculus Tutors Malden, MA precalculus Tutors Medford, MA precalculus Tutors Reading, MA precalculus Tutors Revere, MA precalculus Tutors Saugus precalculus Tutors Stoneham, MA precalculus Tutors Wakefield, MA precalculus Tutors Winchester, MA precalculus Tutors Woburn precalculus Tutors
{"url":"http://www.purplemath.com/Melrose_MA_Precalculus_tutors.php","timestamp":"2014-04-19T15:11:21Z","content_type":null,"content_length":"24139","record_id":"<urn:uuid:6c707c5f-3c76-4f1a-8599-3aaa1cc63c30>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Motion in Two Dimensions Next: Scalars and Vectors Up: Main physics index Previous: Motion in Two Dimensions In two dimensions, it is necessary to use vector notation to describe physical quantities with both magnitude and direction. In this chapter, we define displacement, velocity and acceleration as vectors in two dimensions. We also discuss the solution of projectile motion problems in two dimensions.
{"url":"http://theory.uwinnipeg.ca/physics/twodim/index.html","timestamp":"2014-04-20T05:53:03Z","content_type":null,"content_length":"3721","record_id":"<urn:uuid:45e4c13f-4abb-4509-a942-3cde0dbd0a7b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Half Moon Bay Algebra Tutor ...As an undergrad at Harvey Mudd, I helped design and teach a class on the software and hardware co-design of a GPS system, which was both a challenging and rewarding experience. I offer tutoring for all levels of math and science as well as test preparation. I will also proofread and help with technical writing, as I believe good communication skills are very important. 27 Subjects: including algebra 1, algebra 2, chemistry, calculus ...It included everything from astrophysics, physical chemistry, and mathematics. I also received my minor in applied language studies where I studied topics such as second language acquisition. After college I got licensed in Teaching English as a Second Language (TESL). So I have advanced knowle... 17 Subjects: including algebra 1, algebra 2, chemistry, calculus I am native Japanese, born and raised in Tokyo, Japan. I graduated from the University of Tokyo and have an M.S. from a U.S. university. I have tutored elementary to high school students and business professionals. 3 Subjects: including algebra 1, Japanese, ESL/ESOL ...A little about me: I was always that kid that would explain the really tough math problem or the extra credit on the test. It was fun, but I also found it helped me to understand the field better. Whether it is equilateral triangles or acceleration, there is no better way for me to test my knowledge than the questions of a curious student. 16 Subjects: including algebra 1, algebra 2, calculus, reading ...In that role, I led a discussion group of 5-15 people through a first course in Philosophy of Science. I also trained my students in composing well-written philosophy papers. I currently teach Executive Functioning (organizational and study skills) through a tutoring agency that I work with. 29 Subjects: including algebra 2, elementary (k-6th), geometry, SAT math
{"url":"http://www.purplemath.com/half_moon_bay_algebra_tutors.php","timestamp":"2014-04-20T07:01:24Z","content_type":null,"content_length":"24128","record_id":"<urn:uuid:7570225f-4b10-4aff-9245-65f684000f57>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Squaring on both sides December 6th 2013, 08:19 PM Squaring on both sides squaring on both sides a^2 + b^2=34ab But the answer is only 17+12(sqrt(2)). I know the extra root is because I squared on both sides.But how do we tell which is the right one? December 6th 2013, 08:49 PM Re: Squaring on both sides they both work. I don't know that one is any more "right" than the other. December 6th 2013, 08:54 PM Re: Squaring on both sides interesting question.. Cosmic Corporate Park 2, Cosmic Corporate Park 2 Noida, Cosmic Corporate Park 2 Resale, Cosmic Corporate Park 2 Yamuna Expressway December 6th 2013, 09:50 PM Re: Squaring on both sides thanks I realized my mistake.
{"url":"http://mathhelpforum.com/algebra/224880-squaring-both-sides-print.html","timestamp":"2014-04-19T02:45:02Z","content_type":null,"content_length":"5196","record_id":"<urn:uuid:7fd1e079-dfb9-4050-ab52-2777145068a4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
East Hills, NY Trigonometry Tutor Find an East Hills, NY Trigonometry Tutor ...In the past, for example, I've drawn pictures, sung songs, broken down information into bullet points, acted it out, made flash cards, and so many more! Whatever works best for the student works for me. I travel to student's homes (or any other location you prefer) to make the experience as convenient as possible for you. 37 Subjects: including trigonometry, chemistry, physics, calculus ...I have experience volunteering in an elementary school, and I was there to assist the teachers. I escorted the children to the nurse's, bathroom, etc. I also helped the children with their projects, for example the counting caterpillar. 13 Subjects: including trigonometry, English, Spanish, algebra 2 ...Trigonometry at the elementary level is the study of the properties of ratios of side lengths of a right triangle. Some of these properties carry over into general triangles, such as the Law of Sines and Law of Cosines. The usual trig course covers definitions of sine, cosine, tangent and the basic identities and addition formulas. 19 Subjects: including trigonometry, reading, writing, calculus ...I have had training in Orton-Gillingham, Wilson Reading Method, including Fundations and Simply Words, and Animated Literacy, for writing I turn to the work of Lucy Calkins. In Math I draw a great deal from the work of Marilyn Burns and also like to use TouchMath with my students. Because I am ... 39 Subjects: including trigonometry, English, reading, ESL/ESOL ...All must cooperate together in order to win and score goals. I was manager of a church soccer team in 2011 and the team made it to the semi-finals and lost due to penalties. Soccer is the sport that unites all people into one goal. 27 Subjects: including trigonometry, reading, chemistry, Spanish
{"url":"http://www.purplemath.com/East_Hills_NY_trigonometry_tutors.php","timestamp":"2014-04-16T22:37:53Z","content_type":null,"content_length":"24444","record_id":"<urn:uuid:812bf6f2-7b0f-41db-91e8-64605227655b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project The Wire Problem A standard optimization problem in first-semester calculus is to maximize and minimize the total area of two geometric figures made from a fixed length of wire cut into two pieces. This Demonstration illustrates that problem using a wire of length 10 and three pairs of geometric figures. Contributed by: Marc Brodie (Wheeling Jesuit University)
{"url":"http://demonstrations.wolfram.com/TheWireProblem/","timestamp":"2014-04-18T18:14:41Z","content_type":null,"content_length":"43275","record_id":"<urn:uuid:160079fc-4f2c-436e-be69-efc81e8a4b01>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
Elliptic equations of critical Sobolev growth have been the target of investigation for decades because they have proved to be of great importance in analysis, geometry, and physics. The equations studied here are of the well-known Yamabe type. They involve Schrödinger operators on the left hand side and a critical nonlinearity on the right hand side. A significant development in the study of such equations occurred in the 1980s. It was discovered that the sequence splits into a solution of the limit equation--a finite sum of bubbles--and a rest that converges strongly to zero in the Sobolev space consisting of square integrable functions whose gradient is also square integrable. This splitting is known as the integral theory for blow-up. In this book, the authors develop the pointwise theory for blow-up. They introduce new ideas and methods that lead to sharp pointwise estimates. These estimates have important applications when dealing with sharp constant problems (a case where the energy is minimal) and compactness results (a case where the energy is arbitrarily large). The authors carefully and thoroughly describe pointwise behavior when the energy is arbitrary. Intended to be as self-contained as possible, this accessible book will interest graduate students and researchers in a range of mathematical fields. "This is an important and original work. It develops critical new ideas and methods for the analysis of elliptic PDEs on compact manifolds, especially in the framework of the Yamabe equation, critical Sobolev embedding, and blow-up techniques. This volume will have an important influence on current research."--William Beckner, University of Texas at Austin Table of Contents • Mathematical Notes Phillip A. Griffiths, John N. Mather, and Elias M. Stein, series editors Subject Area: • Mathematics
{"url":"http://press.princeton.edu/titles/7791.html","timestamp":"2014-04-17T18:26:54Z","content_type":null,"content_length":"15960","record_id":"<urn:uuid:42b157e9-a938-4c07-aa1d-99aeed52865f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Climatic Cycles and Tree-Growth/Chapter 7 From Wikisource ←Chapter VI Climatic Cycles and Tree-Growth by Chapter VIII→ Chapter VII Need for such analysis.—During these modern times of rainfall and sunspot records we may compare such records with tree-growth and obtain the interesting correlations exhibited in the last two chapters; but the tree records extend centuries and even thousands of years back of the first systematic weather or sun records of any kind. Without being over-precise or exhaustive, it is interesting to note that California weather records began about 1851. Records on the Atlantic coast began largely in the half-century before that date. London has a rainfall record since 1726, Paris since 1690, and Padua since 1725. Good sunspot records began about 1750, but the number of maxima and minima is known between 1610 and 1750, although the exact dates are uncertain. All this does not carry us very far back, but it serves as an excellent basis for the correct interpretation of the record in the trees. It would be possible to apply correlation formulas to the Arizona tree records and perhaps to the sequoias and construct a probable rainfall record for long periods of time, but apart from Huntington's study of the "Climatic Factor in History," the chief use of such a record would be in studying the laws which govern rainfall; and this is best done through cycles. We shall find that the sunspot cycle plays an important ro1e in rainfall. But we find traces of the solar cycle in nearly all of our tree groups, and evidently the way to read the trees is to study first of all their alphabet of cycles. Hence the best methods of identifying cycles must be used. Proportional dividers. — If a short series of observations is to be tested for a single period, it can be done by mathematics, but it will take many hours and give a result in terms so precise as often to deceive. This, for example, has been the difficulty with the mathematical solution of the sunspot curve. It seems to the writer that the safer way to solve such a curve is by a graphic process, plotting the curve and applying equal intervals along it. An extremely good instrument for this purpose is the multiple-point proportional dividers. By a system of pivots and bars, 16 or more points are maintained in a straight line and at equal intervals, while the space between two successive points may be drawn out from one-eighth inch to one inch. The remarkable persistence of the half sunspot period in the early Flagstaff trees was detected in this way. The projection of equal spacing on curves as long as 12 to 15 feet has been done by a 10-foot india-rubber band with small metal clips pinched on at regular intervals. As the band was stretched all the intervals were enlarged by equal amounts, and periodic phenomena were detected. Similar use could be made of the sharp shadows cast by the glowing carbon of an arc-light. The shadow of a transparent scale could easily be cast in all sizes upon a plotted curve. But all these methods of equal spacing on a plotted curve leave far too much to the individual judgment of the investigator. THE OPTICAL PERIODOGRAPH. A method of periodic analysis well adapted to the work in hand has been developed by the writer as the need for it became more and more evident. Along with the feeling of need for rapid analysis was the increasing recognition of the desirability of some process which would place mere individual judgment and personal equation as far in the background as possible. Schuster's periodogram. — In 1898, Schuster suggested the use of the word "periodogram" as analogous to the word spectrogram; that is, a periodogram is a curve or a photograph which indicates the intensity of time periods just as the spectrogram indicates intensity of space-periods or wave-lengths. The spectrogram commonly gives its intensities by varying photographic density along a band of progressive wave-lengths. For the periodogram Schuster made simply a plotted curve, of which the abscissæ represented progressive time-periods and the ordinates represented intensities. He made a mathematical analysis of the sunspot numbers and constructed a periodogram which is reproduced in figure 30. It shows periods at its crest at 4.38, 4.80, 8.36, 11.125, and 13.50 years. Fig. 30.—Schuster's periodogram of the sunspot numbers. The optical periodogram. — It is, of course, not necessary that the periodogram should take the form of a plotted curve with intensities represented by ordinates, nor yet need it be exactly like a spectrogram showing intensities by density. The first periodogram produced by the writer is shown in plate 9, a. It is an analysis of the sunspot numbers numbers. from 1755 to 1911. The existence of a rhythm in any specified period is indicated by a beaded or corrugated effect. A line across the corrugations gives in fact the rhythmic vibrations of the cycle. On a moment's examination this periodogram shows much of the information which has been under discussion for many years. The 11-year period is the most pronounced, but it is not so superior to all others as would be expected. It may be of any duration from 11.0 to 11.8 years, but 11.4 is a good average. There is obviously a period somewhere between 9.5 and 10.5 years and one between 8.0 and 8.8, but it is less conspicuous. Faint indications of periods are found near 14 years. The double of 8.4 is seen between 16 and 17 years. The double of the 10-year period shows near the 20 and at 22 the double of the 11 begins. The preliminary part of producing this periodogram is the construction of the "differential pattern" shown in plate 9, b. This pattern is the optical counterpart of a set of columns of numbers arranged for addition, as when one summates a series of annual measures on a 10-year period, for example. The series is arranged in order with the first 10 years in the first line, the second 10 in the second line, and so on. In the case of the pattern the lines are made indefinitely long, so that the optical addition may be done in other directions than merely straight downward, for by making the additions on a slant a different period comes under test. In order to produce this pattern the sunspot curve was cut out in white paper and pasted in multiple on a black background. The left end of each of the upper lines is the date 1755. Each successive line is moved 10 years to the left, so that passing from above vertically downward each line represents a date 10 years later than its predecessor. This continues from 1755 to 1911, and the lower 10 lines show the latter date at their right ends. It is not necessary that any of the lines should be full length, as we use only a part of each. By passing the eye downward from the top, a period near 10 years will show itself at once by a succession of crests in vertical alinement. If the crests form a line at some angle to the vertical, then the period they indicate is not exactly 10 years. It is more if the slant is to the right and less if to the left. The horizontal lines are spaced the equivalent of 5 years. Hence, if we measure the angle made between a vertical line and a line joining two crests in successive horizontal lines, we may easily calculate by simple formulas the period indicated. Since the photometric values of all the curves in the diagram are proportional to the plotted ordinates, the photographic summation of the whole pattern in a vertical direction is almost an exact analogue of a numerical summation. This summation is simply done by a positive cylindrical lens with vertical axis. This brings down on the plate a series of vertical lines or stripes. If, now, we cut across these lines with a horizontal slit, the light coming through this slit from one end to the other will be the summation of the diagram in the vertical. But the photographic summation may be done at any slant instead of only in the vertical, and therefore the sensitive plate may be made to summate these curves through a long range of periods. In order to get a long range of periods, the diagram was mounted on an axis with clock-work and slowly rotated in front of a camera with a cylindrical lens for objective, a horizontal slit in the focal plane, and a sensitive plate passing slowly downward across the slit by clock mechanism. In this way a full range of possible periods come under the summing process, and when a real period is vertical the crests of the curves form vertical lines which come down as a series of dots or beads in the slit. When no period is in the vertical the light coming through the slit is uniform. Of course, there is a practical limit to the different angles at which the diagram may be viewed. An angle too far in one direction, making the tested period very small, would require a great number of duplications of the curve, while too great an angle the other way, making the tested period very large, catches the curve in the nonsymmetrical form and introduces errors. In the periodograms actually made of the sunspot curve the minimum period tested was 7 years and the maximum 24. One notes especially that this is a continuous process and that all periods from the minimum to the maximum are tested. Application to length of sunspot period. — The interest in the sunspot period makes a special consideration of plate 9, c, worth while. This figure is a photograph of plate 9, b, taken out of focus for the purpose of calling attention to certain general features. In b the eye naturally turns to the sharp outlines and notes its minute details. In c the crests of b are changed into large blotches connecting somewhat with their nearest neighbors and varying in intensity. The alinement which they form in a nearly vertical direction is a graphic representation of the period. If the line were exactly vertical the period would be 10 years. The slant to the right shows more than 11. If the line were straight the period would be constant. It is evident that there are several irregularities in it. Having a number of exactly similar lines side by side, the irregularities are repeated in each and thus strike the consciousness with the effect of repeated blows. These irregularities are the discontinuities referred to by Turner in connection with his hypothesis. It is evident at a glance that the sunspot sequence divides itself into three parts, namely, a 9.3-year period, 1750-1790; then an interval of readjustment, 1800-1830, with a 13-year period; and lastly an 11.4-year period lasting to the present time (values approximate).^[1] But the latter is not perfectly constant, for after 1870 there is a change in intensity. The breaks thus shown and Turner's dates of discontinuity are compared in table 6. A. Periodogram of the sunspot numbers, 1755-1911. Corrugations show periods. The numbers give length of period in years. The white line is the year 1830 and shows phase. B. Differential pattern used in making the periodogram, consisting of the sunspot curve mounted in multiple. C. Same pattern photographed out of focus to show discontinuities in the vertical lines. D. Sweep of sunspot numbers, 1755-1911. E. Differential pattern of sunspot numbers made by the periodograph process. Table 6.—Discontinuities in the sunspot cycle. Periodogram. Turner. Between 1788 and 1804. 1796 Between 1830 and 1837. 1838 Between 1870 and 1884. 1868 By means of this diagram one can discover at a glance the origin of many of the periods which Michelson thought were illusory and in which opinion he was largely right. We can plainly see a 9.3-year period in the early part of the curve. Let us call this part of the sequence A[n] and its broken continuation near the center S[n], and the lower and later part giving the 11.4-year period C[n]. Thus we get at once three periods, 9.3, 11.4, and something over 13 years. If, now, we bring the average A[n] into line with the average C[n] as the periodograph does, we get 11.4 years. If we bring the average A[n-1] into line with the C[n-1], we get close to 10 years. If we bring into line A[n] and the heavier parts of C[n-2], we get 8.4 years or thereabouts. And at 5.6 years we find a period which is just half of C[n] and at 4.7 the half of A[n], and so on. It is like a checker-board of trees in an orchard; they line up in many directions with attractive intensity. But plate 9, c, helps remove some of the complexity of the sunspot problem. It shows us that while these various periods are apparent, they are improbable and needless complications. The diagram supplies a basis for profitable judgment in the matter. Hence to avoid just such awkward cases as the sunspot curve, a differential pattern is considered to be a necessary accompaniment of the periodogram in doubtful Production of differential pattern.—The work described above, consisting particularly in the production of a periodogram from the differential pattern, was done at Harvard College Observatory in 1913. The next fundamental improvement in the apparatus was in 1914, and consisted in a method of producing the differential pattern without all the labor of cutting out the curves. It was simply the combination of a certain kind of focal image called a "sweep" and an analyzing plate. A single white or transparent curve on a black background is all that is now needed as a source of light. An image of this is formed by a positive cylindrical lens with vertical axis. In the focal plane image so produced each crest of the curve is represented by a vertical line or stripe and the whole collection of vertical lines looks as if it has been swept with a brush unevenly filled with paint and producing heavy and faint parallel lines. Each of these lines represents in its brightness the ordinate of the corresponding crest. The sweep of the sunspot numbers is shown in plate 9, d. Any straight line whatever in any direction across this sweep truly represents the original curve, not as a rising and falling line but in varying light-intensity. A plate with equally spaced parallel opaque lines, called the analyzer or analyzing plate, is placed in the plane of this sweep. These lines may be seen in plate 9, e. When the analyzer is turned at a small angle to the lines of the sweep, each transparent line shows the full curve or a substantial part of it in its varying light intensities. 'These numerous reproductions are all parallel to each other, separated by equal dark lines, and each one is displaced longitudinally with reference to its neighbors, thus presenting the characteristics of the differential pattern. By twisting the analyzer with reference to the sweep while the two remain in parallel planes, different periods may be tested; for as the analyzer twists, each reproduction varies in respect to its length and its displacement from its adjoining neighbors above and below. When a period is formed it shows itself, just as in the original differential pattern, by rows of dark and light spots in alinement more or less perpendicular to the analyzing lines, as in plate 9, e. These light and dark rows are analogous to interference fringes and are identical with the elaborate but provokingly useless designs on a wire screen in front of its reflection in a window, or with the parallel fringes when two sets of parallel lines are held at a slight inclination to each other.^[2] Alinements are always best recognized by holding the paper edgewise and looking at the diagram at a low angle rather than in a perpendicular direction. The analyzing plate resembles a coarse grating with equally spaced parallel lines. Much difficulty was experienced in making it. It is most satisfactory if made on glass with strong contrast between the opaque and transparent parts. The grating now in use was produced by photographing a 10-foot sheet of coordinate paper upon which 165 lines of black gummed paper had been carefully fastened. The coordinate lines permitted the spacing to be done with exactness. The width of the transparent space throughout was three-tenths of the distance from center to center. This was carefully photographed by a good lens at different distances. Glass prints were made from each negative and are still in use.^[3] Theory. — The formula for the period is very simple: Let y = length of curve in years or other time-unit employed. l = length of curve image across sweep lines in centimeter or other unit of length. s = spacing center to center of analyzing lines in unit of length. Then ${l \over s}$ = number of analyzing lines in curve when lines are parallel to sweep. ${ys \over l}$ = number of years in 1 line when lines are parallel to sweep. Now, taking analyzing lines in figure 31 as horizontal, and letting the sweep be inclined as a small angle δ with the analyzing lines, the number of lines required to cross the sweep in the direction perpendicular to analyzing lines will be increased and hence the value in years between two analyzing lines will be decreased; hence $\scriptstyle {ys \over l} cos \delta$ = years per line from a to b. If the fringe is perpendicular to the analyzing lines, its period is the distance ab in years and we have for this special case: $\scriptstyle p_1={ys \over l} cos \delta$. Fig. 31.—Diagram of theory of differential pattern in periodograph analysis. If, however, the fringe takes some other slant, as the direction ac, making the angle θ with the analyzing lines, then the period desired is the time in years between a and c. That equals the time between a and b less the time from b to c. Now bc in years would equal $\scriptstyle \overline {ab} cot \theta$ except for the fact that the horizontal scale along bc is greater than the vertical scale along ab in the ratio $\scriptstyle {cos \delta} \over {sin \delta}$ and therefore a definite space interval along it means fewer years in the ratio of $\scriptstyle {sin \delta} \over {cos \ delta}$. Hence we have: bc (in years) = ab (in years) tan δ cot θ P = p[1](1 — tan δ cot θ) which is the period required. The separation of the fringes needs to be known at times in order to find whether one or more actual cycles are appearing in the period under test. In figure 31 $\scriptstyle \overline {ab} = s$ $\scriptstyle \overline {ad} = {s \over {sin \delta}}$ $\scriptstyle \overline {ac} = {{s sin (\theta - \delta)} \over {sin \delta}}$ which is the width required. THE AUTOMATIC OPTICAL PERIODOGRAPH. The present apparatus combines the two processes whose development has been described above. The second process developed is really the first one in the present instrument. The curve. — The curve is prepared by cutting it out in a thick coordinate paper. The space between the curved line and the base is entirely removed and the curve becomes represented by area. In order to make the density still greater, the paper is painted with an opaque paint so that the brilliant light passing through will come through only the curve itself and not the paper. A special window-shutter is made to occupy the lower 2 feet of the window, whose width is some 50 inches. The curtain can be drawn down to the top of this, excluding the light around the edges. This window-shutter has a door in the upper part to give access to the interior. Within this box is a sloping platform upon which a mirror 8 by 46 inches is placed. This mirror is about 35° from the horizontal position and when looked at from a horizontal direction it reflects the sky from near the zenith. On the side of this box toward the room is a slit 45 by 3 inches in size. This extends horizontally and is on a level with the mirror. Below this slit is a narrow groove for taking the lower edge of the curve paper and above this slit is a strip of wood on hinges, so that when the lower edge of the curve is placed in the narrow groove below, this hinged strip closes down on the top and holds the curve in place directly in front of the mirror. Looked at from a horizontal direction within the room, the curve is seen brightly illuminated by light from the sky not far from overhead. Track and moving mechanism. — About 7 feet from the curve the track begins and extends back 45 feet in a perpendicular direction. The track consists of 3 rails. The center rail is of uniform height and takes the single rear wheel, whose motion controls the movement of the film at the back of the camera. The right-hand rail is also uniform in height and supports one of the front wheels. The left rail is variable in height and supports the driving-cone, which serves as the other front wheel. The cone is 6 inches long and 3 inches in greatest diameter. It rests on a side rail whose elevation and distance from the center can be altered. The purpose of this particular mechanism is to vary the speed with which the camera travels along the track, for the time of exposure is approximately proportional to the square of the distance from the curve, and therefore when the camera travels from the near position to the far position it must slow down in rate as it goes along. The left rail, therefore, at the near position is close to the center and low down; in the middle and outer parts of the track it gets farther away and higher up, since the parts of the cone near the vertex travel on it. The axis of the cone carries a bevel gear meshing with another bevel attached to a vertical axis with a worm gear at the top, which the electric motor drives with a belt connection. In order to aid the motion of the camera, a cord passes from its back to the outer end of the track and by a system of pulleys and weights exerts a slight constant force. The motor is so connected that the camera travels away from the curve. The details here described may be seen in plate 10. The differential pattern mechanism. — The camera is divided into three separate compartments, to each of which access is obtained by a sliding door moving in grooves on the side. The front compartment produces the differential pattern. It is about 7 inches long by 5 inches wide in the clear and 4 inches high. It is nearly divided into two parts by a partition which comes down from the top at about 2 inches from the front end. This partition does not go down to the floor of the compartment, but leaves a space of about an inch. A hole 1.5 inches in diameter is cut through the front of this compartment a little above its center, and another hole of the same size to match is cut through this partition, while at the back of this compartment a large opening is made a little over 2.5 inches wide and about 2 inches high. The lens is carried on a special carriage consisting of a horizontal and a vertical part. The vertical piece has a hole 1.5 inches in diameter cut in it, and the lens is mounted over the hole. The lens now in use consists of a spherical lens concavo-convex 2 inches in diameter and 12 inches in focus placed on the inside, and a positive cylindrical lens of the same size and focus placed on the outside with axis vertical. The convex side of each lens is placed outward. The lens carriage is placed partly under the partial partition and the lens in its holder comes directly between the two holes mentioned. When the sliding door of the compartment is down, the compartment is sufficiently light-tight to fulfill all the requirements of a camera. The movable carriage of the lens is mounted on two small glass tubes and runs between guides. A spring at its back end pulls it toward the position of focus for distant objects, where its motion is stopped by a pin. A long screw is passed through a, hole in the bottom of the camera box and enters the bottom of this lens carriage, so that an automatic arrangement outside and underneath the camera can regulate the focus. This consists of a vertical axis with two lever arms. The upper lever arm is a short one connected to the screw which comes from the lens board. The lower lever arm is some 4 inches below the upper and goes off in a direction nearly at right angles; it carries on its end a wheel in a horizontal position. This wheel is so placed that it runs on an especially arranged track attached to the side of the center rail of the main track. By varying the elevation of this special focussing track in different parts of the main track, the focus of the lens can be automatically controlled. At the back of this first compartment is the analyzing plate, the same plate used in previous work. The spacing of its lines is 0.5 mm. from center to center. The proportionate transparent part is about three-tenths of the center-to-center measurement. The area covered by these lies is 1 by 3 inches, making about 156 lines. The photograph is transparent with dense black lines in it. The glass has been cut down to a convenient size, and this plate is mounted at the back of the first compartment with the film side of the plate toward the back. This plate is over the large opening at the back of the first compartment. The differential pattern is formed automatically by the lens on this plate. The plate is held in a fixed position with its lines nearly vertical but inclined about 12° to the lines of the sweep formed by the lens. This produces fringes more or less horizontal in direction. Varying periods are tested by changing the distance from the curve which alters the scale of the sweep while the analyzing lines are unchanged. As the scale of the sweep changes, the fringes appear to rotate about the center of the differential pattern. Immediately behind the analyzing plate are two condensing lenses described in the next topic. They bring the general beam of light to a focus about 6 inches back of the plate. For visual work a movable mirror, just back of the plate, reflects the beam outside the camera box, through an eyepiece to the eye. For photographic work a small total-reflection prism and simple lens are inserted about 5 inches back of the analyzing plate. These throw the beam outside into a special camera attachment in which ordinary films or plates may be used. The periodogram mechanism. — The remainder of the camera is especially for the purpose of producing the periodogram from the differential pattern. Almost in contact with the analyzing plate is a condensing lens consisting of two cylindrical lenses about 2 inches in diameter and 6 inches focus; these are mounted with vertical axes and with their convex sides toward each other. The aperture of the condenser is about 0.75 inch in vertical height and 1.75 inches in length. The purpose of these condensers is to coverge the light which comes through the analyzing plate on the slit at the back. The second compartment is nearly the same size as the first, namely, about 6.5 inches long. At its front end is the analyzing plate with the condensers and at its back in the same optical axis is a vertical slit about 1 inch long and 1 mm. wide. The sides of this slit are beveled so that the slit itself is at the back. In the middle of this compartment is a powerful cylindrical lens or combination of lenses with horizontal axis. This lens is made up of 4 separate positive cylindrical lenses, each 2 inches in diameter and 6 inches focus. These all have their convex sides toward the common center. They are mounted on a movable carriage of wood which slips in place or may be removed entirely. The aperture of this lens system is about 1.5 inches long by 0.75 inch high. The effect of the condensing lens and of this cylindrical lens is to cast in the plane of the slit an area of light whose size is essentially a repro- A. The automatic optical periodograph. B. Differential patterns of Sequoia record, 3200 years at 11.4. duction of the aperture of the objective, namely, 1 inch high by 0.25 inch wide, but the detail in this area of light is brought in focus by the cylindrical lens and integrates the horizontal lines of the differential pattern. When, therefore, the differential pattern shows a series of horizontal fringes, they become reproduced by a series of horizontal lines crossing the slit, while in the slit itself they appear as a series of dots. When a period is disclosed by proper position of the camera, it will produce horizontal lines on the analyzing plate. A series of black and white dots, therefore, go through the slit into the final compartment; but when the distance is such that the lines on the differential pattern are at some slant, then, the integration carried into the slit being still horizontal, the illumination in the slit is uniform. In this way the beaded or corrugated effect in the slit indicates a period at that particular distance from the curve. In order to read off periods directly in the final result without the necessity of making exact measures, an automatic signal or period indicator is introduced in this second compartment. Above the upper and lower ends of the slit are placed small pieces of mirror at 45°, and corresponding to these there are two small holes 0.25 inch in diameter in the side of the box. Outside of these holes again is a mirror at 45° reflecting light from the curve in the window. So long as the holes are open, direct light from the curve is reflected by the two sets of mirrors through the slit on to the film beyond, as will be described. A shutter is placed over the outer holes in the box with a lever carried down to the vicinity of the central rail. On the end of the lever arm is a wheel. At proper intervals small pieces of wood are placed in the side of the track, so that as the wheel passes over them the shutter is opened and light passes to the mirrors and makes a dot or a line on each side of the film in the third compartment. In this way marks can be placed on the film independent of the periodogram, and yet they can be spaced exactly to represent the different periods tested. Special periods, for example 5 or 10 years, etc., are indicated by the extra length and density of the marks produced. These appear on the margins of the periodograms in plate 11. The final compartment at the rear contains a drum on a vertical axis which is slowly rotated as the whole mechanism moves along the track. The rear wheel resting on the center rail is connected by gearing to the drum, so that 1 mm. on the drum represents 42.7 mm. or 1.7 inches on the track. This makes a convenient length for the final periodogram. The drum can be detached, carried to a dark room to have a film pinned to its periphery, returned in a special light-tight box, and mounted on its axis for an exposure. The times of exposure depend on characteristics of the curve under test, but it is necessary to allow about 35 minutes for the range from 4 to 15 years, and several times that for the range from 15 to 25 years. Plates 10, 11, and 12 illustrate the apparatus and the periodic analysis produced. Periodograms. — Plate 11, which has been arranged to illustrate the work of the periodograph, shows several of the early periodograms which are comparatively free from obvious instrumental defects. In each the range of periods is marked on the left margin. Periods are indicated by the vertical band or ribbon breaking up into a series of horizontal dots or beads. For example, plate 11, a, is a periodogram of the 5-year standard period made for the purpose of calibrating the work of the periodograph. The 5-year period is very prominent near the top of the diagram in the plate. At 10 years its first harmonic appears with double crest, showing still that it is a 5-year period. At 15 years the second harmonic shows with a triple crest, and at 7.5 years the 3/2 overtone is evident with 3/2 crests. These overtones are always readily distinguished from the fundamental on the differential pattern. The differential pattern of this 5-year standard is shown in plate 12, q. The instrument is set for analysis at 5.0 years. In this position the integrating lens sums up the rows of light crests as a series of dots on the periodogram. Plate 11, b, is the analysis of a mixed standard used for calibrating the instrument. The curve contains sharp triangular crests at intervals representing periods of 7, 9, 11, 13, and 17 years, all mixed together and no two starting intentionally from the same point. These are all separated in the periodogram and the overtones of some may be seen. Such overtones can be distinguished from the fundamental on the differential pattern. Plate 11, c, gives a periodogram of the sunspot numbers from 1610 to 1910, using before 1750 the probable times of maxima suggested by Wolfer. The best period is at 11.1 as usually quoted. If the variation from 1750 only is taken, the best period comes at 11.4. This periodogram shows a period at about 8.6. The degree of accuracy with which one can pick out the periodic point is a real criterion of the accuracy of the result selected. The differential pattern of this same series of sunspot numbers will be found in plate 12, a, in which the vertical rows of crests are readily distinguished. The sudden change in direction of the lines a little below the center of this and the two following periodograms is an instrumental defect due to slight unevenness in the track and therefore is without significance. Plate 11, d and e, give an analysis of the Arizona 500-year record. The chief points of interest are the well-defined double-crested 11.6-year period and the 19-year and 22-year periods. Other weaker periods may be seen from place to place. Resolving power of the periodograph. — The accuracy with which a period can be determined by the periodograph may be readily observed in the differential pattern and the periodogram. The pattern indicates a period by showing a row of light spots or crests in line. The accuracy A. Periodogram of standard 5-year period. B. Periodogram of mixed periods. C. Periodogram of sunspot numbers, 1610-1910. D. Periodogram of Flagstaff 500-year record, to show cycles between 4 and 15 years of length. E. Periodogram of same continued to 25 years. of the period is the accuracy with which the direction of this line can be ascertained. This depends on the length of the row of crests, on the shortness of each crest, and on their individual regularity or alinement. These characteristics may be noted in the plates and especially in plate 12, q. Expressed in other terms, these resolving features are respectively as follows: (1) Number of cycles covered by the given curve. (2) Shortness of maxima in relation to length of cycle; if the maximum is sudden and sharp, as in rainfall, the accuracy may be very great; if the maximum is long, as in a sine curve, the accuracy is less. (3) Regularity in the maxima and freedom from interference. These features all appear in the differential pattern and hence the accuracy of any period is its most evident feature and all observers can judge it equally well. It is exactly analogous to the accuracy of a straight line passed through a series of plotted points which theoretically ought to form a straight line but which do not do so exactly. The most important part of the constructed instrument which may alter the accuracy of analysis is the analyzing plate. The accurate spacing and parallelism of the lines is a mechanical feature and can be produced with care and attention to details, but the relation of width of transparent line to center-to-center spacing of the lines is a matter of judgment and the necessities of photography. As this relative width increases, the length of each crest in the pattern becomes longer and the row of crests becomes wider and less definite in direction. If the maxima in the curve under test are of the sine-curve type this relation is less important, for the light crests in the pattern will be long in any case, but for sharp, isolated maxima resolution is lost if the width of the transparent line is too great. In the instrument now constructed the ratio of transparent line to center-to-center spacing is 3:10, but a smaller ratio such as 1:10 could advantageously be used in certain cases if there is sufficient light to make photography easy. The accuracy in reading a periodogram is at once apparent on its face. When the number of cycles is great as in plate 11, a, the rhythmic or beaded effect is short and very limited in extent, as in the 5-year period there indicated, and the period is accurately told. But if the number of cycles is reduced (as in plate 11, b or c) the periodic effect in the photograph extends over a greater range and its center can not be told with the same precision. The accuracy of estimation in the periodogram is therefore the actual accuracy of the result. 1. ↑ In discussing the periodicities of sunspots (1906^2, pp. 75-78) Schuster divided his 150 years, from 1750 to 1900, into two nearly equal parts. He found in the first part two periods of 9.25 and 13.75 years acting successively, and in the second part, a period of 11.1 years. 2. ↑ Roever (1914), has used somewhat similar interference patterns to illustrate very beautifully certain lines of force. 3. ↑ A very superior analyzing plate has recently been made from a ruled screen such as is commonly employed in half-tone engraving.
{"url":"http://en.wikisource.org/wiki/Climatic_Cycles_and_Tree-Growth/Chapter_7","timestamp":"2014-04-16T20:50:59Z","content_type":null,"content_length":"73207","record_id":"<urn:uuid:7b6b214a-d01c-4b9c-9cc9-93e2ec03ee0e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
L'Hopital's Rule...! April 3rd 2008, 10:00 AM #1 Nov 2007 L'Hopital's Rule...! Determine if L'Hopital's rule applies in each case. Select all that apply. lim_y-->0 3^y/y^2 lim_x-->pi (sin(3x))/(x-pi) lim_t-->0 (t^2+3t)/(cosh(t)-1) lim_z-->0 (e^(2z)-1)/e^z lim_theta-->0 (arctan(theta))/(3(theta)) lim_x-->infinity (e^(-x))/(1+ln(x)) lim_x-->0+ (cot(x))/(ln(x)) lim_t-->infinity (ln(t))^2/t we haven't exactly gone over L'Hopital's rule yet, but im tryin to do it early, any help on how you can tell whether the rule will work or not, thanks... l'hospital's rule: $\lim f(x) =0\ and\ \lim g(x) =0$ or if: $\left| \lim f(x) \right| =\infty\ and\ \left| \lim g(x) \right| =\infty$ $\lim \frac{f(x)}{g(x)}=\lim \frac{f'(x)}{g'(x)}$ For your questions, just make sure that the limits of the quotients are of the forms $\frac{0}{0}$ or $\frac{\infty}{\infty}$ as described above. ok, so does that just mean that the if the limit of some function is 0 or infinity then l'hopital's rule applies? Not quite, it requires that the limit of the quotient of functions (you'll notice that all your examples are in or can be written in the form $\frac{f(x)}{g(x)}$) must be one of the indeterminate forms: $\frac{0}{0}$ or $\frac{\infty}{\infty}$ (0r $-\frac{\infty}{\infty}$). If the limit of the quotient were $\frac{0}{\infty}$, or $\infty$ or 2 or anything else, then L'hospital does not Last edited by teuthid; April 3rd 2008 at 10:31 AM. Reason: ammendment OK, do you think you could show me an example of one of the ones that i put up there...that would help, thanks maybe this will clear things up... $\lim_{x\rightarrow 0} \left(\frac{t^2+3t}{\cosh(t)-1}\right)=\frac{\lim_{x\rightarrow 0} (t^2+3t)}{\lim_{x\rightarrow 0}(\cosh(t)-1)}=\frac{0}{0}\Rightarrow$ L'Hospital's law applies $\lim_{x\rightarrow 0^{+}} \left(\frac{\cot(x)}{\ln(x)}\right)=\frac{\lim_{x\ rightarrow 0^{+}}(\cot x)}{\lim_{x\rightarrow 0^{+}}(\ln x)}=\frac{\infty}{-\infty}\Rightarrow$ L'Hospital's law $\lim_{x\rightarrow \infty} \left(\frac{e^{-x}}{1+\ln x}\right)=\frac{\lim_{x\rightarrow \infty} e^{-x}}{\lim_{x\rightarrow \infty} 1+\ln x}=<br /> \frac{0}{\infty}\Rightarrow$ L'Hospital's law does not apply Hello, mathlete! ok, so does that just mean that the if the limit of some function is 0 or infinity then l'hopital's rule applies? I wouldn't put it quite like that . . . If the limit (as written) is an indeterminate form, then L'Hopital applies. There are several indeterminate forms. The two basic forms are: . $\frac{0}{0}\text{ and }\frac{\infty}{\infty}$ . . . and L'Hopital can be applied. There are other indeterminate forms: . $\infty - \infty,\;0^0,\;0^{\infty},\;\infty^0,\;1^{\infty}, \;\hdots$ Often these can be transformed into one of the two basic forms, . . then L'Hopital can be used. ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ $1)\;\;\lim_{y\to0}\frac{3^y}{y^2} \;=\;\frac{3^0}{0^2} \;=\;\frac{1}{0}$ . . . not indeterminate $2)\;\;\lim_{x\to\pi}\frac{\sin(3x)}{x-\pi} \;=\;\frac{\sin(3\pi)}{\pi-\pi} \;=\;\frac{0}{0}$ . . . yes, indeterminate! $3)\;\;\lim_{t\to0} \frac{t^2+3t}{\cosh t - 1} \;=\;\frac{0^2 + 3(0)}{\cosh(0) - 1} \;=\;\frac{0+0}{1-1} \;=\;\frac{0}{0}$ . . . yes, indeterminate! $4)\;\;\lim_{z\to0}\frac{e^{2x}-1}{e^x} \;=\;\frac{e^0-1}{e^0} \;=\;\frac{1-1}{1} \;=\;\frac{0}{1} \;=\;0$ . . . not indeterminate Get the idea? April 3rd 2008, 10:11 AM #2 April 3rd 2008, 10:20 AM #3 Nov 2007 April 3rd 2008, 10:29 AM #4 April 3rd 2008, 11:13 AM #5 Nov 2007 April 3rd 2008, 11:36 AM #6 April 3rd 2008, 11:59 AM #7 Super Member May 2006 Lexington, MA (USA)
{"url":"http://mathhelpforum.com/calculus/33099-l-hopital-s-rule.html","timestamp":"2014-04-17T07:13:16Z","content_type":null,"content_length":"50802","record_id":"<urn:uuid:56c30cb2-54f4-438c-aed2-d5495b45eca1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
FST (functions, stats, and trigonometry) Number of results: 8,808 FST (functions, stats, and trigonometry) the axis of symettry is x=0 Wednesday, October 7, 2009 at 9:05pm by bobpursley FST (functions, stats, and trigonometry) what is the equation for the axis of symmetry for the function f(x)=IxI?(absolute value) Wednesday, October 7, 2009 at 9:05pm by Jared FST (functions, stats, and trigonometry) what about if it was moved, say (x,y)-> (x+3, y-1)? i understand where the line is visually but i don't know how to write the equation Wednesday, October 7, 2009 at 9:05pm by Jared FST (functions, stats, and trigonometry) What equation are you trying to write? (x,y)-> (x+3, y-1) represents a coordinate sytem shift (translation) Wednesday, October 7, 2009 at 9:28pm by drwls FST (functions, stats, and trigonometry) but what if the function is moved, then what is the line of symmetry? like (x,y)-> (x+3, y-1)? i understand where the line is visually but i don't know how to write the equation Wednesday, October 7, 2009 at 9:28pm by Jared this is trigonometry.. its the class i'm taking and my book is called, functions, statistics, and trigonometry. scott, foresman. Wednesday, November 12, 2008 at 11:12pm by jerson Thank you for your help. It's just that I didn't take functions in grade 11 and now I'm taking Advanced Functions in grade 12 and I'm difficulty with. Thanks for your help though. Sunday, November 24, 2013 at 4:10pm by Jessy Try reviewing your trig functions and post an attempt at some of these. We'll be happy to check your work. They're all pretty straightforward applications of the definitions of trig functions and Pythagorean Theorem. Monday, March 5, 2012 at 1:06pm by Steve Trigonometry functions ok. now what? Thursday, November 28, 2013 at 9:06pm by Steve Trigonometric Functions What's trigonometry? What's Kumon. Thursday, July 15, 2010 at 10:41pm by Mia Trigonometry functions sketch the angle 2pi/3 Thursday, November 28, 2013 at 9:06pm by David Trigonometry functions find the reference angle of -11pi/3 Thursday, November 28, 2013 at 9:08pm by David Trigonometry functions Choose the graph of r = 5 + 4 sin θ Tuesday, December 10, 2013 at 9:43am by steve trigonometry (not) ya i know that they are all functions but how am i supposed to figure that out? Thursday, November 13, 2008 at 7:28pm by y912f simplify sin7x-sin3x as a product of trig functions. Tuesday, April 19, 2011 at 8:16pm by casey How to write a math story that includes 4 examples... Possible math topics include: introduction to functions investigating quadratics quadratics highs & lows exponential functions fiancial applications of exponential function acute triangle trigonometry I'm taken MCF3M math ... Thursday, January 13, 2011 at 7:48pm by Tarkan We're learning about different kinds of functions and I don't really understand the difference between rational and algebraic functions. I know that rational functions are functions that are a ratio of two polynomials, and algebraic functions are any functions that can be made... Thursday, September 13, 2007 at 11:09am by Kelly Math - Trigonometry doesn't matter. All of the trig functions are positive in exactly 2 of the 4 quadrants. Tuesday, October 15, 2013 at 11:12am by Steve senior courses not that I know of i am taking algebra 2 and ap stats on saturday and next year i will take for 12th grade elementary functions Tuesday, April 28, 2009 at 10:05pm by bobby senior courses not that I know of i am taking algebra 2 and ap stats on saturday and next year i will take for 12th grade elementary functions Tuesday, April 28, 2009 at 10:05pm by bobby Math (FST Exponential & Logarithm) What is the range of y= -(2)^(x-3) Wednesday, March 17, 2010 at 7:58pm by Raven . Which of these standard trigonometric functions has the least period? A. cosine B. cosecant C. tangent D. secant Sunday, December 8, 2013 at 9:26pm by steve . Which of these standard trigonometric functions has the least period? A. cosine B. cosecant C. tangent D. secant Sunday, December 8, 2013 at 9:42pm by steve Plz gve me sltn..fst Friday, November 2, 2012 at 10:00pm by Simran kaur advanced functions/precalculus 1. The function f(x) = (2x + 3)^7 is the composition of two functions, g(x) and h(x). Find at least two different pairs of functions g(x) and h(x) such that f(x) = g(h(x)). 2. Give an example of two functions that satisfy the following conditions: - one has 2 zeros - one has ... Wednesday, January 15, 2014 at 2:33am by Diane What's the easiest way to remember how to draw the graphs for the common trigonometric functions, like sin cos and tan? Wednesday, November 18, 2009 at 5:59pm by Peter If you are taking calc2, then you must know what sine and cosine functions, etc. are. What I am calling trigonometry you may have been taught as "precalc" Sunday, August 15, 2010 at 9:35am by drwls For the sine and cosine functions, you need a calculator, a slide rule (if there is one in the attic) or a table of trigonometric functions. There are also web sites you can use. Sin 150 degrees = 1/ 2, exactly cos 150 degrees = -(sqrt 3)/2 You can prove that with geometry, but... Monday, February 4, 2008 at 7:02pm by drwls Having trouble with true/false questions in Trigonometry. They read as follows - True or False: For a trigonometric function, y=f(x), then x=F^-1(y). Explain your answer. True or False: For a one to one function, y=f(x), then x=f^-1(y). Explain your answer. True or False: For ... Thursday, March 24, 2011 at 2:46pm by Veronica Having trouble with true/false questions in Trigonometry. They read as follows - True or False: For a trigonometric function, y=f(x), then x=F^-1(y). Explain your answer. True or False: For a one to one function, y=f(x), then x=f^-1(y). Explain your answer. True or False: For ... Thursday, March 24, 2011 at 3:03pm by Veronica pre calc What you need to find out is what each function does, and consequently the range of each of the functions when x varies in the range (-∞infin;). For your information, the identity function is f(x) = x, thus when x=∞ f(x)=∞ also. You already know about the ... Friday, December 18, 2009 at 11:09am by MathMate if sec theta = 2,what are the other five functions in values of theta? Wednesday, January 25, 2012 at 6:58pm by trigonometry can you please help me understand relations and functions. Like how to figure out if an equation is a function or not a funcion. for example: 1) 3x+2y=6, 2) x=y^2+2 and 3) y=square root of 1-x thank Thursday, November 13, 2008 at 7:28pm by y912f Be sure you post what YOU THINK about these questions. Then someone here will be able to comment on your thinking. Thanks. Tuesday, January 1, 2008 at 8:50pm by Writeacher Math (trigonometry) Find the value of each of the remaining trig functions. Cscè=-4, ð<è<3ð/2 Monday, April 12, 2010 at 4:50pm by Anonymous Suppose tan theta=15pi/18 is less than theta 3pi/2 , what are the other trigonometric functions? Tuesday, July 9, 2013 at 8:58am by NOEL Trigonometry/ Please Help Given sine of alpha=2/3 and cosine of alpha is less than zero, find the exact value of the other five trigonometric functions. Thursday, January 13, 2011 at 5:30pm by CJ This is not a school homework help. I am practicing for a test where one of the objective says "Applying trigonometric functions in authentic contexts involving periodic phenomena". What am I supposed to study about this. Any hints? Thanks. Sunday, March 9, 2008 at 4:15pm by Brooke Math- Trigonometry The point, in on the terminal side of an angle in standard position find the exact value of the six trigonometric functions of the angle a. (8,15) b.(-9,-40) Sunday, May 20, 2012 at 10:35am by Laura given that sin alpha/2 = 2/3, and that alpha/2 is in quadrant II, determine the exact values of all trig functions for alpha. Sunday, April 15, 2012 at 12:06pm by kiko 1) Graph these functions using a graphing calculator or program: f(x) = x^2 + 5 g(x) = 2x + 5 h(x)= x^3 + 5 What similarities and differences can you see among them? 2) Given the above functions, do you think that all linear equations can be expressed as functions? Tuesday, August 18, 2009 at 7:42pm by Anonymous Find the area of a rectangular plot whoselength is 48m and a diagonal is 50m. Gve me soltn fst nd ans 672m^2? Monday, September 10, 2012 at 12:45pm by Simran kaur Trigonometry functions David, really now ! After getting help for so many trig question, don't you think it is time that you try some of these yourself? Especially the rather simple ones, like your last two? Let us know what you get and we'll check them Thursday, November 28, 2013 at 9:08pm by Reiny using csc = 3.75, solve for sides: a= 3 b= ? c= ? and the remaining five trigonometric functions: sin= (1/3.75) cos= ? tan= ? sec= ? cot= ? *b is not 1; and c is not 3.75 Monday, March 18, 2013 at 7:39pm by Brandi High School Pre Calculus Do you know what the parent functions of y = sqrt(x) and y = x^3 look like? The functions f and g are simply those parent functions with a horizontal shift. Saturday, January 16, 2010 at 8:02pm by Marth Trigonometry concerns triangles. That subject may be part of your book, but is not the subject of your question. Your question is about functions. Wednesday, November 12, 2008 at 11:12pm by drwls sin2x=12/13 cos2x = 12/13 now use the half-angle formulas to get the functions of x Thursday, November 28, 2013 at 2:22pm by Steve is there a good, relevant sites where i can get stats extra help? Thursday, October 11, 2007 at 7:45pm by Keirson Plz gve me fst reply nd ful soltn nd ans is 3000tiles ????? In English please Thursday, September 13, 2012 at 10:53pm by Reiny go to google and type degree for right triangle trigonometry.and click on the second option which is:Trigonometry of rigth triangle. Topics in trigonometry. you will get ur answer. sorry couldnt paste the whole thing here hope i could help u.:) Tuesday, February 10, 2009 at 4:10pm by vero A mini-computer system contains two components, A and B. The system will function so long as either A or B functions. The probability that A functions is 0.95, the probability that B functions is 0.90, and the probability that both function is 0.88. What is the probability ... Saturday, March 5, 2011 at 3:54am by Paula Functions in Math Im looking at internet sites and Im not quite understanding functions. I have a question like; A function f(x) has the properties i) f(1) = 1 ii) f(2x) = 4f(x)+6 Could you help me out. I know nothing about functions. Monday, October 15, 2007 at 3:54pm by Lena Math (FST Exponential & Logarithm) Find the exponential model using y=a(b)^x with (9, 120) & (12, 216) (round to 4 decimal places- show work) Wednesday, March 17, 2010 at 5:39pm by Shay Use inverse trigonometric functions to find the solutions of the equation that are in the given interval, and approximate the solutions to four decimal places. (Enter your answers as a comma-separated list.) cos(x)(9cos(x) + 4) = 4; [0, 2π) Wednesday, March 6, 2013 at 9:55pm by Katlynn Math (FST Exponential & Logarithm) What is the amount and type of the growth factor of the model: y=3(1.04)^x Determine the equation for the asymptote: y=2+ log(x-3) For the second question, I don't even have an idea what is to be Wednesday, March 17, 2010 at 5:33pm by Sandy Trigonometry functions better yet, visit wolframalpha.com: http://www.wolframalpha.com/input/?i=plot+r+%3D+5+%2B+4+sin+%CE%B8+ and you can play around with polar functions all day long Tuesday, December 10, 2013 at 9:43am by Steve There is the infinite series sin (x/3) = x/3 - (1/2)(x/3)^2 + (1/6)(x/3)^3 + ... -(-1)^n *1/n!* (x/3)^n (n-> infinity) Damon has made a good argument that there may be no closed form equation for sin (x/3) in terms of trig functions of x. I tired Googling sin(x/3) and found... Thursday, December 11, 2008 at 10:55pm by drwls Eight types of functions are graphed and explained at this tutorial site: http://www.analyzemath.com/Graph-Basic-Functions/Graph-Basic-Functions.html There is a place to click on the page to get it to work interactively. I don't know what you mean by Parents graphs. Monday, May 12, 2008 at 10:22pm by drwls Well, i tried to see the beauty in trigonometry but to me, it is just too HARD!!!!!!!! There r so many formulae in trigonometry and how to i know which one to use. After i stay in the desk for 15 mins, i just wanna throw this stupid book away. I cant gain anything even i try ... Saturday, July 19, 2008 at 1:03am by Tommy State the amplitude of the following functions: a) y = cos theta b) y = 1/2cos theta c) y = -2cos theta Thursday, October 29, 2009 at 4:11pm by Brittany algebra functions construct two composite functions, Evaluate each composite function for x=2. i do not fully understand functions yet can someone explain and show me what to do step by step f(x)=x+1 g(x)=x-2 Wednesday, October 16, 2013 at 9:21pm by tani Trigonometry is the branch of mathematics that deals with the solution of triangles through the use of the trigonometric functions sine, cosine, tangent and their reciprocals. The trig function values derive from the ratios of the "x" and "y" values of a point on a unit circle... Monday, March 3, 2008 at 9:37am by tchrwill In a right tringle,/_ABC has a value of 37 degree while its opposite side is equal to 5,where angle c=90 degree find: a.) Remaining sides of the triangle b.) /_ABC using any trigonometry functions Tuesday, July 12, 2011 at 4:12am by William john Use composition of functions to show that the functions f(x) = 5x + 7 and g(x)= 1/5x-7/5 are inverse functions. That is, carefully show that (fog)(x)= x and (gof)(x)= x. Wednesday, July 29, 2009 at 3:30am by Alicia Composite Functions Find the composite functions for the given functions. (6 marks) f(x) = 4x + 1 and g(x) = x2 f(x) = sin x and g(x) = x2 - x + 1 f(x) = 10x and g(x) = log x Saturday, January 19, 2013 at 4:28pm by Anonymous trigonometry (not) The equation in 1, after rearranging, can be considered y as a function of x or x as a function of y. 2. expresses x as a function of y and 3. expresses y as a function of x. I would say that they are all functions, but your teacher may have other ideas. Thursday, November 13, 2008 at 7:28pm by drwls can't be factored because different functions are involved. As if you had y^2+x-1 = 0 But, using a well-known identity connecting sin and cos, 2cos^2 + sin - 1 = 0 2 - 2sin^2 + sin - 1 = 0 2sin^2 - sin - 1 = 0 (2sinx+1)(sinx-1) = 0 Sunday, November 24, 2013 at 4:13pm by Steve Find the value of each of the remaining trig functions. Cscθ=-4, π< θ<3π/2 Monday, April 12, 2010 at 3:01pm by Julie exponential functions in math I found these sites http://www.purplemath.com/modules/expofcns.htm http://tutorial.math.lamar.edu/Classes/Alg/ExpFunctions.aspx http://www.regentsprep.org/Regents/math/algtrig/ATP8b/ exponentialFunction.htm and of course the excellent videos from the Khan Academy http://www.... Tuesday, April 2, 2013 at 8:07pm by Reiny Graph and label the following two functions: f(x)=(x^2+7x+12)/(x+4) g(x)=(-x^2+3x+9)/(x-1) 1. Describe the domain and range for each of these functions. 2. Determine the equation(s) of any asymptotes found in the graphs of these functions, showing all work. 3. Discuss the ... Monday, May 2, 2011 at 11:22am by Debra respected sir i want to know the table describing the values of angles of trigonometry functions. eg. sin,cos & tan 15,30,45,60,75,90 seperately in table format. thanking you! padam singh. Thursday, March 4, 2010 at 2:07pm by padam Use inverse trigonometric functions to find the solutions of the equation that are in the given interval, and approximate the solutions to four decimal places. (Enter your answers as a comma-separated list.) 10 sin^2 x = 3 sin x + 4; [0, 2π) Wednesday, March 6, 2013 at 9:03pm by Katlynn Use inverse trigonometric functions to find the solutions of the equation that are in the given interval, and approximate the solutions to four decimal places. (Enter your answers as a comma-separated list.) 10 sin^2 x = 3 sin x + 4; [0, 2π) Wednesday, March 6, 2013 at 9:07pm by Katlynn Use inverse trigonometric functions to find the solutions of the equation that are in the given interval, and approximate the solutions to four decimal places. (Enter your answers as a comma-separated list.) 10 sin^2 x = 3 sin x + 4; [0, 2π) Wednesday, March 6, 2013 at 9:50pm by Katlynn I'm having a lot of trouble with graphing trig functions. Can anyone tell me how to graph this equation on a graphing calculator? Sketch the graph of y= sin x in the interval 0 ≤ x ≤4π Tuesday, April 3, 2012 at 6:25pm by Peter pan by test time you need to know the "standard" angles with easy-to-recall trig functions 0,π/6,π/4,π/3,π/2 If you know those angles and their trig ratios, you will recall that tan π/3 = √3 Now, recall the bit about principal values of inverse trig ... Saturday, May 18, 2013 at 9:37am by Steve How do I evaluate the trigonometric functions of the quadratic angle? Thanks in advance 1) sec π 2) tan π 3) cos π For this problem, I don't understand why it's -1 4) csc π 5) sec 3π/2 Thursday, January 10, 2013 at 8:54pm by Sira how can you possibly have an assignment working with something you have not studied? If you have f(x) and g(x) as functions, then there are two simple composite functions: f(g(x)) and g(f(x)) Given your functions, f(g) = g+1 = (x^2+2x+1)+1 x^2+2x+2 g(f) = f^2+2f+1 = (x+1)^2 + ... Friday, October 18, 2013 at 5:17pm by Steve please tell teachers what the subject is - needs probabilty and stats I showed you how to do the last one, am leaving this more complicated version of the classic prisoner's dilemna problem for a stats teacher (I do physics) Saturday, September 29, 2012 at 6:53pm by Damon The reason they come up in trig is that any point in the plane can be located at some distance from the origin, and in some direction, given by an angle θ. All your normal trig functions can be applied to θ. Sunday, December 8, 2013 at 10:57pm by Steve Use the trig Identities to find the other 5 trig functions. Problem 7.)Tan(90-x)=-3/8 8.)Csc x=-13/5 9.)Cot x=square root of 3 10.)Sin(90-x)=-.4563 11.)Sec(-x)=4 12.)Cos x=-.2351 I need HELP! Tuesday, January 5, 2010 at 4:09pm by Jennifer Judy, Sam, Cindy -- or Whoever has posted the last 12 stats posts -- If you posted your ideas of the answers, a tutor might then help you. Wednesday, March 31, 2010 at 1:32pm by Ms. Sue Write equivalent equations in the form of inverse functions for a.)x=y+cos è b.)cosy=x^2 (can you show how you would solve) a.) x= y+ cos è cos è = x-y theta = cos^-1(x-y) b.) cosy=x^2 cos(y) = x^2 y = Cos^-1(x^2) Thursday, March 3, 2011 at 1:35am by anon However, I just bet your scientific calculator has arc sin (asin , sin^-1) and arc cos (acos, cos^-1) and arc tan functions. Get the manual out. Wednesday, January 7, 2009 at 6:47pm by Damon Write the following trigonometric expression as an algebraic expression in x free of trigonometric or inverse trigonometric functions sin(cos^-1 x) -1≤x≤1 Tuesday, December 4, 2012 at 4:46pm by Matthew Find the sum and difference functions f + g and f – g for the functions given. f(x) = 2x + 6 and g(x) = 2x - 6 f(x) = x^2 - x and g(x) = -3x + 1 f(x) = 3x^3 - 4 and g(x) = -x^2 + 3 State the domain and range for the sum and difference functions in #3. Find the product and the ... Monday, September 2, 2013 at 4:08pm by Bee This is not a trigonometry question. x^2 + 7x + (7/2)^2 = 11 + (7/2)^2 (x + 7/2)^2 = 23 1/4 x = -7/2 +/- sqrt(93)/2 Thursday, November 12, 2009 at 10:15pm by drwls Just expand it ... 8x^3 - 6x^4 - 5x^2 + 10x all done! Why are you calling this trigonometry? Sunday, December 26, 2010 at 5:55pm by Reiny Trigonometry question? Thanks a lot, sorry I am not too good at trigonometry Thursday, March 28, 2013 at 8:25am by Knights You need either a calculator or a table of trig functions. If you have a calculator, most have inverse trig functions, the yellow key and sin gives you the angle. The mode of the calculator can be set to work in degrees or radians. for example sin s=0.75438373 My TI-83 is in ... Wednesday, January 7, 2009 at 6:47pm by Damon In a right tringle ,angle ABC has a value of 37 degree while its opposite side is equal to 5,where angle c=90 degree Find A.)Remaining sides of the triangle B.)Angle ABC using any trigonometry Tuesday, July 12, 2011 at 4:23am by William john Computer programming Functions and procedures are used all the time in Modular programming. Can functions replace procedures? what are the advantages of using functions over procedures? Are there any advantages to using procedures over functions? Tuesday, August 17, 2010 at 6:10pm by Brenda can you explain to me in details please of how these type of questions can be answered without using a calculator to find the direct answer. Find the following functions correct to five decimal places: a. sin 22degrees 43' b. cos 44degrees 56' C. sin 79degrees 23'30' Thursday, January 13, 2011 at 11:33pm by anon Advanced Functions/Precalculus Trigonometry Questions 1.) Find the exaqct value of tan(11π/12) 2.) A linear trig equation involving cosx has a solution of π/6. Name three other possible solutions 3) Solve 10cosx=-7 where 0≤x≤2π Wednesday, December 4, 2013 at 2:13pm by john Operations with Functions write the function below as either a sum/difference/product/quotient of 2 r more functions a. h(x) = x^2+3x+9 How Would I do this? My teacher just did this f(x) = x^2 g(x) 3x+9 ^I dont really understand h(x)=(x+5)(x-3) how would I do this one? If h(x... Wednesday, September 26, 2012 at 7:35pm by Shreya Trigonometry(please Clarify) The confusion arose because the term inverse function has more than one meaning in mathematics. So there was an answer for each interpretation of the term. However, if you are in the process of studying inverse trigonometric functions, such as arc-sine, arc-cosines, etc, your ... Friday, March 4, 2011 at 12:06am by MathMate Consider the functions Consider the functions f(x)= 5x+4/x+3(This is a fraction) and g(x)= 3x-4/5-x(This is a fraction) a)Find f(g(x)) b)Find g(f(x)) c)Determine whether the functions f and g are inverses of each other. Wednesday, April 10, 2013 at 12:03pm by Kayleigh For each problem, construct two composite functions, . Evaluate each composite function for x=2 ok i have not dealt with composite functions can I please get some help with this as I have 20 questions to do can someone show me step by step on how to properly solve this ... Friday, October 18, 2013 at 5:17pm by Johnathan Math questions, please help me? :(? Find the composite functions f o g and g o f for the given functions. f(x) = 10^x and g(x) = log x State the domain and range for each: f(x) = 4x + 1 and g(x) = x^ 2 f(x) = sin x and g(x) = x^2 - x + 1 f(x) = 10^x and g(x) = log x If f = {(-2... Monday, September 2, 2013 at 4:08pm by Britt Will someone please help me find a website to answer the following questions? What are the various functions of a police agency? Compare how the functions of a police agency differ at the federal, state, and local levels. What would happen if the various functions and roles of... Sunday, July 8, 2012 at 10:24am by Picaboo Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=FST+(functions%2C+stats%2C+and+trigonometry)","timestamp":"2014-04-18T04:19:09Z","content_type":null,"content_length":"38481","record_id":"<urn:uuid:b43afd8d-1636-4175-8238-8525daa0769f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Direction of gravitational acceleration vector This question was asked in the context of Newtonian physics since this is not the special/general theory of relativity. The answer is simpler in the context of general relativity, that's why I referred to it. When invoking the equivalence between accelerated observers and gravity, it is probably easier to explain things from the PoV of a theory that takes the equivalence principle as a basis, than with forces and non-inertial frames. Clearly, the OP invokes the equivalence principle, and it is rather strange that he invokes it (how did he get to the equivalence principle in Newtonian physics in the first place) and then wonders what way the inertial force points. Because in Newtonian physics, *a priori* there doesn't need to be equivalence between an upward accelerating platform in space and the force of gravity on earth: it is only after working out the "pseudo force" (with the right sign of course) in the non-inertial platform frame, and compare it with the Newtonian force of gravity on the earth's surface, that one notices that any property of the "falling" object itself (such as its mass), drops out, and that this is hence a property intrinsic to the point in the coordinate system, and not to the "falling object". But in order to see this in the first place, one needs already to have worked out the pseudoforce in the non-inertial frame, with the right sign: hence one cannot wonder about its sign afterwards ! So clearly, the OP took the equivalence principle as a starting point: that, A PRIORI, a frame with constant acceleration g upward will be equivalent with a frame at the surface of the earth (neglecting tidal effects), and then wondered in what direction a force should be applied in order to take into account both equivalent phenomena (falling down on the platform, or falling down on earth). On earth, the OP seemed to understand: it comes from an "attractive force of gravity", but on the platform, he thought that, the platform accelerating upward, the pseudoforce should be upward too, and hence WONDERED about how the equivalence came about. It was therefor, in my opinion, easier to "get rid of the Newtonian force of gravity", and just say that the surface of the earth can be considered "accelerating upward" just as the platform. "Accelerating upwards" is just a property of the metric expressed in the coordinate system "at rest" (fixed to the platform, or fixed to the earth surface), so that the corresponding geodesics "bend Within Newtonian physics there is a force of gravity. Within Einstein's GR there is also a force of gravity. The force of gravity in GR is a frame dependant quantity unlike the Lorentz force. There are two classes of forces (1) inertial forces (2) non-inertial forces. Both of which Einstein held to be "real" forces. People who work in Newton physics like to refer to inertial forces to "pseudo-forces." But this is not Einstein's view. You can hold this view, but the concept of "force" is a bit silly in GR. Force is "interaction", and the property of freely moving test bodies is that they don't undergo any interaction: they follow geodesics. Now, if you happen to use a coordinate system in which these geodesics are CURVED, then you will say that, wrt your coordinate system, the freely moving test body undergoes accelerations, but it is a far cry to call that forces. This is like saying that a straight line is curved when looking at it in polar coordinates. The "inertial force" is the ABSENSE OF A FORCE WHICH WOULD BE NEEDED IN THE OPPOSITE SENSE TO HOLD THE FREELY MOVING TEST BODY ON A STAIGHT COORDINATE LINE. Just as the centrifugal force in a rotating frame is the opposite of a force needed to keep a body on a constant coordinate line (in this case, constant radius). And just as gravity, at the surface of the earth, is the opposite of the force needed to KEEP A BODY AT REST (or in "uniform motion") WITH RESPECT TO THE SURFACE OF THE EARTH. If it were not for the surface of the earth, we wouldn't call it a force. It's only a "force" when you want to keep something from doing its natural motion, which is falling down. So I would rather say that Einstein didn't instore pseudo-forces as real forces, but realised that gravity is a pseudoforce which appears when we insist on working in a Euclidean space.
{"url":"http://www.physicsforums.com/showthread.php?t=125867","timestamp":"2014-04-19T15:11:33Z","content_type":null,"content_length":"66675","record_id":"<urn:uuid:897b51aa-729c-4758-a647-fd095fbee7c9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Coppell Prealgebra Tutor Find a Coppell Prealgebra Tutor ...I also hold Master Reading, Special Education, and ESL certifications from Texas in grades Kindergarten through 12th grade. I believe that all children are capable of learning if given the opportunity and the correct method of instruction. I use a hands on approach to assist children in making learning come alive and relevant to them. 21 Subjects: including prealgebra, reading, English, writing ...Most people get through organic by memorization... I am not one of those people! Memorization has always been a weakness of mine. So instead, I learned to figure out how and why certain products are formed in organic. 17 Subjects: including prealgebra, chemistry, geometry, biology ...I am certified by the state of Texas to teach math and science for grades 4-8. However, throughout my years of educational experience, I have worked with students both older and younger than these grade levels. As a student teacher, I spend time in 1st and 3rd grade classrooms. 15 Subjects: including prealgebra, chemistry, geometry, algebra 1 ...I continued tutoring throughout my college days where I obtained 2 B.S. degrees in chemistry and material science. When I was in England studying for my M.S. degree in chemical engineering, I was a teaching assistant to 5 classes of about 25 undergraduate students, each in chemistry laboratory (... 22 Subjects: including prealgebra, chemistry, calculus, physics I am a recent graduate of Trinity University in San Antonio, Texas. Before transferring to Trinity, I attended the United States Naval Academy in Annapolis, Maryland for three years. I was an applied math major at the academy, and finished my math degree at Trinity University. 14 Subjects: including prealgebra, chemistry, ASVAB, SAT math
{"url":"http://www.purplemath.com/coppell_prealgebra_tutors.php","timestamp":"2014-04-20T06:33:38Z","content_type":null,"content_length":"23911","record_id":"<urn:uuid:a35d513b-8eb3-4491-bf4a-c9950a4541b4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: ASCII Text x A. Marzal, E. Vidal, "Computation of Normalized Edit Distance and Applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 926-932, September, 1993. BibTex x @article{ 10.1109/34.232078, author = {A. Marzal and E. Vidal}, title = {Computation of Normalized Edit Distance and Applications}, journal ={IEEE Transactions on Pattern Analysis and Machine Intelligence}, volume = {15}, number = {9}, issn = {0162-8828}, year = {1993}, pages = {926-932}, doi = {http://doi.ieeecomputersociety.org/10.1109/34.232078}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Pattern Analysis and Machine Intelligence TI - Computation of Normalized Edit Distance and Applications IS - 9 SN - 0162-8828 EPD - 926-932 A1 - A. Marzal, A1 - E. Vidal, PY - 1993 KW - character strings; words; normalized edit distance; finite alphabet; hand-written digit recognition; computational complexity; pattern recognition VL - 15 JA - IEEE Transactions on Pattern Analysis and Machine Intelligence ER - Given two strings X and Y over a finite alphabet, the normalized edit distance between X and Y, d(X,Y) is defined as the minimum of W(P)/L(P), where P is an editing path between X and Y, W(P) is the sum of the weights of the elementary edit operations of P, and L(P) is the number of these operations (length of P). It is shown that in general, d(X,Y) cannot be computed by first obtaining the conventional (unnormalized) edit distance between X and Y and then normalizing this value by the length of the corresponding editing path. In order to compute normalized edit distances, an algorithm that can be implemented to work in O(m*n/sup 2/) time and O(n/sup 2/) memory space is proposed, where m and n are the lengths of the strings under consideration. Experiments in hand-written digit recognition are presented, revealing that the normalized edit distance consistently provides better results than both unnormalized or post-normalized classical edit distances. [1] F. Casacuberta and E. Vidal,Reconocimiento Automático del Habla. Barcelona: Marcombo, 1987. [2] J. Di Martino, "Dynamic time warping algorithms for isolated and connected word recognition," inNew Systems and Architectures for Automatic Speech Recognition and Synthesis(R. De Mori and Y. Suen, Eds.). Berlin: Springer Verlag, 1985. [3] K. S. Fu,Syntactic Pattern Recognition and Applications. Englewood Cliffs, NJ: Prentice-Hall, 1982. [4] P. A. V. Hall and G. R. Dowling, "Approximate string matching,"ACM Comput. Surveys, vol. 12, pp. 381-402, 1980. [5] Y. Kitazume, E. Ohira, and T. Endo, "LSI implementation of a pattern matching algorithm for speech recognition,"IEEE Trans. Acoustics Speech Signal Processing, vol. 33, no. 1, pp. 1-5, Feb. 1985. [6] A. Marzal and E. Vidal, "On the computation of normalized edit distances revisited," Tech. Rep. DSIC-II/15/1991, Depto. de Sistemas Informáticos y Computación, Univ. Politécnica de Valencia. [7] W. J. Masek and M. S. Patterson, "A faster algorithm computing string edit distances,"J. Comput. Syst. Sci., vol. 20, pp. 18-31, Feb. 1980. [8] L. Rabiner and L. Levinson, "Isolated and connected word recognition--Theory and selected applications,"IEEE Trans. Commun., vol. C-29, no. 5, pp. 621-659, 1981. [9] H. Rulot and E. Vidal, "Modeling (sub)string-length based constraints through a grammatical inference method," in NATO ASI Series,Pattern Recognition Theory and Applications(P. A. Devijver and J. Kittler, eds.), New York: Springer-Verlag, 1987, vol. F30. [10] D. Sankoff and J. B. Kruskal,Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison. Reading, MA: Addison-Wesley, 1983. [11] P. H. Sellers, "The theory and computation of evolutionary distances: Pattern recognition,"J. Algorithms, vol. 1, pp. 359-373, 1980. [12] E. Vidal, F. Casacuberta, J. M. Benedi, M. J. Lloret, and H. Rulot, "On the verification of triangle inequality by dynamic time-warping dissimilarity measures,"Speech Commun., vol. 7, pp. 67-69, [13] R. Wagner and M. Fischer, "The string-to-string correction problem,"J. ACM, vol. 21, pp. 168-173, 1974. [14] Y. P. Yang and T. Pavlidis, "Optimal correspondence of string subsequences,"IEEE Trans. Patt. Anal. Machine Intell., vol. 12, no. 11, pp. 1080-1087, Nov. 1990. Index Terms: character strings; words; normalized edit distance; finite alphabet; hand-written digit recognition; computational complexity; pattern recognition A. Marzal, E. Vidal, "Computation of Normalized Edit Distance and Applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 926-932, Sept. 1993, doi:10.1109/ Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tp/1993/09/i0926-abs.html","timestamp":"2014-04-24T17:03:55Z","content_type":null,"content_length":"53460","record_id":"<urn:uuid:6b2aa447-6100-4f46-a870-857fd639093c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
T-Mobile G1 teardown In our Dev Phone 1 excitement last week, we somehow overlooked phoneWreck’s teardown of the T-Mobile G1. The complex slider mechanism is certainly worth looking out. One of the major oddities they point out is the inclusion of two vibration motors. One is mounted next to the SIM on the mainboard. While the other is mounted in the frame next to the earpiece. We wonder what was gained/solved by using two. The phone also includes a digital compass module. We’d like a more detailed explanation of how the Xilinx CPLD is used. From this article in 2006, it seems HTC uses them to generate custom clock signals and switching off devices for power management. 1. charlie says: surprised about the compass. those can be pretty expensive. neat article. 2. omikun says: From my experience with compasses they do not work very well inside buildings… or just my engineering building… 3. john says: Why two? How about realigning the resulting vibration as the sum of the two vibration vectors, preserving the integrity of the device. 4. Sikiş Videoları says: 5. very much 6. Sikiş says: good ? How about realigning the resulting vibration as the sum of the two vibration vectors, preserving the integrity of the device 7. Türk Porno says: ? How about realigning the resulting vibration as the sum of the two vibration vectors, preserving the integrity of the device 8. Türk Porno says: How about realigning the resulting vibration as the sum of the two vibration vectors, preserving the integrity of the device 9. Türkçe Porno says: ? How about realigning the resulting vibration as the sum of the two 10. Frogz says: how realigning sum…what???!! 11. Digitürk says: realigning the resulting vibration as the sum of the two vibration vectors, preserving the integrity of the device Leave a Reply Cancel reply Recent comments • rexxar on Re:load Pro, an Open Source Active Load • tekkieneet on Re:load Pro, an Open Source Active Load • rue_mohr on Re:load Pro, an Open Source Active Load • Chris C. on Re:load Pro, an Open Source Active Load • foxxpup on A 3D Printed Cryptex In case you missed it
{"url":"http://hackaday.com/2008/12/19/t-mobile-g1-teardown/","timestamp":"2014-04-17T01:47:19Z","content_type":null,"content_length":"80980","record_id":"<urn:uuid:54b04c26-3cb4-485d-852f-a653a428c3c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Belmont Hills, PA SAT Math Tutor Find a Belmont Hills, PA SAT Math Tutor ...At college level, he has tutored students from the Universities of Princeton, Oxford, Pennsylvania State, Drexel, Temple, Phoenix, and the College of New Jersey. Dr Peter offers assistance with algebra, pre-calculus, SAT, AP calculus, college calculus 1,2 and 3, GMAT and GRE. He is a retired Vice-President of an international Aerospace company. 10 Subjects: including SAT math, calculus, algebra 1, GRE ...I am a sociology/anthropology major at Swarthmore College. I have read numerous anthropological works. Moreover, I have done original ethnographic work in Costa Rica and am currently writing my thesis in anthropology. 32 Subjects: including SAT math, English, reading, writing ...I can present the material in many different ways until we find an approach that works and he/she really starts to understand. Nothing gives me a greater thrill than the look of relief on a student's face when he/she actually starts to get it and realizes that it isn't as difficult as was previo... 19 Subjects: including SAT math, calculus, econometrics, logic I'm a retired college instructor and software developer and live in Philadelphia. I have tutored SAT math and reading for The Princeton Review, tutored K-12 math and reading and SAT for Huntington Learning Centers for over ten years, and developed award-winning math tutorials. 14 Subjects: including SAT math, geometry, GRE, algebra 1 ...I focus on the area(s) where the student needs help guiding his/her progress while encouraging the student to learn and solve problems independently. It is my goal to make sure each of my students feels safe, comfortable, and appropriately challenged. As I hold myself to the highest standards, it is my policy to make sure that each student is completely satisfied with their tutoring session. 21 Subjects: including SAT math, reading, writing, chemistry
{"url":"http://www.purplemath.com/Belmont_Hills_PA_SAT_math_tutors.php","timestamp":"2014-04-21T02:20:45Z","content_type":null,"content_length":"24472","record_id":"<urn:uuid:376d7492-485e-4dbe-bea2-6c44e8ed5a49>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
RTI and Mathematics October 20, 2011 10:00 AM - 11:00 AM • Chuck Gameon, David Allsopp About this Talk This Talk has now concluded. Scroll below for questions and answers. Please take a few moments at the completion of this event to give us your feedback by taking our survey! Implementing RtI in mathematics presents unique challenges and new questions. Join David Allsopp and Chuck Gameon as they explore the application of multi-tiered systems of support in mathematics and answer your questions about key issues including effective instructional practices for interventions, strategies for screening and progress monitoring, and criteria to think about when selecting intervention curricula. They will also offer examples to illustrate the application of effective RtI practices that increase student achievement in Sue Jenkins Do you have suggestions for where to find the time for Tier 3 students to get interventions? Chuck Gameon Sue, finding time for Tier 3 intervention in reading or math can be problematic. You cannot make more time in a school day, so you have to try to look at the amount of time you have critically. It comes down to a matter of priorities. In our school we have maintained that reading and math are the two most important academic areas. Because of this, every other time slot (science, social studies, etc.) can be sacrificed to some degree to help a student achieve academic targets in reading or math. We try to avoid pulling students from library, PE, music, or their recess time. You can also look to before school or after school for interventions. Students needing Tier 3 supports may require a before school intervention period and an additional intervention time during the school day. I feel this is a school-based decision where everyone needs to provide input and be willing to support the decision of the staff. Donna Raspa Do you have any suggestions regarding websites to locate scientifically researched based math interventions? Chuck Gameon When looking for websites to review research-based math interventions you need to know about the What Works Clearinghouse . This website is a part of the U.S. Department of Education’s Institute of Education Sciences. It reviews and reports on existing research on programs for math (as well as other content areas, including behavior). What Works Clearinghouse has established rigorous standards to judge the research on programs and will give ratings to programs based on their effectiveness. Another website is Best Evidence . This sight also reviews programming for effectiveness. We have instituted an intervention period during the school day to provide more targeted instruction for children falling behind their peers in math skills. Our Special Education students continue to fall farther behind. Any advice? David Allsopp Ron, I am glad to know that you are providing more targeted instruction for your struggling students. However, it is difficult to answer your question without knowing more specifics like: What are the disabilities that students who fall behind have (e.g., learning disabilities, ADHD, emotional/behavioral disabilities, ASD, intellectual disabilities, physical disabilities, sensory impairments, etc.)? With what mathematical areas are they having the most difficulty? To what extent are research supported mathematics instruction practices being used for these students? With what particular mathematical concepts/skills are they having difficulty? These are just a few important considerations. I would recommend that you visit MathVIDS website to learn more about research supported effective mathematics instructional practices for students with disabilities (and other struggling learners). Katie Flowe Do you have any suggestions on how to set appropriate goals when using math concepts and applications to progress monitor? Chuck Gameon Katie, that is a very good question and one that is important when problem solving with students. You always want to set appropriate goals for the students. When you are problem solving, you need to remember the goal of everything you are doing for students at Tier 2 and Tier 3—to get them to reach grade level benchmarking targets (to achieve proficiency of math content). It is important to consider their present level of performance and the time frame you have to achieve the goal when setting appropriate goals so they are not too ambitious and are achievable. Another piece of the puzzle comes with experience. You will get an idea over time how much you can expect students to grow when provided with intervention support. This helps to set appropriate goals for progress monitoring. Since your question specifically asks about math concepts and applications (M-CAP) I am assuming you are using AIMSweb probes. A good place to start is to look at the AIMSweb norm chart. You will find percentile scores from all of the students who have taken the M-CAP probe. You can use the 50th percentile as a goal for your progress monitoring. What are good interventions for students who have trouble with math word problems? David Allsopp In order for students to have success solving word problems, they must be able to do a number of things including reading and comprehending the text of the word problem, recognizing the mathematical situation represented within the word problem (e.g., addition, subtraction, multiplication, division, etc.), identifying the relevant, and irrelevant information, applying a process or procedure for solving the problem, and being able to “estimate” the reasonableness of their answer. Of course, students also must have an understanding of and proficiency with the mathematical concepts, processes and procedures that a particular word problem addresses. So, first, a good diagnostic assessment should be done to evaluate students’ abilities to engage in these activities. Then, any intervention would need address whatever area or areas with which students are having difficulties. The most promising approach to teaching students to solve word problems is explicitly and systematically teaching them metacognitive strategies that assist them in learning how to engage in the thinking aspects of word problem solving. Montague and her colleagues at the University of Miami (FL) have done quite a lot of work in this area where they focus on helping struggling learners develop schemas for solving word problems including the use of graphic organizers. A number of research supported mnemonics have been shown to result in success as well. For example, the FASTDRAW strategy (see Mercer & Mercer, 2007 – Teaching Students with Learning Problems or the MathVIDS website ) provides an explicit structure for students to determine what needs to be solved, the important information, and setting up an equation (FAST) and them for how to solve the equation (DRAW). The STAR strategy is another example (see Maccini & Gagnon, 2006). Sharon LeBlanc What software is available for diagnostic testing and progress monitoring for math? Chuck Gameon Diagnostic testing or progress monitoring software/programs for math are available and the options continue to grow. Here are some computer-based options for diagnostic assessments that I know of: Diagnostic Online Math Assessment (DOMA) and Stanford Diagnostic Mathematics Test (SDMT). Math Access is advertised as a benchmarking, progress monitoring, and diagnostic assessment program. Progress monitoring programs: Easy CBM and Yearly Progress Pro, both are on-line progress monitoring programs. In reviewing some of the math assessment (progress monitoring or diagnostic) programs that are available, one program appears to be very interesting to me (although it is not a computer-based program). This program is Key Math 3. It requires 1 on 1 administration and it is for students in K-12. It takes 30-90 minutes to administer and it is designed to assess understanding and application of critical math concepts and skills from counting through algebraic expression. What it does offer is an instructional piece that aligns to the diagnostic assessment. It supports K-6 skill level with 30/40 minute lessons. It offers assessments for readiness and mastery and recommends either long-term supplemental support or you can choose a particular topic for targeted instruction and can be done with an individual student or small group. Kathy Kane What are some really good progress monitoring tools for middle school math? I understand the need for normed referenced tools for specific learning disabilities but what are some effective easy, tools for Tier 1 and Tier 2 when the student will probably not end up identified for special education? David Allsopp Kathy, I think that a lot of folks are finding themselves in the same quandary! A central tenet to RTI is implementing more intensive instruction for students who need it that is supplemental to the core mathematics curriculum. So, I agree with your belief that students who are below state benchmarks should not be removed from receiving instruction in the Tier 1 core curriculum. With respect to specific interventions, particularly at Tier 2, it is important to first ensure that these interventions target the foundational concepts (big ideas) that under gird the scope and sequence of concepts/skills that any particular core text or program emphasizes for any particular grade level. The National Council of Teachers of Mathematics (NCTM) Focal Points are a helpful aid for determining what these big ideas might be for any particular grade, grade range, and corresponding core text. Students who struggle oftentimes do so because they lack true conceptual understandings of the various mathematical operations they are asked to compute (e.g., that 2 x 3 = 6 really can be described as two groups of three totals six; ½ x ¼ = 1/8 can be described as one-half of one-fourth is one-eighth). Students can visualize these understandings by using materials to represent the meaning of mathematical representations (e.g., draw a line down the middle of an one-quarter circle piece and compare the area of one side of the divided one-fourth circle piece to an one-eighth circle piece). When students are able to visualize and describe the underlying concepts of the mathematics they do, then they build a strong foundation for problem solving and making meaningful cognitive connections among different mathematics concepts and skills. Therefore, the second important aspect of Tier 2 and 3 interventions is to ensure students are provided explicit and systematic math instruction utilizing practices that highlight the conceptual meaning of targeted big ideas that correlate to the core curriculum (e.g., concrete-representational (drawing)-abstract sequence of instruction, language experiences/verbal expressions, use of graphic organizers, teaching mathematics strategies, etc.). Laura Jones What process do you use to group students for Tier 2 and Tier 3 intervention? Chuck Gameon Laura, students are grouped for instruction based on data. You start the initial steps to this end with the benchmarking data. Your benchmarking program may give you enough information to begin your groupings but, typically, you will need to do some diagnostic assessments. Depending upon your programming options you could also do placement tests, either from your core math program or from optional intervention programs. If you are going to group your students for instruction you need to group them by skills they are missing. Don’t just group kids to group them. How can we help students get "caught up" once they've been left behind? Chuck Gameon You can help any student get “caught up” through a systematic plan—that’s really what RtI does for children. The first step would be to identify where they are performing. You must establish an instructional level and identify skills they have in relation to grade-level expectations. Once this is established you plan the educational programming for the student—the intervention. Within an RtI model it will be a research-based intervention that is specifically aligned to the student need. The staff member who will deliver the intervention will have been trained so they can offer this intervention to a high degree of fidelity. The plan will include a measurable goal, the programming, how often it will happen (eg. daily for 30 minutes), and how you will be monitoring the student growth (progress monitoring). The plan will be reevaluated every 4-5 weeks for effectiveness by examining the progress monitoring and fidelity data. The intensity of the intervention will be determined by the gap between the student and his/her grade level expectations. All students will need 60-90 minutes of quality math instruction for their core programming and the student who needs to “catch up” will need more. Obviously, there is a lot more to creating a system where this is embedded within all that you do in your school. The RtI process creates a system where student aren’t “left behind” because of the constant monitoring of student data. Is there ever a time to 'stop' interventions for high-school-age students and 'hand them' a calculator so they can learn the higher math and get the skills they need for the high stakes tests (most of which ALLOW calculators)? David Allsopp Julie, great “stop intervention” question! My initial response is that at the high school level, if the only reason a student is receiving Tier 2 or 3 intervention service is that they have difficulties with performing basic computations, then I wonder what is the focus of the intervention. If the focus is to make them fluent in memorizing basic facts or performing algorithms in written form only, then I would question the necessity of these things given their age and grade level, particularly if this is at the expense of students getting exposure to mathematics that is critical for them receiving a regular diploma (e.g., Algebra 1 & 2; Geometry). The most important consideration, in my opinion, is whether or not a student has the conceptual understandings of the mathematics that underlie the procedures that we often mistake for mathematical proficiency for K-12 mathematics. Let them use calculators if the only barrier for them is the algorithmic aspect of mathematics! However, if they don’t conceptually understand the mathematics behind the algorithm, then interventions should focus on these things, not recall of tacit information like the sum of a certain operational fact. How do you accelerate students while keeping them in their core class? Chuck Gameon Accelerating students in the regular classroom can require some extra effort by the regular education teacher. The first thing that must be addressed is the grade-level skills. Do you have a system that can assess whether this student has mastered the math skills that the other students are expected to have at that grade level? If not, you will need to do this. Oftentimes, math programs will have unit assessments or even end-of-the-year tests. If this student can pass these tests, then they should go on to the next year’s curriculum. You can repeat the process until you establish the student’s instructional level. Once this is done it is actually easier to address their specific learning needs. Like reading, math instruction should be within a small group setting for at least a portion of the math instructional block. This student, or others who have similar needs, can have small group instruction at their level. Rotating students through learning centers (systematically planned and implemented with a high degree of fidelity) within your classroom is another effective instructional practice that allows you to provide appropriate instruction to students who are “above grade level.” Differentiating instruction is yet another option to use for accelerating instruction. Differentiating instruction is a little more complex as it takes quite a bit planning effort. When looking at differentiating instruction, if you remember to focus on the content, product, process, or pace of the instruction it makes it a little easier. Where do I begin with progress monitoring at the middle school level? Over 75% of our students are below grade level in math. Chuck Gameon If 75% of the students in your middle school are below grade level, then you do not want to start with progress monitoring. You will need to start by looking at your core math program and what is offered every day in your math classes. If 75% of the students do not have the skills they need, you need to do something different for all kids. Instruction at the core level must also be addressed: □ Are students engaged in instruction? □ Does the teacher plan effectively? □ Is the program a research-based program? □ Have teachers been trained on the program? □ Is there any fidelity to the program? The list of questions could go on. The point is, look at where you are now and plan how you can systematically change to better meet the needs of the students. You definitely want to discuss the performance of your students with the 6th grade teachers (not assigning blame). In terms of school improvement, you always get “the most bang for your buck” by improving Jodie Giannakopoulos What are some Tier 2 and Tier 3 math interventions used with students in grades 4 through 6 that do not require additional staff? Chuck Gameon Jodie, when looking at interventions for students who need intensive supports, Tier 3, I believe it is imperative that they have interactions with staff members. You want them to receive the highest quality instruction from the most capable staff. This instruction needs to be systematic and explicit, which relies on a person to execute. The only option for Tier 3 interventions that does not require a staff member would be computer-aided instruction (CAI). This would be effective to some degree if you can find a program that is capable of meeting the specific needs of your students needing Tier 3 supports. There are research-based CAI programs available, however they are very costly pieces of software. I am thinking if you can’t afford to have extra staff work with children, then it may be cost prohibitive for you to purchase this software. Tier 2 students can benefit from CAI programs as well, but again, the cost. They can benefit from other software programs that are designed for practicing math skills as long as there is instruction before they go and practice on the computer. These programs are much more affordable. The next possible way to provide appropriate instruction for students needing Tier 2 supports would be to look at the core programming structure. If students are grouped for instruction, then they can receive appropriate instruction at their level. If you also incorporate learning centers for these students you can provide aligned practice opportunities as well as increasing the total number of times a student can interact with the concept. One last part to this would be looking at pre-teaching and re-teaching for the Tier 2 students from the core content. Tina Cole I am teacing math intervention for first grade. Students are working to correctly identify the numbers 11 to 20. The 11, 12, 13, 15, and 20 are difficult for them...mostly 12 and 20. Any ideas? David Allsopp Tina, without knowing more about the students it is difficult to provide specifics. However, here are a few ideas: Since identifying numbers is related directly to the number sense concept of magnitude or quantity it is important that students can ”visualize” the relative quantitative differences among numbers. So I would suggest doing number work with them where they use materials to build their number sense as it relates to quantity. For example - Is a group of ten teddy bear counters more or less than a group of twenty? How do you know? How many more teddy bears are in the group of twenty (count up strategy)? As you do this, use language cards with both the written number name and number symbol to explicitly associate different numbers of teddy bear counters (or any other appropriate counting objects). Students can practice placing these language cards next to the groups of objects, same the number, and then talk about why they think the number is the same as the number of objects in a group. Using number lines are also very helpful. Students can use counting objects to practice/ play instructional games with number lines where they place them along the number line to show different quantities from 1-20. Also they can start with a quantity (e.g., 3) and count up and back to reach another number (student counts up 5 to reach 8). As students do this, ask them how many more or less the target number is from the number at which they started (e.g., I counted five more to get to 8 from 3). One thing that young students often have difficulty with when it comes to identifying numbers greater than ten, like eleven through twenty, is that the language we use does not correspond to the quantity they represent. Interestingly in some languages they do. We use “eleven” for the actual quantity of “ten and one.” Our traditional number system represents a base ten process but the language we use does not in some instances. This is difficult for students oftentimes. Again, we can help students by helping them understand the words/language we use. So, for example, you can help students learn “other” ways to say eleven (ten and one), twelve (ten and two), etc. Then, students can play instructional games matching groups of objects to the other way to say the number and then the traditional way to say the number. So, a student may have a group of fourteen objects. They count the objects, choose the language cards that match. As they do this, students can be taught to count up to ten first and then count up from there. Again, this is all trying to help them make conceptual sense of quantity and to help them begin to understand how our base ten system operates. Overall, it is really important to help student make sense of the abstract representation of numbers to meaningful experiences by explicitly associating them to meaningful representations (e.g., materials, language, drawings) and allowing them multiple opportunities to work with these associations. Diane Cotton There are a myriad of researched based interventions for reading, but they are not as easy to find for math. Where can we get these interventions without having to buy a canned program? Chuck Gameon Diane, you are right. There are some research-based interventions for math available, but they are very few in comparison to reading. Remember, within an RtI model we also talk about “research-based curriculum and instruction.” Instruction as a basis for intervention is sometimes overlooked as a research-based practice. I have always maintained that good teaching is good teaching, no matter the content but there are research-based instructional practices specific to math. Before that, there are some interventions that you can do to intensify instruction. For example, change the group size. If you create a small group for instruction you are increasing the response opportunities and, most likely, the engagement of the students. Another example is to pre-teach or re-teach from the core program. This gives the student a longer period of time during a school day to attain the objective that you want. Effective instructional practices will go a long way in helping all students and, when intensified with a small group setting, will be even more powerful. There are a number of specific instructional strategies for math that David could address in his response. David Allsopp Diane, the term “research based intervention” can be difficult to interpret, as I am sure you are aware. To be honest, there are very few packaged programs that have an evidence base to suggest they are effective. Even those that do, only seem to have a small to moderate effect. So, thinking about mathematics interventions as instructional practice is a much more promising approach in my opinion. That is, integrating the use of research supported mathematics instruction practices that are targeted to address both the mathematical content and learning needs of individual students. Examples of such practices are: use of Explicit Systematic Instruction that incorporates concrete-representational-abstract sequences of instruction, teaching problem solving strategies, explicitly connecting language to mathematics, using graphic organizers to help student connect mathematical ideas, providing students multiple opportunities to apply new understandings in order to develop proficiency, engaging students in multiple ways to express what they understand and can do (i.e., communication, representation, problem solving, connection-making, reasoning/proof), etc. The MathVIDS website provides extensive information about these practices including video models and more. Also, check out the resources listed on the webpage for more ideas. Several provide explicit direction for how educators can critically evaluate what they are currently doing and how they can integrate these research supported practices in ways that meet the needs of their students and teaching contexts. Courtney Havens What are effective interventions that can be done at the middle school level? Our data always seems to show that the area of number sense is weak but teachers aren't sure how to address that other than through the use of flashcards. I'm not sure this is best practice.... Chuck Gameon Courtney, I have a question for you—what assessment are you using to tell you your middle school students are weak in number sense? If you will post this I would gladly respond further. A few issues come to mind with your question. First of all, flashcards will not help in any way to develop number sense. Flashcards are a means to try to facilitate the memorization of facts. In my mind number sense is almost the opposite. Here is my school’s definition of number sense: A person’s ability to use and understand numbers: • knowing their relative values (counting, recognizing numeral & quantity) • how to use them to make judgments (comparing & ordering) • how to use them in flexible ways when adding, subtracting, multiplying and dividing (composing & decomposing) • how to develop useful strategies when counting, measuring or estimating. Developing number sense is a process where students are given a variety of opportunities over a period of time to interact with numbers in various contexts using the Get Email concrete-representation-abstract instructional method. The second issue is teachers should not have to come up with interventions or supports for students on their own. RtI is a Updates collaborative effort where you are drawing on the expertise from all staff members. Collaborative problem solving is an essential component of the RtI process and is used as a decision making tool. You can find a list of research-based interventions for math at Sign up to receive the What Works Clearinghouse RTI Action Network . I have referenced in several of my response the importance of effective instructional practices as a means to positively affect student understanding of math. This is the first area I e-newsletter. would recommend you look at as a means of supporting your struggling learners and all students for your core (Tier 1) instruction. Heidi Erstad Can you recommend any specific screening tools for Math or share thoughts about what's key at different stages of development to predict future struggles in Math? David Allsopp Heidi, excellent question! I’ll try to respond to each part of the question separately. Screening tools: The National Center on Response to Intervention provides a great resource for evaluating possible screening tools a school might use for mathematics and other areas. Stages of development: This is a great… but quite complex question. So, I will try and address it as succinctly as I can. There is growing consensus that number sense and algebraic thinking have similar significance to mathematics success as phonemic/ phonological awareness have to reading success. So, development in number sense and algebraic thinking should be closely monitored. Another important aspect of development that I have found very helpful is the manner with which students move from acquisition of understanding to proficiency to maintenance to generalization to adaption (Stages of Learning). Oftentimes we measure students at only one point along this developmental continuum of learning and use that to determine whether or not students “have it” or “don’t have it.” I would suggest that any math RTI process incorporate evaluation practices that assess where students are along this Stages of Learning continuum, particularly as it relates to key foundational number sense and algebraic thinking concepts/skills. By doing this, instruction can be pinpointed to address where students actually are along this continuum for target mathematics concepts/skills. Also, level of mathematical conceptual understanding (concrete, representational/drawing, abstract) is also an important developmental consideration, particularly for foundational math concepts. Assessments that evaluate what students know, understand, and can do at each level of understanding provides educators with important information that can inform instruction (e.g., if a student has difficulty at the abstract level but has some understanding at the representational/drawing level, then instruction can be directed at enhancing their representational level understanding and explicitly associating it to the abstract in order to enhance the student’s abstract level understandings). Whether or not students can “recognize” (correctly choose an example) or “do“ (correctly represent a mathematical situation or complete a mathematical process/procedure) are also important developmental considerations. Oftentimes we think students know nothing about a particular concept/skill because they cannot “do” it. However, sometimes, students can recognize an example when provided choices. Again, such information can be used to inform instruction and make intervention decisions. A last consideration is to make sure that assessments evaluate students’ mathematical thinking. As adults we often assume that our students should think as we do. Well, we know that our brains develop continuously through early adulthood, particularly as it relates to executive functioning, which is part and parcel to critical thinking. Therefore, probing students’ thinking about target math concepts/skills is also an important consideration. Sometimes younger students think differently than adults about mathematics, but not necessarily incorrectly. Kathy Probst I'm looking forward to all the topics mentioned in the description. I am an Academic Intervention Services (AIS) elementary teacher looking to put some RTI practices into place but I'm also in a quandry. I usually reinforce classroom program to help students be successful before they need specific interventions. How do I blend the 2 concepts? My AIS students scored 2's on the NY State math assessment, which requires them to receive academic intervention services. Chuck Gameon Kathy, I feel I have to clarify a few things first in order to answer this question appropriately. The first question is what does “reinforce classroom program” mean? The second is what does 2’s on the NY State Math Assessment mean? Is it a 1-10 scale? Are they categorized as intensive or strategic? If you can respond to these questions I feel I can give you a better answer. I will try to broadly answer your question first. I would take a close look at your programming as an AIS teacher. Are you simply helping the students get their class work done or are you specifically targeting math skills and providing meaningful instruction to lead the students to mastery of those skills? In an RtI model you should be providing instruction through the use of a research-based, effective instructional program using research-based, effective instructional practices. Another issue that may be connected is the core math program that all students should benefit from (unless their instructional level is two or more years less than their grade-level peers) during the course of their regular classroom math period. Is this an evidence-based curricular product? If not, this could be part of the problem. The amount of training a teacher receives in using the program can also affect student learning. I already mentioned the importance of instruction, is this part of the problem? It sounds to me like the Academic Intervention Services piece, which you provide, is part of a system change that aligns to the RtI process. You are trying to prevent targeted/at-risk kids from failing—that’s a big piece of what the RtI process is all about. Mary Kay Glassman Please advise best practices for developing fact fluency and specific curriculum that accomplishes this. Chuck Gameon Mary Kay, I like to compare fact fluency to reading fluency. We all have a good understanding that fluency is related to automaticity of phonics skills (decoding and blending), high frequency word recognition, and vocabulary. Math fact fluency is similar in that we are looking at an application skill related to number knowledge (the relationships of symbol to quantity, relative value in relation to other numbers, etc.). If you want to build math proficiency with facts, then look for a program that builds the foundational skills to understanding numbers. A program that I have some experience with that does this is Number Worlds. The bottom line in making a recommendation concerning products is to identity what it is you want the students to get from a product and find the one that best meets those needs. Mary Kay Glassman Why are spiraling math curricula still so popular in private schools? Please specify best researched curriculum that builds, scaffolds, and teaches math facts to mastery. Chuck Gameon Mary Kay, I do not have any experience in a private school or working with a private school in Montana, so I am not sure why they are popular with those schools. Spiraling curricula has found a place in public schools because it is constantly reviewing previously taught material. The idea of a spiraling curriculum is wonderful as you want your students to have many opportunities to interact with content over time. The idea behind spiraling curriculum is it will help with retention of information/content. As far as best researched curriculum that builds, scaffolds, and teaches math facts to mastery—I will recommend visiting What Works Clearinghouse Best Evidence . Both of these sites review curriculum. I have my own opinions on math curriculum that I don’t feel are addressed by textbook companies. The first and foremost issue is they provide too much and really expect mastery of none. When you examine products, they simple move from topic to topic without teaching so students can fully understand what they are doing. I believe the integration of the Common Core Standards will be big help in this area, as are NCTM’s Focal Points because they give specific direction at grade levels and lead schools and teachers to teach students to understand math as opposed to understanding how to do an algorithm. In my own case, are school was looking to improve math achievement by improving math instruction during the 2008-2009 school year. At that time it was very difficult to find any research on instruction specific to math. I feel the programs available also reflected this fact. It was at that time that we decided we wanted our students to truly understand numbers so they could apply this understanding to solving problems (including math facts). We began to condense our curriculum through a collaborative review process to narrow the focus of math instruction at each grade level within the elementary school. At the same time we researched and found ways to create meaningful learning opportunities for the students using the concrete-representational-abstract method. There was also a book that we read as a learning community by Dr. Allsopp titled, Teaching Mathematics Meaningfully that also helped increase our understanding of math acquisition. We continue on this process of becoming better math teachers as a school. We have seen an increase in math achievement by 60% on our state achievement test. My recommendation for choosing a math product would be to be an informed consumer. Look critically at the instructional design and instructional methods, as well as the content of the different programs. You know the student population of your school. Select the best program to meet the needs of your students and teachers. If you have data on specific areas of math achievement that are problematic areas, find a program where this area of the instructional program is strong. You can also use future data to decide where the math program needs to be improved and target that area. Melisa Cellan Please break down the major skill areas in math (for example, reading has 5 main components). What are some of the best progress monitoring tools for each of these areas? David Allsopp Melisa, first of all, like reading, mathematics heavily involves communication of ideas. Interestingly, the five areas of effective reading practice - phonemic awareness, phonics, fluency, vocabulary, and comprehension – do have relevance to mathematics and can be used to help think about what might be parallel concept/skill areas for mathematics. A parallel to phonemic/phonological awareness in mathematics is the number sense aspect of the number and operations content strand (See the National Council of Teachers of Mathematics (NCTM) content standards). Closely related to number sense is algebraic thinking (See NCTM content standards). These two foundational areas of the K-12 mathematics curriculum are critical areas to emphasize, especially for students who struggle with mathematics. The recent National Advisory Mathematics Panel emphasizes the importance of these areas as the foundation for K-8 mathematics. Without deep understanding of these areas, students will have difficulty with conceptually grasping important concepts/skills related to other important areas like geometry, data analysis and statistics, and measurement (See NCTM content standards). Fluency as it relates to reading involves the accuracy, rate, and prosody of reading words in text. Prosody emphasizes the meaningfulness of the words being read in context. Well, fluency is also an important aspect to mathematical success for students. Like reading, fluency in mathematics also has to do with accuracy, rate, and prosody in context. Too often, the “context” aspect is left off of discussions about mathematical fluency. For example, the term “automaticity” is often used in mathematics. This places all of the emphasis on accuracy and rate. It leaves off the most important aspect – prosody (the meaningful application of target mathematical concepts and skills). Take basic facts for example. It is far more important that students can apply their understanding of facts within mathematical contexts where there use is actually necessary (e.g., using addition facts within double digit addition situations). Vocabulary in mathematics is very important. The relationship that students make between what they see (abstract symbols) and what they understand is intricately tied to language and the meaning that students can communicate (internally and externally) about the abstract mathematical representations. So, it is critical to emphasize language that students can use to communicate what they do and don’t understand about the mathematics they have learned and currently are learning. Importantly, vocabulary development doesn’t start with technically accurate terms for many students. Connecting students’ own vocabulary to technically accurate vocabulary is paramount for struggling learners. Comprehension in mathematics, as with reading, is multifaceted. Comprehension in reading is making meaning out of text. Comprehension in mathematics is making meaning out of and communicating meaning through mathematical representations that occur in abstract ways (e.g., numbers, math symbols, charts, graphs, tables, etc.). Comprehension (i.e., conceptual understanding) in mathematics has to be a primary emphasis across the Pre-K-12 curriculum. Therefore, meaning/conceptual understanding has to be and explicit area of emphasis if we hope to improve mathematical learning outcomes for our students. Now, I have touched on only one-half of the your question. Oftentimes, we are conditioned to think mostly about the “what” aspect of mathematics and forget about the “how” of mathematics. The NCTM emphasizes not only content standards (the “what” of mathematics) but also the “how” (ways of doing) mathematics. Going into this aspect may not be within the scope of this type of forum. However, I encourage you to investigate the Process Standards emphasized by the NCTM – Connections, Representations, Problem Solving, Communication, Reasoning/Proof. As to your second question, What are some of the best progress monitoring tools for each of these areas?, the only response that I can provide is that I do not know of any progress monitoring tools that systematically address the areas that I have described. However, I think that there are several that you might want to investigate and think about adapting: National Center for Response to Intervention Tools Chart: Reading and Math . I would also suggest that you consider developing progress monitoring probes that actually address what your school believes are best for your context and students. Amy Wiley Our elementary building is having difficulty finding time to schedule intervention for math. We have a Tier 2 block in our existing schedule but a lot of the students who are receiving additional assistance in reading (Tier 2 or Tier 3 intervention) also need intervention in math and there is no additional time. We are taking students out of special area subjects and some students are missing science and social studies. I am wondering how other schools are able to schedule intervention times for struggling learners particularly those needing assistance in both reading and math. David Allsopp Amy, this is such an important question. Many are trying to figure out what to do when there is only so much time in the day. To be honest with you, I don’t know of any particular process used by a school that addresses your concern very well, other than lengthening the school day or providing interventions before and after typical school hours. If this is done for every subject then students would be in school all day. And, of course, teachers, administrators, and staff would be expected to extend what they do all day with little or no additional recompense! However, this doesn’t mean that I think that there are not ideas that schools can try. First, I would suggest that grade level teams work closely together to inform each other about students, about what they are really emphasizing (or not), students’ performance, and issues they have from a teaching and learning perspective. My opinion is that commercially packaged curricula are having too much influence on what students experience and what teachers emphasize within and across grade levels. When grade level teams (within the elementary, middle, and secondary levels and between them) communicate and flexibly adjust what is done for students then students have greater potential for success overall. Second, students can benefit from building leaders who flexibly utilize the talents of their faculty and staff. Oftentimes, there is more talent and expertise within a particular school than one might think on first glance. In my opinion, schooling has become way too compartmentalized in nature, whether it be by subject, student, or teacher. When we begin by dividing a total number of students by number of faculty members hired (teacher units), then we have already set ourselves on a path that is very difficult to change. A teacher is assigned to teach a subject or subjects and based on that subject or subjects they are allotted a certain number of students “to teach.” Oftentimes, little thought is given to what each student actually needs! What begins to happen is that individual students either find success or they don’t by per happenstance in many cases. The effective use of universal screening and progress monitoring is critical to changing the emphasis – from populating student schedules based on teacher units (“teacher unit “need) to scheduling students based on “student need” first. Third, the APPROPRIATE use of collaborative teaching structures can help. There are a lot of different ways that administrators, faculty, allied professionals, and support staff can work collaboratively to provide truly differentiated instruction/learning experiences for students. If we first thought about what is it that our students need with regard to learning generally, and learning subject specifically, then maybe we could more effectively and efficiently work out structures where students can be provided the differentiated instruction that they need (and deserve). Fourth, continuous professional development has to be of paramount importance if we are ever going to be able to truly affect mathematics outcomes for students, particularly students who struggle. Kate Gearon What is your suggestion for progress monitoring the Test of Early Numeracy. There are four categories in that test so that is a lot of instructional time spent on monitoring. Chuck Gameon Kate, when you are looking at progress monitoring using Test of Early Numeracy (TEN) you should be focusing on one specific piece at a time. If a child is low on all four areas, start with one for the targeted area. Set a specific, measurable goal and plan an intervention to meet the goal. Align your progress monitoring to that goal and move forward. By doing this you are focused and your progress monitoring is only 1-2 minutes per week. Once you have achieved the goal by analyzing progress monitoring data, move on to the next low area. It is very easy to get too many interventions going for a student who is struggling in all areas. We have found it to be much more beneficial to stay focused to one specific area. Anne S. I have a son in NY state in 8th grade diagnosed with ADHD/Aspergers/NVLD/Dysgraphia. Math is a huge struggle due to slower processing and his difficulty with writing and his ability to line up problems. What technology if any, is available to help him with this issue currently in the classroom and with regard to state testing? David Allsopp At the 8th grade level, there really should be little need to require your son to do mathematics using only “paper and pencil.” Using either a handheld calculator or a calculator “app” (e.g., on a laptop computer, iPad, or smartphone) would address the difficulties that your son experiences with lining up numbers, etc. Also, there are a growing number of Internet sites that provide ways for students to input information for calculations, data analysis, graphing, and other purposes. An interesting site that you both might want to check out is the National Library of Virtual Manipulatives . It provides virtual manipulative experiences to help students better understand mathematics including number and operations, algebra, measurement, geometry, and data analysis & probability. When your son does need to use paper and pencil or “worksheets,” a simple accommodation can be made to lessen the spatial processing demands by reducing the number of problems on a page. Also, a simple “math window” can be made out of tag board where an area is cut out in a rectangular or square shape and placed on top of a worksheet. Then the student only sees what is in the cut out area. They can focus on those problems and then when finished slide the “window” to the next column or row of problems. With respect to state testing, I would suggest that you and your son visit his IEP to evaluate whether or not appropriate accommodations are identified for him for state testing purposes. This is a required section of the IEP. At a minimum, he should be getting extra time and the option of taking the test in an alternative setting. Also, most states allow use of calculators, particularly at the middle/secondary level. If he is not receiving appropriate accommodations then, I strongly encourage you and your son to request them. Hope these ideas are at least of some help. Julie Zollinger Who facilitates/provides your interventions for Tiers 2 and 3? Chuck Gameon Julie, Tier 2 and 3 interventions in a school can be provided by a variety of staff members. Ideally, Tier 3 interventions are provided by the more qualified staff (more training and expertise in teaching). These interventions are also typically more structured or scripted, which allows staff members who do not have the expertise to facilitate the interventions after training on the program. Anyone who is available and can be trained is utilized as an interventionists in my building. I will give you an example: We have a reading block in the morning and have no special classes (PE, music, library) during the block. I have utilized my librarian and my PE teacher to provide instructional support and interventions for classrooms during the mornings. Here’s a quick run-down of possibilities: Title I, paraprofessionals, aides, principal, special education teachers and aides, secretaries, etc. I hope I am making the point that, when you develop an RtI system in your school, everyone is involved and part of the solution for kids. I can’t answer this question without coming back to fidelity. You cannot give training on a program and then just let the staff member go unsupervised. You need to plan on spending time refining their skills with the intervention program to help them increase their expertise in delivery. Vicki Norris What interventions would you use for students who struggle with dyscalculia? Would "touch math" be a viable option, especially for younger children with this type of learning problem? David Allsopp Vicki, let me respond to the touch math question first. Touch math is a program that can assist some students to understand quantity of number and perform basic computations. It really incorporates kinesthetic and tactile experiences at a representational (drawing/picture) level of understanding. A drawback to this is when students are never scaffolded to full understanding and skill at the abstract level. In other words, students oftentimes continue touching points on numbers for years and years. So, if touch math is used, it needs to be used appropriately (for its purpose) which is to help students transition from concrete understandings of number and operations to abstract. With respect to interventions for dyscalulia and other learning disabilities, I am aware of only a few commercial programs and they really only address basic operations (addition, subtraction, multiplication, division) including fluency building. One is Great Leaps Math (http://www.greatleaps.com/) and the other is the Strategic Math Series (Peterson Miller and Mercer – Edge Enterprises, Inc.). Hot Math (Fuchs & Fuchs – Vanderbilt University) is also something you might want to investigate. You might also want to get the book "Teaching Mathematics to Students with Learning Disabilities" by Bley and Thornton (Pro-ed). This book has very good ideas for students with dyscalculia in particular. Please see responses to other questions that relate to effective mathematics interventions/instruction for additional information. James Carter How does RTI look in a math class? Chuck Gameon James, it really depends. You have to remember RtI is a process, not a program. You will create a system of support for students using research-based programs with quality instructional practices to a high degree. You will monitor student growth and analyze data constantly. This process continues. RtI is founded on 8 essential components (in Montana, some states less, but all about the same) that must be address in this process. The flexibility to make it your own system is built in as you are creating a system FOR YOUR STUDENTS and your teachers. It looks like good teaching and students learning every day. In reading, I think that we are getting better with screening and progress monitoring across the Tiers. I am not finding that to be the case in math. What are some important considerations a school-based leadership team should consider when trying to implement Tier 2 progress monitoring in math? In reading, we are working with CBM (DIBELS Next) for Tier 2 monitoring once to twice a month. What type of progress monitoring at Tier 2 works best? CBM? Chuck Gameon Mark, an important consideration would be what do you need your progress monitoring to do? What data do you need? Does the progress monitoring align to our instruction? Find the system that best meets your student needs. Curriculum based measures are the norm for progress monitoring. The one that works best is the one that works for you and your students. Becky Frankel Do you have a suggestion for a free and/or easy program to progress monitor Tier 2 and 3? Chuck Gameon Becky, I will address the free program--there is only one that I know of, Easy CBM . Easy to use--that's difficult as I think a number of them are easy to use, it's relative to training. It really comes down to choosing the one that will best meet the needs of your school in terms of usable foramative assessment data for problem solving. Kate Gearon As you look at the RTI triangle, the smallest percentage of you class should be in the Tier 3 catagory. When working in a low performing building where the triangle is actually reversed (most students need Tier 3 supports), how do we adopt this RTI model with so many needy students? David Allsopp Kate, it is likely that if so many students are in need of Tier 3 level supports, then your school might need to critically evaluate what is being done at the Tier 1 and 2 levels. In other words, it could be that the core instruction is not meeting the needs of your students. What benchmark, progress monitoring tools, and interventions are currently used that prove to have an impact on student academic growth? Chuck Gameon There are a number of tools in this category that have had an impact on student growth. Visiting with other schools you will always find a program (or several) that work for their students. The goal is to find what works for your students. Benchmarking and progress monitoring tools can all be reviewed on the Internet. Again, find the assessment tool that works for you. Research-based interventions can be found at What Works Clearinghouse Can you talk about effective progress monitoring tools that have been developed for Math to be used within an RTI setting? Chuck Gameon Sheila, there are a number of different effective progress monitoring tools that are available and they aren't all the same. You decide which tool will measure what you want it to measure. The important piece to the progress monitoring tool is if it is giving you usable data in order to problem solve for your students. There is no single program that works for everyone as you are creating a system that will help your student population in your school. Effective progress monitoring tools are sensitive to student growth in relatively short periods, can be administered consistently and scored consistently by a variety of staff members, results are easy to understand, and it is not too costly. Some schools want to use a computer-based tool, others use a pencil paper tool. Use the progress monitoring tool that works to have a positive impact on student growth. Additional Resources on RTINetwork.org Additional Resources • Allsopp, D.H., Alvarez McHatton, P., Ray, S.N.E., & Farmer, J. (2010). Mathematics RTI: A Problem Solving Approach to Creating an Effective Model. Horsham, PA: LRP Publications. • Allsopp, D.H., Kyger, M.M., & Lovin, L. (2007). Teaching Mathematics Meaningfully: Solutions for Reaching Struggling Learners. Baltimore: Paul H. Brookes Publishing, Co.
{"url":"http://www.rtinetwork.org/professional/rti-talks/transcript/talk/36","timestamp":"2014-04-19T22:05:23Z","content_type":null,"content_length":"80574","record_id":"<urn:uuid:7cd94bce-ffbf-4f99-814e-53a52c16d196>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
The Geometry Page Geometry Page Symmetry has always been attractive to mathematicians, and the most symmetric of all figures are the regular polyhedra, or Platonic solids. A regular polyhedron is defined as a finite polyhedron composed of a single type of regular polygon such that each element (vertex, edge and face) is surrounded identically. In three dimensions there are exactly five such polyhedra which don't intersect themselves, and four more that do. There are many other interesting such figures, many of which are defined by relaxing one or more of the conditions defining regular polyhedra. For instance, the figure above is composed of only regular triangular faces, but it has three types of edges and three types of vertices. (The three types of vertices are surrounded by 4, 6 and 10 triangles.) Click on the following link for more information on deltahedra. The following set of images are of some figures I have found which satisfy most or all the criteria defining regular polyhedra except that they are not finite. In other words, it would take an infinite number of polygons to complete such a figure which would then fill all of space with a latticework. Of course an infinite model cannot be completely constructed, but large enough sections can be built to show their geometry and prove their existence. The famous mathematician H.S.M. Coxeter calls these figures "regular skew polyhedrons" while J.R. Gott, III calls them Regular polyhedra are often represented with a notation called Schläfli symbols which consist of two numbers between curly braces. The first number is the number of sides on each polygon, and the second is the number of such polygons surrounding each vertex. For example, {4,3} is the cube because each vertex is surrounded by three squares. It's perfectly natural to apply this notation to infinite polyhedra too. Some images below are in low-res. Simply select the ones with blue borders to see a higher resolution version. The plastic pieces used to construct the models are called "Polydron" and are the best tools I've seen for exploring geometry. They are hard and smooth, and their hinged edges snap together very The {3,9} is a particularly pleasing figure. It can easily be described as a set of translations of a simple base figure consisting of four open octahedra connected to the four faces of a tetrahedron. The faces of the tetrahedra are not included, as are the faces of the octahedra which are parallel to the faces of that imaginary tetrahedron. The {4,6} is very interesting because it is not rigid even when tiled to infinity. Its movement has three degrees of freedom, one for each of the three coordinate axis. It's difficult to construct this model without it collapsing unless these DOF are locked down first by adding any three non-parallel square "braces" into parts of the lattice where they don't belong. A single such "impurity" in the crystal lattice is enough to entirely lock its flexibility in one direction. Vladimir Bulatov created a wonderful VRML viewer which can be used to view and interactively tile these sort of repeat units in three dimensions. It is currently known to work with the Cosmo 2.1 VRML viewer from SGI and may freeze your browser if you try to use a different one, but it's a really useful feature to be able to change the tiling parameters in order to better understand the structure of these models. You rotate the figure by clicking and dragging on the model, and you change the tiling parameters by clicking on the green pyramids, and change the size of the model with the 3D slider widget. When you have a VRML 2.0 viewer installed, click the following link to see an interactive VRML 2 version of the {4,6} above. There are many other interesting ways to construct a {4,6}. Here is another way which was suggested by Chaim Goodman-Strauss and John Sullivan: The image above is really part of a screen shot taken from the VRML version of that {4,6}. There are really several very intersting classes of {4,6} some of which contain uncountably infinite numbers of members. Chaim Goodman-Strauss and John Sullivan have been studying one such class in which all points in the 3D coordinate system with integer coordinates are used as part of each {4,6}. Here is a VRML 2 {4,5} which is not based on a cubic lattice. I don't have any Polydron hexagons, so the hexagons in models above and below are built from six triangles of the same color. You'll have to imagine that those internal edges don't exist. The {6,6} pictured above can easily be described as a set of translations of the truncated tetrahedron. Note that the spaces between the truncated tetrahedra are themselves truncated tetrahedra, thereby partitioning space into two identically shaped regions. I didn't discover the {6,4}, it was described to me by someone that saw it and the {4,6} described in a book by Coxeter. Like the {4,6}, it partitions space into two identical regions. The three figures above are truly "regular" in that all their edges are identical to the others in each figure. The following figures cannot make that claim and so may be called "semi-regular". The {5,5} is my most recent and surprising discovery. I'd looked in vien for a {5,4} and had all but given up on finding an infinite polyhedra composed of pentagons. Almost as a lark I tried with five at a vertex and very quickly found the above configuration. I would be surprised if there are any others. I later learned that this figure had been previously discovered and published by J.R. Gott in 1967. The symmetry of this figure is similar to that of the {3,10} below though it is less obvious. It contains a repeating zig-zag motif parallel to two of the three major planes. I only had enough pentagonal Polydron to construct the physical model above, so I also created a larger VRML 1 version of the {5,5} which you can view if your browser only contains a VRML viewer or plug-in, or you have a VRML 2 plug-in, you can use the tiling viewer to see the VRML 2 version of the {5,5}. This is only one possible configuration of a {3,10}. Notice that as each new layer is added, there are two possible choices as to how to translate the columns. One way to think of this is to notice how each vertex touches one corner of an octahedron from below and one from above. Looking downward at one of the holes in a layer left by, say, a blue octahedron from below, the red octahedra that share the vertices of that blue octahedron, can be arranged rotated sixty degrees in either the clockwise or counter-clockwise directions. The {3,10} shown above was created with all the red octahedra rotated counter-clockwise from the blue octahedra they sit on top of, and with the blue octahedra rotated clockwise from the red octahedra they are on top of. There are many classes of infinite regular polyhedra with the same layered form as the {3,10} above. Perhaps these classes have an analogous relationship to the general class of infinite regular polyhedra as the prisms and antiprisms have to the finite regular polyhedra. The {4,6} further above is also layered, but it has full cubic symmetry instead of having a "grain" in one dimension. There are many other classes of {4,5}'s and {4,6}'s with the same layered form as the {3,10} above. Linked here is a VRML model of a particularly interesting layered {4,6}. One interesting aspect of this model is that like the first {4,6} above, it is also flexible even when tiled to infinity. The images above and below are presented in cross-eyed stereo pairs for clarity. If they do not appear side-by-side in your browser, you will need to make your window larger so that they can be seen side-by-side for this effect to work. If you've never viewed stereo image pairs without a viewer, it takes some practice but is well worth the effort. For cross-eyed views like these, you need to cross your eyes until one image from each side exactly overlap in the center. You then try to hold their positions steady while you relax your focus until the image becomes sharp. The {3,8} shown here is quite beautiful. It forms the same lattice as the bonds between Carbon atoms in diamond crystal. You may also view a VRML 2 version of this {3,8} which I created using the viewer template that Vladimir used for the {5,5} above. Linked here is another VRML {3,8} which is essentially a cubic packing of snub cubes connected by their square faces leaving only triangles connected 8 at each vertex. The snub cubes are connected in alternating right and left handed versions. The VRML model colors the right handed ones red and the left handed ones green. I found this figure in the book "The Geometrical Foundation of Natural Structure" by Robert Williams although I doubt he realized that the figure constituted an infinite regular polyhedron. So just how many triangles can surround each vertex and still generate a non self-intersecting infinite polyhedron? I have no idea, but higher numbers are possible. Below is a {3,12} which can be generated from four intersecting copies of the {3,8} above shifted so that their hubs coincide (where "hubs" are where the Carbon atoms sit in the diamond model). I can't build that {3,12} out of Polydron because the dihedral angles become too acute but you can view a VRML 2 version of that {3,12}. Here is a different {3,12} which can be made from Polydron, from a vertex figure suggested by Don Hatch. It is particularly beautiful being composed of flat snaking paths of coincident triangles that stretch to infinity. The paths come in four different orientations, each shown here in a different color. The result is an impossible seeming figure that looks very much like the intersecting staircases in M.C. Escher's "House of Stairs" lithographs. Below is a view from the inside of the model (with one yellow triangle removed for clarity), seen as if you're walking along a red path. In this VRML 2 model, paths in each orientation are again rendered the same color but in alternating shades so that the individual triangles can be distinguished. Here's another version of this same VRML 2 {3,12} but with black edges added instead of alternating colors. Here's still another VRML 2 {3,12} from another vertex figure by Don Hatch. Finally, here is a unique and elegant {3,7} shown in stereo. It has the same diamond crystal topology as the {3,8} above, but with icosahedral hubs instead of octahedra: It's interesting to note that there are two types of hubs in this figure, distinguished here by red and green icosahedra where each type is always connected only to hubs of the other type. The differences between hub types are in the arrangement of their struts. You can call a hub either left or right handed in that if you follow an edge that extends straight out from one of the missing triangles which is the base of one strut, the strut adjacent to the other end of that edge is to the left in the green hubs, and to the right in the red ones. Here is the VRML {3,7}. Lastly, there exists a beautiful {3,9} which is something like two intersecting {3,7}s. You can visualize it if you imagine a new icosahedron placed in the center of the large voids such as the model above surrounds. Then imagine connecting all 8 neighboring icosahedra connected to that new one. Of course instead of just imagining it you can see the VRML {3,9}.
{"url":"http://www.superliminal.com/geometry/ogeometry.htm","timestamp":"2014-04-18T11:00:25Z","content_type":null,"content_length":"15725","record_id":"<urn:uuid:0ab31d6c-76cd-4a83-9559-9ef44faf70b6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Jump down to: Web Resources Bibliography of useful books and articles that relate to the use of model-eliciting activities (MEAs) Bransford, J., Brown, A.L., & Cocking, R.R. (Eds.) (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press. Chamberlin, S. A., and Moon, S. M. (2008). How does the problem based learning approach compare to the model-eliciting activity approach in mathematics? International Journal for Mathematics Teaching and Learning.http://www.cimt.plymouth.ac.uk/journal/chamberlin.pdf Diefes-Dux, H.A., Imbrie, P.K., & Moore, T.J. (2005). First-year engineering themed seminar - A mechanism for conveying the interdisciplinary nature of engineering.Paper presented at the 2005 American Society for Engineering Education National Conference, Portland, OR. Kaufman, A., Mennin, S., Waterman, R., Duban, S., Hansbarger, C., Silverblatt, H., Obenshain, S. S., Kantrowitz, M. Becker, T., Samet, J, and Wiese, W. (1989). The New Mexico experiment: educational innovation and institutional change. Academic Medicine 64,285-294. Lesh, R. and Caylor, B. (2007). Introduction to the Special Issue: Modeling as Application versus Modeling as a Way to Create Mathematics. International Journal of Computers for Mathematical Learning 12,173-194. Lesh, R. and Doerr, H. M. (Eds.). (2003). Beyond constructivism: Models and modeling perspectives on mathematics problem solving, learning, and teaching. Mahwah, NJ: Lawrence Erlbaum. Lesh, R., & Doerr, H. M. (2003). Beyond constructivism: Models and modeling perspectives on mathematics teaching, learning, and problem solving. In R. Lesh & H. M. Doerr (Eds.), Beyond constructivism: Models and modeling perspectives on mathematics problem solving, learning, and teaching(pp. 3-33). Mahwah, NJ: Lawrence Erlbaum. Lesh, R. A., Hamilton, E. and Kaput, J. J. (Eds.). (2007). Foundations for the future in mathematics education. Mahwah, NJ: Lawrence Erlbaum. Lesh, R., Hoover, M., Hole, B., Kelly, A., & Post, T. (2000). Principles for developing thought-revealing activities for students and teachers. In A. Kelly & R. Lesh (Eds.), Handbook of research design in mathematics and science education(pp. 591-646). Mahwah, NJ: Lawrence Erlbaum. Moore, T. J., Diefes-Dux, H. A., & Imbrie, P. K. (2006). The quality of solutions to open-ended problem solving activities and its relation to first-year student team effectiveness.Paper presented at the American Society for Engineering Education Annual Conference, Chicago, IL. Moore, T.J., Diefes-Dux, H.A., & Imbrie, P.K. (2007). How team effectiveness impacts the quality of solutions to open-ended problems.Distributed journal proceedings from the International Conference on Research in Engineering Education, published in the October 2007 special issue of the Journal of Engineering Education, 96(4). Schwartz, D. L., & Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2),129-184. Schwartz, D. L., Sears, D., & Chang, J. (2007). Reconsidering prior knowledge. In M. Lovett and P. Shah (Eds.), Thinking with Data(pp. 319-344). New York: Erlbaum Schwartz, D. L., Varma, S., & Martin, L. (2008). Dynamic transfer and innovation. S. Vosniadou (Ed.), Handbook of Conceptual Change(pp. 479-506). Mahwah, NJ: Erlbaum. Zawojewski, J., Bowman, K., & Diefes-Dux, H.A. (Eds.). (2008). Mathematical modeling in engineering education: Designing experiences for all students. Roterdam, the Netherlands: Sense Publishers. Web Resources which provide additional information on MEAs https://engineering.purdue.edu/ENE/Research/SGMM/Problems/MEAs_html - Website with many MEAs created for Engineering students. http://modelsandmodeling.net/ - Another website with information and examples of MEAs
{"url":"http://serc.carleton.edu/sp/cause/mea/references.html","timestamp":"2014-04-16T19:02:55Z","content_type":null,"content_length":"26929","record_id":"<urn:uuid:fd69735e-9886-4ad1-b12d-beeba2f929f8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Equation number Is there any way to number the equations, even manually if automatic numbering is not supported? QuickLaTeX supports equation numbering by native LaTeX rules: - Automatic numbering of displayed equations, e.g. $$...$$, etc. - Custom number for equation can be set using \tag - User can put label for equation by \label and reference formula by \ref further in the text.
{"url":"http://wordpress.org/support/topic/plugin-wp-latex-equation-number","timestamp":"2014-04-17T05:43:47Z","content_type":null,"content_length":"16316","record_id":"<urn:uuid:27854473-0530-4599-a796-4c565350b6c9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Mesa, AZ Math Tutor Find a Mesa, AZ Math Tutor ...Is English a challenge? I shall be happy to tutor your child in the Language Arts as well. (I am not trained to teach English as a Second Language.)The materials I use will keep your child's interest, and your child will learn. I shall take your child from the level (s)he is to where (s)he is supposed to be or even higher with patience and consistency. 7 Subjects: including algebra 1, prealgebra, English, reading ...As Assistant Professor in for a graduate degree program: Taught 5 years of Global Security Affairs, 4 years as instructor for international student program, 5 years instructing international regional & cultural studies, 4 years directing culture and language program. B.A. in History, 1980. Courses in Government. 8 Subjects: including SPSS, elementary (k-6th), special needs, world history ...The last 6 years I taught 4th grade. I can teach any elementary subject. Currently, I am a music teacher at the elementary level. 39 Subjects: including prealgebra, SAT math, piano, algebra 1 ...When I was 5, I tested to a high school level. After taking my entrance test for Mesa Community College, I tested out of reading, meaning I don't have to take any reading classes because my reading comprehension is at or above college level. I have tutored in reading, recently, with much success. 16 Subjects: including prealgebra, English, reading, writing ...I also worked as a quality assurance reviewer for a company creating a Geometry Course for online high schools. I also have an interesting perspective on Geometry; Geometry is Mathematics' answer to the board-game Clue. You have to collect all of the correct information and arrange it appropria... 15 Subjects: including differential equations, discrete math, trigonometry, algebra 2 Related Mesa, AZ Tutors Mesa, AZ Accounting Tutors Mesa, AZ ACT Tutors Mesa, AZ Algebra Tutors Mesa, AZ Algebra 2 Tutors Mesa, AZ Calculus Tutors Mesa, AZ Geometry Tutors Mesa, AZ Math Tutors Mesa, AZ Prealgebra Tutors Mesa, AZ Precalculus Tutors Mesa, AZ SAT Tutors Mesa, AZ SAT Math Tutors Mesa, AZ Science Tutors Mesa, AZ Statistics Tutors Mesa, AZ Trigonometry Tutors
{"url":"http://www.purplemath.com/Mesa_AZ_Math_tutors.php","timestamp":"2014-04-20T16:36:10Z","content_type":null,"content_length":"23490","record_id":"<urn:uuid:ce429d0b-43b1-4307-a69b-c5a66b990165>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Coloured Graph Decompositions Waterhouse, M A (2005). Coloured Graph Decompositions PhD Thesis, School of Physical Sciences, The University of Queensland. Attached Files (Some files may be inaccessible until you login with your UQ eSpace credentials) Name Description MIMEType Size Downloads THE18769.pdf Full Text application/pdf 4.43MB 0 Author Waterhouse, M A Thesis Coloured Graph Decompositions Centre or School of Physical Sciences Institution The University of Queensland Publication 2005 Thesis type PhD Thesis Supervisor Professor Peter Adams Dr Darryn Bryant Total pages 237 Collection 2005 Language eng Subjects 230101 Mathematical Logic, Set Theory, Lattices And Combinatorics 780101 Mathematical sciences Let G be a graph in which each vertex has been coloured using one of k colours, say C[1], c[2], …, c[k]. If a graph H in G has n[i] vertices coloured c[i] i = 1, 2, ... , k, and |n[i] –n [j]| < 1 for any i , j Є {1, 2, . . . , k}, then H is said to be equitably k-coloured. An H-decomposition ɧ of G is equitably k-colourable if the vertices of G can be coloured so that every copy of H in ɧ is equitably k-coloured. In Chapters 2 to 5, we consider equitably colourable decompositions. In Chapter 2, we completely settle the existence question for equitably k-colourable v-cycle decompositions of K[v], for 1 < k < v, and for any prime p we completely settle the existence question for equitably k-colourable i-perfect p-cycle decompositions of K[p], where 1 < k < p and 1 < i < (p - 1) /2. We also completely settle the existence question for equitably 2-colourable m-cycle decompositions of K[v] for m even and m = 5. Furthermore, we show that for all admissible v > 5, there exists at least one 5-cycle decomposition of K[v] which cannot be equitably 2-coloured. In addition, we completely settle the existence question for equitably 2-colourable m-cycle decompositions of K[v] - F and for equitably 3-colourable m-cycle decompositions of K[v] and K[v] - F, where m Є { 4, 5, 6}. We also provide upper bounds on admissible values of v for existence of equitably (m - 1)-colourable m-cycle decompositions of K[v] and K[v] - F. In Chapter 3, we partially generalise our results on equitably 2-colourable even-length cycle decompositions of K[v] - F. Except for the case where v(v - 2)/2 is an odd multiple of m and v = m = 4 (mod8), we show that if the obvious necessary conditions are satisfied then there Formatted exists an equitably 2-coloured m-cycle decomposition of K[v] - F for even m. In Chapter 4, we completely settle the existence question for equitably 2-colourable 3- and 5-cycle abstract decompositions of K[p](n), and for equitably 2-colourable 4- and 6-cycle decompositions of K[n1],[n2], ... ,[np],· In addition, we completely settle the existence question for equitably 3-colourable m-cycle decompositions of K[p]([n]), for m Є {3, 4, 5}. In Chapter 5, we completely settle the existence question for equitably 2- and 3-colourable 3-cube decompositions of K [v], K[v] - F and K[x],[y]· We also consider other types of coloured graph decompositions. Suppose that the vertices of K[v], have been coloured with at most two colours. Let C[1] C[2], . . .C[m] denote the colouring of the m-cycle (x[1],x[2], . . . ,x[m]) which assigns the colour C[i] to the vertex x[i] for i = 1, 2, . . . , m, where C[i] Є {black, white}. We let T be the set of all possible such colourings and we let S C T. If ɧ is an m-cycle system such that the colouring type of every m-cycle in ɧ is in S, and every colouring type in S is represented in ɧ, then we say that ɧ has proper colouring Type S. In Chapter 6, we completely settle the existence question for 4-cycle decompositions with proper colouring Type S for all possible S. In Chapter 7, we completely settle the existence question for 5-cycle decompositions with proper colouring Type S for all S when |S| = 1, and for many |S| when |S| = 2. For the remaining S, where |S| = 2, we determine some necessary conditions for existence of such decompositions. Keyword Graph theory
{"url":"http://espace.library.uq.edu.au/view/UQ:107288","timestamp":"2014-04-18T08:40:02Z","content_type":null,"content_length":"28068","record_id":"<urn:uuid:54de9d97-a1b2-4a4a-a607-628362eb794b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
3. Student Learning Map • Topic:Reasoning • Subject(s):Math • Days:7 • Grade(s):9, 10, 11, 12 Key Learning: Logic and reasoning are used in geometry and the real world to solve problems and reach conclusions. Unit Essential Question(s): Why are inductive and deductive reasoning not interchangeable? Concept: Inductive Reasoning Concept: Deductive Reasoning Lesson Essential Question(s): How do advertisers use inductive reasoning to influence consumers? ( Lesson Essential Question(s): How would you recognize an invalid conclusion? ( Lesson Essential Question(s): How would you recognize an invalid conclusion? ( How do advertisers use inductive reasoning to influence consumers? ( What strategies do we use to develop a direct proof? ( Lesson Essential Question(s): What strategies do we use to develop a direct proof? ( How is indirect reasoning used in the real world? ( How is indirect reasoning used in the real world? (
{"url":"http://publish.learningfocused.com/297366","timestamp":"2014-04-21T13:13:44Z","content_type":null,"content_length":"27530","record_id":"<urn:uuid:fa95a6d2-6465-486a-b0b3-2e284a33760b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Are the data consistent with the assumed process mean? 7. Product and Process Comparisons 7.2. Comparisons based on data from one process 7.2.2. Are the data consistent with the assumed process mean? The testing of H[0] for a Given a random sample of measurements, Y[1], ..., Y[N], there are three types of questions regarding the true mean of the population that can be addressed with the sample single population mean data. They are: 1. Does the true mean agree with a known standard or assumed mean? 2. Is the true mean of the population less than a given standard? 3. Is the true mean of the population at least as large as a given standard? Typical null hypotheses The corresponding null hypotheses that test the true mean, Test statistic where the The basic statistics for the test are the sample mean and the standard deviation. The form of the test statistic depends on whether the poulation standard deviation, standard deviation is not known where the sample mean is and the sample standard deviation is with N - 1 degrees of freedom. Comparison with critical For a test at significance level 1. | t&nbsp| ≥ t[1-α/2, N-1] 2. t ≥ t[1-α, N-1] 3. t ≤ t[α, N-1] where t[1-α/2, N-1] is the 1-α/2 critical value from the t distribution with N - 1 degrees of freedom and similarly for cases (2) and (3). Critical values can be found in the t-table in Chapter 1. Test statistic where the If the standard deviation is known, the form of the test statistic is standard deviation is known For case (1), the test statistic is compared with z[1-α/2], which is the 1-α/2 critical value from the standard normal distribution, and similarly for cases (2) and (3). Caution If the standard deviation is assumed known for the purpose of this test, this assumption should be checked by a test of hypothesis for the standard deviation. An illustrative example of The following numbers are particle (contamination) counts for a sample of 10 semiconductor silicon wafers: the t-test The mean = 53.7 counts and the standard deviation = 6.567 counts. The test is two-sided Over a long run the process average for wafer particle counts has been 50 counts per wafer, and on the basis of the sample, we want to test whether a change has occurred. The null hypothesis that the process mean is 50 counts is tested against the alternative hypothesis that the process mean is not equal to 50 counts. The purpose of the two-sided alternative is to rule out a possible process change in either direction. Critical values For a significance level of Chapter 1). Even though there is a history on this process, it has not been stable enough to justify the assumption that the standard deviation is known. Therefore, the appropriate test statistic is the t-statistic. Substituting the sample mean, sample standard deviation, and sample size into the formula for the test statistic gives a value of t = 1.782 with degrees of freedom N - 1 = 9. This value is tested against the critical value t[1-0.025;9] = 2.262 from the t-table where the critical value is found under the column labeled 0.975 for the probability of exceeding the critical value and in the row for 9 degrees of freedom. The critical value is based on Conclusion Because the value of the test statistic falls in the interval (-2.262, 2.262), we cannot reject the null hypothesis and, therefore, we may continue to assume the process mean is 50 counts.
{"url":"http://www.itl.nist.gov/div898/handbook/prc/section2/prc22.htm","timestamp":"2014-04-20T13:20:07Z","content_type":null,"content_length":"9613","record_id":"<urn:uuid:d5011181-e8c1-431a-8ba4-b65789d367fe>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Reply to comment Submitted by Anonymous on October 17, 2012. As pointed out in the comment above, the analysis of when to offer a double is not correct. Assume that (1) you double when your probability of winning reaches d, (2) your opponent follows the same strategy, and (3) all doubles are accepted (which is the favored strategy if d<4/5). Then it is easy to show (in the Brownian motion model of the authors) that for d<2/3, the probability that your opponent will get an opportunity to redouble (before you win) is >1/2. In this case, an escalating series of doubles is likely, and your expected return is given by an infinite series of growing, alternating-sign terms (which does not converge). On the other hand, for d>2/3, the series converges (to a finite positive value). In this case, offering a double increases your expected return (by a factor of 2), and so is the favored strategy. Thus d=2/3 is the correct threshold for Mark Srednicki Dept of Physics UC Santa Barbara
{"url":"http://plus.maths.org/content/comment/reply/2183/3721","timestamp":"2014-04-20T03:23:30Z","content_type":null,"content_length":"20861","record_id":"<urn:uuid:fbc442eb-5e14-4dbb-ae82-d109c952e9e8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
A 4.80 Kg Counterweight Is Attached Toa Light Cord, ... | Chegg.com A 4.80 kg counterweight is attached toa light cord, which is wound around a spool (refer to Fig. 10.20).The spool is a uniform solid cylinder of radius 6.00 cm and mass 1.10kg. Figure 10.20 (a) What is the net torque on the system aboutthe point (b) When the counterweight has a speed , the pulley hasan angular speed .Determine the total angular momentum of the system about (c) Using the fact that and your result from (b),calculate the acceleration of the counterweight.
{"url":"http://www.chegg.com/homework-help/questions-and-answers/480-kg-counterweight-attached-toa-light-cord-wound-around-spool-refer-fig-1020--spool-unif-q557474","timestamp":"2014-04-19T16:42:45Z","content_type":null,"content_length":"25176","record_id":"<urn:uuid:c86e0724-f041-4fae-8d6c-c9b53e311df7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
I'm trying to find the inverse of a 50x50 dense matrix of rational numbers. The matrix is created using the method matrix_from_rows from another matrix that was created using matrix(QQ,B) from a list of lists of rational numbers B. The error I get is the following: /usr/.../sage-4.7.1/local/bin/sage-sage: line 301: 27416 Killed sage-cleaner &>/dev/null /usr/.../sage-4.7.1/local/bin/sage-sage: line 301: 27417 Killed sage-ipython "$@" -i I'm doing this at the university, so the number of processors and RAM both seem to be infinite. I wonder if there is an alternative. For example, I could try to invert the matrix in Mathematica or Maple. Is there an easy way to interface with them? Thank you! You could interface, but it'd be nice to figure out what the problem is. Could you edit your message to include the smallest matrix you can find which crashes? DSM (Nov 21 '11) Did you install a binary package? Most likely it was compiled with optimization flags that your CPU does not support. Try to compile from source. Volker Braun (Nov 21 '11) Thanks for the comments. I am sorry I did not come back before. (The reason was basically that I did not get notified of your comments by the system.) I do not know how the binary was compiled because I am using the version installed in the university's server. I will try to provide a concrete example, the way Simon King has suggested below. Zatrapadoo (Dec 09 '11) Please provide a concrete example, perhaps by providing a link to a worksheet or so. I just tried (using sage-4.7.2): sage: MS = MatrixSpace(QQ, 50) sage: A = MS.random_element() sage: %time B = ~A CPU times: user 0.23 s, sys: 0.00 s, total: 0.23 s Wall time: 0.38 s So, 50x50 over QQ really doesn't seem to be a challenge, and thus it would be interesting to know what you did exactly. posted Nov 23 '11 Simon King 376 ● 2 ● 11
{"url":"http://ask.sagemath.org/question/931/sage-crashes-when-inverting-a-large-matrix","timestamp":"2014-04-16T19:01:54Z","content_type":null,"content_length":"24104","record_id":"<urn:uuid:54170e51-8119-4b61-b4bb-70e469bbb637>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
50 Multiple Choice Questions on Quantitative Methods 1. 69857 50 Multiple Choice Questions on Quantitative Methods Question 1 In a balanced transportation model where supply equals demand, a. all constraints are equalities b. none of the constraints are equalities c. all constraints are inequalities d. none of the constraints are inequalities Question 2 In a transportation problem, items are allocated from sources to destinations a. at a maximum cost b. at a minimum cost c. at a minimum profit d. at a minimum revenue Question 3 The assignment model is a special case of the ________ model. a. maximum-flow b. transportation c. shortest-route d. none of the above Question 4 The linear programming model for a transportation problem has constraints for supply at each ______ and _______ at each destination. a. destination / source b. source / destination c. demand / source d. source / demand Question 5 An assignment problem is a special form of transportation problem where all supply and demand values equal a. 0 b. 1 c. 2 d. 3 Question 6 The transshipment model is an extension of the transportation model in which intermediate transshipment points are ______ between the sources and destinations. a. decreased b. deleted c. subtracted d. added Question 7 Inventory costs include a. carrying b. ordering c. shortage costs d. all of the above Question 8 In a(an) ____________ inventory system a constant amount is ordered when inventory declines to a predetermined level. a. optional b. economic c. periodic d. continuous Question 9 EOQ is a(an) _________ inventory system. a. periodic b. continuous c. optimal d. economic Question 10 As order size increases, total a. inventory costs will increase, reach a maximum and then quickly decrease. b. inventory cost will decrease, reach a minimum and then increase. c. ordering costs will initially increase while total carrying cost will continue to decrease d. carrying cost decreases while the total ordering cost increases Question 11 If we roll 1 die the probability of any 1 of the 6 possible outcomes' occurring is 1/6. This is an example of a a. subjective probability b. classical probability c. conditional probability d. relative frequency probability Question 12 A _________ probability is the probability that an event will occur given that another event has already occurred. a. subjective b. objective c. conditional d. binomial Question 13 Farmer Green has a herd of cattle. Twenty percent of his herd are bulls. He has three different breeds of bulls--10 are Jerseys, 20 are Holsteins, and 20 are Guernseys. Given that you have selected a bull, what is the probability that the bull is also a Holstein? a. 0.02 b. 0.20 c. 0.40 d. 0.80 Question 14 The events in an experiment are _________ if only one can occur at a time. a. mutually exclusive b. non-mutually exclusive c. mutually inclusive d. non-mutually inclusive Question 15 Bayesian Analysis enables one to calculate posterior probabilities. a. True b. False Question 16 A subjective probability reflects the feelings or opinions regarding the likelihood that an outcome will occur. a. True b. False Question 17 The shipping company manager wants to determine the best routes for the trucks to take to reach their destinations. The problem can be solved using the a. shortest route solution technique b. minimum spanning tree solution method c. maximal flow solution method d. minimal flow solution method Question 18 In the linear programming formulation of the shortest route problem, there is one constraint for each node indicating a. capacity on each path b. whatever comes into a node must also go out c. capacity on each arc d. a maximum capacity on a path Question 19 The minimal spanning tree problem determines the ___________ total branch lengths connecting all nodes in the network a. selected b. maximum c. minimum d. divided Question 20 The objective of the maximal flow solution approach is to _________ the total amount of flow from an origin to a destination a. minimize b. maximize c. discriminate d. divide Question 21 Once a project is underway, the project manager is responsible for the a. people b. cost c. time d. all of the above Question 22 If an activity cannot be delayed without affecting the entire project, it is a _______ activity a. completed b. critical c. conjugated d. none of the above Question 23 A ____________ represents the beginning and end of activities, referred to as events. a. path b. arc c. branch d. node Question 24 When an activity is completed at a node, it has been a. finished b. ended c. realized d. completed Question 25 Project management differs from management for more traditional activities mainly because of a. its limited time frame b. its unique set of activities c. a and b d. none of the above Question 26 The critical path is the ____________ time the network can be completed. a. maximum b. minimum c. longest d. shortest Question 27 Attributes of decision-making techniques include all of the following except: a. payoffs b. constraints c. alternatives d. states of nature Question 28 With the criterion ____________, the decision maker attempts to avoid regret. a. minimax regret b. equal likelihood c. Hurwicz d. maximin Question 29 To lose the opportunity to make a defined profit by making the best decision is referred to as: a. equal likelihood criterion b. state c. payoff d. regret Question 30 When is it most appropriate to use a decision tree? a. if the decision maker wishes to minimize opportunity loss b. if a decision situation requires a series of decisions c. if the decision maker must use perfect information d. if all states of nature are equally likely to occur Question 31 The coefficient of optimism may be selected to being a value between: a. 0 and -1 b. 0 and +1 c. -1 and +1 d. -6 and +6 Question 32 According to the _____, the defensive player will select the strategy that has the smallest of the maximum payoffs. a. maximax strategy b. minimin strategy c. maximin strategy d. minimax strategy Question 33 The expected opportunity loss criterion will always result in the same decision as the expected value criterion. a. True b. False Question 34 A cereal company that is planning on marketing a new low-cholesterol cereal should be concerned about the states of nature--that is the probability that people will stay interested in eating healthy a. True b. False Question 35 The length of a queue a. could be finite b. could be infinite c. can constantly change d. all of the above Question 36 Items may be taken from a queue a. on a first-come-first-serve basis b. on a last-come-first-serve basis c. according to the due date of the item d. all of the above Question 37 Which of the following items is not a part of the queuing system? a. arrival rate b. service facility c. waiting line d. activity flow Question 38 In a single-server queuing model, the average number customers in the queuing system is calculated by dividing the arrival rate by: a. service rate b. service time c. service rate minus arrival rate d. service rate plus arrival rate Question 39 The most important factors to consider in analyzing a queuing system are a. the service and arrival rate b. the nature of the calling population c. the queue discipline d. all of the above Question 40 Queuing analysis is a deterministic technique. a. True b. False Question 41 The operating characteristics of a queuing system provide information rather than an optimization of a queuing system. a. True b. False Question 42 The applicability of forecasting methods depends on a. the time frame of the forecast b. the existence of patterns in the forecast c. the number of variables to which the forecast is related d. all of the above Question 43 _________ is a gradual, long-term, up or down movement of demand. a. seasonal pattern b. cycle c. trend d. prediction Question 44 ___________ is good for stable demand with no pronounced behavioral patterns. a. longer-period moving average b. shorter-period moving average c. moving average d. weighted moving average Question 45 The Delphi method for acquiring informed judgments and opinions from knowledgeable individuals uses a series of questionnaires to develop a consensus forecast about what will occur in the future. a. True b. False Question 46 ___________ methods assume that what has occurred in the past will continue to occur in the future. a. Time series b. Regression c. Quantitative d. Qualitative Question 47 In exponential smoothing, the closer alpha is to ___________, the greater the reaction to the most recent demand. a. -1 b. 0 c. 1 d. 5 Question 48 Time series methods tend to be most useful for short-range forecasting. a. True b. False Question 49 An exponential smoothing forecast will react more strongly to immediate changes in the data than the moving average. a. True b. False Question 50 Longer-period moving averages react more quickly to recent demand changes than do shorter-period moving averages. a. True b. False Answers to 50 Multiple Choice Questions on Quantitative Methods
{"url":"https://brainmass.com/business/business-management/69857","timestamp":"2014-04-16T18:56:35Z","content_type":null,"content_length":"37415","record_id":"<urn:uuid:4fb517ba-54d6-40c8-b47c-7785a9b30f92>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical solution of the Fokker-Planck-Landau equation by fast spectral methods Nowadays, numerical simulations of plasmas are receiving a great deal of attention both in research and in industry thanks to the numerous applications directly connected to these phenomena. In addition, there exist many practical situations in which the so-called Coulomb collisions are fundamental for correctly describing the plasma dynamics as for instance in magnetic fusion devices (like tokamak devices ). The Fokker-Planck-Landau (FPL) equation is used to describe the binary collisions between charged particles in plasma physics. A new approach for the accurate numerical solution of the FPL equation has been presented recently in [1, 2]. The method is based on a fast spectral solver for the efficient solution of the collision operator. The use of a suitable explicit Runge-Kutta solver for the time integration of the collision phase permits to avoid excessive small time steps induced by the stiffness of the diffusive collision operator. Here we present some numerical simulation of the relaxation process in a three-dimensional Coulomb gas (see [3,4]). We refer also to [5] for the development of Monte Carlo methods for the Landau-Fokker-Planck equation. The relaxation process in 3D We consider the relaxation process for a three dimensional Coulomb gas (a=-3). The initial data is chosen as the sum of two symmetric Maxwellian functions f[0](v) = 5[exp(-(|v-v[1]|^2)/(2v[t]^2)) + exp(-(|v-v[2]|^2)/(2v[t]^2))]/(2(2pv[t]^2)^3/2), with v[1] = (1.25,1.25,0), v[2] = (-1.25,-1.25,0) and the thermal velocity is v[t]=0.4. The final time of the simulation is T = 80. The movie reports the time evolution of the level set of the distribution function f(t,v[x],v[y],v[z])=0.02 obtained with n=32 modes. Initially the level set of the initial data corresponds to two ``spheres'' in the velocity space. Then, the two distributions start to merge until the stationary state characterized by a Maxwellian distribution with zero mean velocity is reached. This is represented by a single centered sphere.
{"url":"http://www.lorenzopareschi.com/2005/01/numerical-solution-of-fokker-planck.html","timestamp":"2014-04-16T10:09:51Z","content_type":null,"content_length":"72426","record_id":"<urn:uuid:25a9dc36-c8e7-4972-8eea-fe22b11decc7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Arthur Miller Einstein and Schrödinger never fully accepted the highly abstract nature of Heisenberg's quantum mechanics, says Miller. They agreed with Galileo's assertion that "the book of nature is written in mathematics", but they also realized the power of using visual imagery to represent mathematical symbols. For most people I am sure it is of little interest that such an abstract language could have ever amounted to anything,since we might have been circumscribed to the natural living that is required that we could do without it. But really, can we? Paul Dirac When one is doing mathematical work, there are essentially two different ways of thinking about the subject: the algebraic way, and the geometric way. With the algebraic way, one is all the time writing down equations and following rules of deduction, and interpreting these equations to get more equations. With the geometric way, one is thinking in terms of pictures; pictures which one imagines in space in some way, and one just tries to get a feeling for the relationships between the quantities occurring in those pictures. Now, a good mathematician has to be a master of both ways of those ways of thinking, but even so, he will have a preference for one or the other; I don't think he can avoid it. In my own case, my own preference is especially for the geometrical So of course one appreciates those who start the conversation to help raise the questions in ones own mind. Might it be a shared response to something existing deeper in our society that it would warrant descriptions that we might be lacking in. Ways in which to describe something about nature. There is something definitely to be said about the geometer that can visualize the spaces within which they are working. It has to make sense. It has to describe something? Why then not just plain English(whatever language you choose) String theory's mathematical tools were designed to unlock the most profound secrets of the cosmos, but they could have a far less esoteric purpose: to tease out the properties of some of the most complex yet useful types of material here on Earth.What Good are Mathematics in the Real World? Do you know how many mathematical expressions are needed in order to describe the theory The language of physics is mathematics. In order to study physics seriously, one needs to learn mathematics that took generations of brilliant people centuries to work out. Algebra, for example, was cutting-edge mathematics when it was being developed in Baghdad in the 9th century. But today it's just the first step along the journey.Guide to math needed to study physics Conversations on Mind, Matter, and Mathematics How mathematics arose from cognitive realizations. Ex. Newton and Calculus. The branches of mathematics. Who are it's developers and what did they develop and why? It may be as important as the history in relation to how one may perceive the history and development of mathematics. These were important insights into the way one might of asked how did emergence exist if such things could have been imagined in the mind of the beholder. To attempt to describe nature in the way that one might do by invention? So are these mathematical things discovered or are they invented? Why the history is important? This is the basis of the question of what already exists in terms of information has always existed and we are only getting a preview of a much more complicated system. It does not have to be a question of what a MBT exemplifies in itself, but raises the questions about what already exists, exists as part of what always existed. Where do ideas and mathematics come from? This is a foundation stance that is taken right throughout science? If it exists in the universe, it exists in you? How does one connect? See Also WHAT IS YOUR FAVORITE DEEP, ELEGANT, OR BEAUTIFUL EXPLANATION? See Also : Some Educational links to look at then. I suppose you are two fathoms deep in mathematics, and if you are, then God help you, for so am I, only with this difference, I stick fast in the mud at the bottom and there I shall remain. -Charles Darwin How nice that one would think that, "like Aristotle" Darwin held to what "nature holds around us," that we say that Darwin is indeed grounded. But, that is a whole lot of water to contend with, while the ascent to land becomes the species that can contend with it's emotive stability, and moves the intellect to the open air. One's evolution is hard to understand in this context, and maybe hard for those to understand the math constructs in dialect that arises from such mud. For me this journey has a blazon image on my mind. I would not say I am a extremely religious type, yet to see the image of a man who steps outside the boat of the troubled apostles, I think this lesson all to well for me in my continued journey on this earth to become better at what is ancient in it's descriptions, while looking at the schematics of our arrangements. How far back we trace the idea behind such a problem and Kepler Conjecture is speaking about cannon balls. Tom Hales writes,"Nearly four hundred years ago, Kepler asserted that no packing of congruent spheres can have a density greater than the density of the face-centered cubic packing." Kissing number problem In three dimensions the answer is not so clear. It is easy to arrange 12 spheres so that each touches a central sphere, but there is a lot of space left over, and it is not obvious that there is no way to pack in a 13th sphere. (In fact, there is so much extra space that any two of the 12 outer spheres can exchange places through a continuous movement without any of the outer spheres losing contact with the center one.) This was the subject of a famous disagreement between mathematicians Isaac Newton and David Gregory. Newton thought that the limit was 12, and Gregory that a 13th could fit. The question was not resolved until 1874; Newton was correct.[1] In four dimensions, it was known for some time that the answer is either 24 or 25. It is easy to produce a packing of 24 spheres around a central sphere (one can place the spheres at the vertices of a suitably scaled 24-cell centered at the origin). As in the three-dimensional case, there is a lot of space left over—even more, in fact, than for n = 3—so the situation was even less clear. Finally, in 2003, Oleg Musin proved the kissing number for n = 4 to be 24, using a subtle trick.[2] The kissing number in n dimensions is unknown for n > 4, except for n = 8 (240), and n = 24 (196,560).[3][4] The results in these dimensions stem from the existence of highly symmetrical lattices: the E8 lattice and the Leech lattice. In fact, the only way to arrange spheres in these dimensions with the above kissing numbers is to center them at the minimal vectors in these lattices. There is no space whatsoever for any additional balls. So what is the glue that binds all these spheres in in the complexities that they are arrange in the dimensions and all that we shall have describe gravity along with the very nature of the particle that describe the reality and makeup that we have been dissecting with the collision process? As with good teachers, and "exceptional ideas" they are those who gather, as if an Einstein crosses the room, and for those well equipped, we like to know what this energy is. What is it that describes the nature of such arrangements, that we look to what energy and mass has to say about it's very makeup and relations. A crystal in it's molecular arrangement? Look's like grapefruit to me, and not oranges?:) Symmetry's physical dimension by Stephen Maxfield Each orange (sphere) in the first layer of such a stack is surrounded by six others to form a hexagonal, honeycomb lattice, while the second layer is built by placing the spheres above the “hollows” in the first layer. The third layer can be placed either directly above the first (producing a hexagonal close-packed lattice structure) or offset by one hollow (producing a face-centred cubic lattice). In both cases, 74% of the total volume of the stack is filled — and Hales showed that this density cannot be bettered..... In the optimal packing arrangement, each sphere is touched by 12 others positioned around it. Newton suspected that this “kissing number” of 12 is the maximum possible in 3D, yet it was not until 1874 that mathematicians proved him right. This is because such a proof must take into account all possible arrangements of spheres, not just regular ones, and for centuries people thought that the extra space or “slop” in the 3D arrangement might allow a 13th sphere to be squeezed in. For similar reasons, Hales’ proof of greengrocers’ everyday experience is so complex that even now the referees are only 99% sure that it is correct.... Each sphere in the E8 lattice is surrounded by 240 others in a tight, slop-free arrangement — solving both the optimal-packing and kissing-number problems in 8D. Moreover, the centres of the spheres mark the vertices of an 8D solid called the E8 or “Gosset” polytope, which is named after the British mathematician Thorold Gosset who discovered it in 1900. Coxeter–Dynkin diagram The following article is indeed abstract to me in it's visualizations, just as the kaleidescope is. The expression of anyone of those spheres(an idea is related) in how information is distributed and aligned. At some point in the generation of this new idea we have succeeded in in a desired result, and some would have "this element of nature" explained as some result in the LHC? A while ago I related Mendeleev's table of elements, as an association, and thought what better way to describe this new theory by implementing "new elements" never seen before, to an acceptance of the new 22 new particles to be described in a new process? There is an "inherent curve" that arises out of Riemann's primes, that might look like a "fingerprint" to some. Shall we relate "the sieves" to such spaces? At some point, "this information" becomes an example of a "higher form "realized by it's very constituents and acceptance, "as a result." Math Will Rock Your World by Neal Goldman By the time you're reading these words, this very article will exist as a line in Goldman's polytope. And that raises a fundamental question: If long articles full of twists and turns can be reduced to a mathematical essence, what's next? Our businesses -- and, yes, ourselves. "I’m a Platonist — a follower of Plato — who believes that one didn’t invent these sorts of things, that one discovers them. In a sense, all these mathematical facts are right there waiting to be discovered."Donald (H. S. M.) Coxeter I contrast the nature of Numerical Relativity to the computer and the way we would think human consciousness could have been linked in it's various ways. Who hasn't thought that the ingenuity of the thinking mind could not have been considered the Synapse and the Portal to the thinking Mind?:) Also think about what can be thought here as Gerardus t" Hooft asked as to think about in the limitations of what can be thought in relation to computerizations. There is something to be said here about what conscious is not limited too. It is by it's very nature "leading perspective" that we would like to have all these variables included in or assertions of what we can see while providing experimental data to the mind set of those same computerization techniques? Numerical Relativity Mind Map So we of course like to see the mind's ingenuity( computerized or otherwise) when it comes to how it shall interpret what is the road to understanding that gravity is seen in Relativities Source:Numerical Relativity Code and Machine Timeline It is a process by which the world of blackholes come into viewing in it's most "technical means providing the amount of speed and memory" that would allow us to interpret events in the way we have. The information has to be mapped to computational methodology in order for us to know what scientific value scan be enshrined in the descriptions of the Blackhole. Imagine that with current technologies we can never go any further then what we can currently for see given the circumstances of this technology? Source:Expo/Information Center/Directory-Spacetime Wrinkles Map So on the one hand there is an "realistic version" being mapped according to how we develop the means to visualize of what nature has bestowed upon us in the according to understanding Blackhole's and their Singularities. Numerical Relativity and Math Transferance Part of the advantage of looking at computer animations is knowing that the basis of this vision that is being created, is based on computerized methods and codes, devised, to help us see what Einstein's equations imply. Now that's part of the effort isn't it, when we see the structure of math, may have also embued a Dirac, to see in ways that ony a good imagination may have that is tied to the abstractions of the math, and allows us to enter into "their portal" of the mind. NASA scientists have reached a breakthrough in computer modeling that allows them to simulate what gravitational waves from merging black holes look like. The three-dimensional simulations, the largest astrophysical calculations ever performed on a NASA supercomputer, provide the foundation to explore the universe in an entirely new way. Scientists are watching two supermassive black holes spiral towards each other near the center of a galaxy cluster named Abell 400. Shown in this X-ray/radio composite image are the multi-million degree radio jets emanating from the black holes. Click on image to view large resolution. Credit: X-ray: NASA/CXC/AIfA/D.Hudson & T.Reiprich et al.; Radio: NRAO/VLA/NRL According to Einstein's math, when two massive black holes merge, all of space jiggles like a bowl of Jell-O as gravitational waves race out from the collision at light speed. Previous simulations had been plagued by computer crashes. The necessary equations, based on Einstein's theory of general relativity, were far too complex. But scientists at NASA's Goddard Space Flight Center in Greenbelt, Md., have found a method to translate Einstein's math in a way that computers can understand. Quantum Gravity Now their is a strange set of circumstance here that would leave me to believe, that the area of quantum gravity has lead Numerical Relativity to it's conclusion? Has the technology made itself feasible enough to explore new experimental data that would allow us to further interpret nature in the way it shows itself? What about at the source of the singularity? See: Dealing with a 5D World I would not be fully honest if I did not give you part of the nature of abstract knowledge being imparted to us, if I did not include the "areas of abstractness" to include people who help us draw the dimensional significance to experience in these mathematical ways. It is always good to listen to what they have to say so that we can further developed the understanding of what becomes a deeper recognition of the way nature unfolds of itself. There are two reasons that having mapped E8 is so important. The practical one is that E8 has major applications: mathematical analysis of the most recent versions of string theory and supergravity theories all keep revealing structure based on E8. E8 seems to be part of the structure of our universe. The other reason is just that the complete mapping of E8 is the largest mathematical structure ever mapped out in full detail by human beings. It takes 60 gigabytes to store the map of E8. If you were to write it out on paper in 6-point print (that's really small print), you'd need a piece of paper bigger than the island of Manhattan. This thing is huge. Emphasis and underlined, my addition. Computer Language and Math Joined from Artistic Impressionism? Most people think of "seeing" and "observing" directly with their senses. But for physicists, these words refer to much more indirect measurements involving a train of theoretical logic by which we can interpret what is "seen."- Lisa Randall THOMAS BANCHOFF has been a professor of mathematics at Brown University in Providence, Rhode Island, since 1967. He has written two books and fifty articles on geometric topics, frequently incorporating interactive computer graphics techniques in the study of phenomena in the fourth and higher dimensions The marriage between computer and math language(Banchoff) I would say would be important from the prospective of displaying imaging, seen in the development of abstract language as used in numerical relativity? Accummalated data gained from LIGO operations. Time variable measures? See:Computer Graphics In Mathematical Research ......A Condensative Result exists. Where "energy concentrates" and expresses outward. I mean if I were to put on my eyeglasses, and these glasses were given to a way of seeing this universe, why not look at the whole universe bathed in such spacetime fabric? This a opportunity to get "two birds" with one stone? I was thinking of Garrett's E8 Theory article and Stefan's here. On March 31, 2006 the high-resolution gravity field model EIGEN-GL04C has been released. This model is a combination of GRACE and LAGEOS mission plus 0.5 x 0.5 degrees gravimetry and altimetry surface data and is complete to degree and order 360 in terms of spherical harmonic coefficients. High-resolution combination gravity models are essential for all applications where a precise knowledge of the static gravity potential and its gradients is needed in the medium and short wavelength spectrum. Typical examples are precise orbit determination of geodetic and altimeter satellites or the study of the Earth's crust and mantle mass distribution. But, various geodetic and altimeter applications request also a pure satellite-only gravity model. As an example, the ocean dynamic topography and the derived geostrophic surface currents, both derived from altimeter measurements and an oceanic geoid, would be strongly correlated with the mean sea surface height model used to derive terrestrial gravity data for the combination Therefore, the satellite-only part of EIGEN-GL04C is provided here as EIGEN-GL04S1. The contributing GRACE and Lageos data are already described in the EIGEN-GL04C description. The satellite-only model has been derived from EIGEN-GL04C by reduction of the terrestrial normal equation system and is complete up to degree and order 150. How many really understand/see the production of gravitational waves in regards to Taylor and Hulse? To see Stefan's correlation in terms of "wave production" is a dynamical quality to what is still be experimentally looked for by LIGO? As scientists, do you know this? 6:41 AM, November 11, 2007 See here Thus the binary pulsar PSR1913+16 provides a powerful test of the predictions of the behavior of time perceived by a distant observer according to Einstein's Theory of Relativity. Since we know the theory of Relativity is about Gravity, then how is it the applications can be extended to the way we see "anew" in our world? A sphere, our earth, not so round anymore. Uncle has tried to correct me on "isostatic adjustment." Derek Sears, professor of cosmochemistry at the University of Arkansas, explains. See here Planets are round because their gravitational field acts as though it originates from the center of the body and pulls everything toward it. With its large body and internal heating from radioactive elements, a planet behaves like a fluid, and over long periods of time succumbs to the gravitational pull from its center of gravity. The only way to get all the mass as close to planet's center of gravity as possible is to form a sphere. The technical name for this process is "isostatic adjustment." With much smaller bodies, such as the 20-kilometer asteroids we have seen in recent spacecraft images, the gravitational pull is too weak to overcome the asteroid's mechanical strength. As a result, these bodies do not form spheres. Rather they maintain irregular, fragmentary shapes. K. Shumacker. Scientific America Do not have time to follow up at this moment. 7:02 AM, November 11, 2007 .....and here. In context of the post and differences, I may not have pointed to the substance of the post, yet I would have dealt with my problem in seeing. In general terms, gravitational waves are radiated by objects whose motion involves acceleration, provided that the motion is not perfectly spherically symmetric (like a spinning, expanding or contracting sphere) or cylindrically symmetric (like a spinning disk). A simple example is the spinning dumbbell. Set upon one end, so that one side of the dumbell is on the ground and the other end is pointing up, the dumbbell will not radiate when it spins around its vertical axis but will radiate if it tumbles end-over-end. The heavier the dumbbell, and the faster it tumbles, the greater is the gravitational radiation it will give off. If we imagine an extreme case in which the two weights of the dumbbell are massive stars like neutron stars or black holes, orbiting each other quickly, then significant amounts of gravitational radiation would be given off. Given the context of the "whole universe" what is actually pervading, if one did not include gravity? So singularities are pointing to the beginning(i), yet, we do not know if we should just say, the Big Bang, because, one would had to have calculated the energy used and where did it come from "previous" to manifest? So some will have this philosophical position about "nothing(?)," and "everything as already existing." Wherever there are no gravitational waves the space time is flat. One would have to define these two variances. One from understanding the relation to "radiation" and the other "perfectly spherically "I’m a Platonist — a follower of Plato — who believes that one didn’t invent these sorts of things, that one discovers them. In a sense, all these mathematical facts are right there waiting to be discovered."Donald (H. S. M.) Coxeter There are two reasons that having mapped E8 is so important. The practical one is that E8 has major applications: mathematical analysis of the most recent versions of string theory and supergravity theories all keep revealing structure based on E8. E8 seems to be part of the structure of our universe. The other reason is just that the complete mapping of E8 is the largest mathematical structure ever mapped out in full detail by human beings. It takes 60 gigabytes to store the map of E8. If you were to write it out on paper in 6-point print (that's really small print), you'd need a piece of paper bigger than the island of Manhattan. This thing is huge. See:Pasquale Del Pezzo and E8 Origination?-Monday, March 19, 2007 If I had thought there was a way to describe the "interior" of the blackhole, it would be by recognizing the dimensionality the blackhole had to offer. One had to know where to locate "this place in the natural world." If we had understood the energy values of the particle world colliding(that space and frame of reference, then what were we finding that such a place in dimensionality could exist in the natural world? Yoyu had to accept that there was dynamical moves that werre being defined as a possiility. Thus RHIC is in a certain sense a string theory testing machine, analyzing the formation and decay of dual black holes, and giving information about the black hole interior. The RHIC fireball as a dual black hole-Horatiu Nastase So what ways would allow us to do this, and this is part of the idea that came to me as I was thinking about the place where all possibilities could exist. Yet, what existed as "moduli form in the valleys" was being extended. So I am connecting other things here too. 3) It is claimed that cosmic rays can energy exceeding that of colliders, and they have not caused trouble, suggesting that colliders will not cause trouble either. However, the analogy is not precise. It assumes two things that may not be true. First, cosmic ray center of mass energy exceeding that of colliders has never been measured directly. Measurements that seem to show this are based on showers of secondary particles. Second, the product of a collision between a cosmic ray and an earth particle will always be moving at an appreciable fraction of the speed of light. If it has a small capture radius, it will always pass right through earth like a neutrino. The product of a collider collision can (sometimes) be moving at less than escape velocity from earth. If so, it will fall into earth where it will have forever to accrete other matter. Some calculations show rapid accretion. See: Risk Evaluation Forum Using this above as one basis of the argument, it was by these assumptions that I too was convinced things would be okay. There are a lot of things that go with this statement that currently is not expressed given current information in regards to Pierre Auger experiments. That when clearly seen in the light of current research into LHC, does not allow one to take in all that they should be. Go back to John Ellis and current research if you must, and thinking in terms of the cosmos. It's infancy, and one does not disregard the "origins and beginnings" of this universe. Are there reasons that are less then desired that would govern any legal defence team based on some "religious affiliation" and driven from this religious context? I hope not. We would not want some Woitian backlash, as done with string theory, from a intelligent design standpoint, as a recognized motived factor in that legal defense. It is far beyond me that I ask these associative questions, yet, these images come to mind when ever the establishment hosting the world's collective scientists, is confronted by the very issues that seem evasive in regards to safety? Energies Used in Particle Creation It would behove any person to take the time to travel to the links I am supplying, to help you absorb as much information as possible.With the full intention that what I am describing does have a distillation process that will become very simple in qualitative design. Finding the energy range with which we are dealing within our colliders, has awakened the realization of the complexity dimensional attributes would have considering E8. "I’m a Platonist — a follower of Plato — who believes that one didn’t invent these sorts of things, that one discovers them. In a sense, all these mathematical facts are right there waiting to be discovered."Donald (H. S. M.) Coxeter The complexity of the blackhole would have allowed the possibilities of describing the source of "all dimensional attributes" knowing that the collapse of the blackhole would bring temperatures to the point of the quark Gluon plasma. What would be happening to allow such complexity? This basis of thought on my part is, "the equivalence determined" and thought about in terms of Lagrangian considerations. This another topic. But does deal with the understanding of the potential microscopic blackholes that could be produced, determined by the energy levels Thus RHIC is in a certain sense a string theory testing machine, analyzing the formation and decay of dual black holes, and giving information about the black hole interior. See:Are Strangelets Natural? LHC Safety? I am writing this blog entry because of Walter's comments on the side. It is very hard for me knowing that there is a train of thought developed through my research. This question of cascading showers, were with the understanding of "energy events" that allowed us to see a "greater plethora of mapping" that would direct us to the very essence of symmetry breaking, based on experimental processes herein this blog described. "String theory and other possibilities can distort the relative numbers of 'down' and 'up' neutrinos," said Jonathan Feng, associate professor in the Department of Physics and Astronomy at UC Irvine. "For example, extra dimensions may cause neutrinos to create microscopic black holes, which instantly evaporate and create spectacular showers of particles in the Earth's atmosphere and in the Antarctic ice cap. This increases the number of 'down' neutrinos detected. At the same time, the creation of black holes causes 'up' neutrinos to be caught in the Earth's crust, reducing the number of 'up' neutrinos. The relative 'up' and 'down' rates provide evidence for distortions in neutrino properties that are predicted by new theories." See: How Particles Came to Be In doing my own research, I tried to follow the thinking of the literature presented on the topic of microscopic blackholes. Now there was to my understanding a theoretical position assumed, from what we understood when dealing with the topic, and the understanding of what Cern was to produce. Fig. 2. Image showing how an 8 TeV black hole might look in the ATLAS detector (with the caveat that there are still uncertainties in the theoretical calculations). Now to me the basis of settling the questions of safety, were answered by association of "what was natural" within the domains of these cascading particle showers in terms of these cosmic rays. If we were after the origins and beginnings to our universe, we were in essence, describing and mapping the beginning times of these particle showers. Also, the dimensional attributes of the interior of the blackhole. See here and here It looks as if moderation, or maybe technical problems, has set in for me at Cosmic Variance. So I have to go from the last statement made there by Lee that I was allowed to contribute. To continue with the points I am making. I was glad to see Jacques was continuing where David B seems to have decided the futility of dealing with these issues of the String theory backlash. Lee Smolin:When there was little selection we naturally got a wide diversity of types of scientists, which was good for science. My view is that we need that diversity, we need both the hill climbers and the valley crossers, the technical masters and the seers full of questions and ideas. Raphael Bousso and Joseph Polchinski in "The String Theory Landscape" September 2004 Scientific issue speak exactly to what Lee is saying and descriptively allow us to see the pattern underlying Lee's comment. Maybe George Musser will release it for the group to inspect here Take full note of the diagrams. See OFF THE HOOK. Line-by-line crocheting instructions that tell where to increase or decrease numbers of stitches create the global shape of the Lorenz manifold.Univ. of Bristol Clifford:Hooking Up Manifolds The artlcle goes a great deal into the story of how mathematician Hinke Osinga and her partner mathematician Bernd Krauskopf got into this, and why they find it useful. You’ll also hear from mathematicians Carolyn Yackel, Daina Taimina, and Sarah-Marie Belcastro. This has been going on for a while, and there are even published scientific papers with crocheting instructions for various manifolds! How did I miss out on this?! This is great! If you did not continue with understanding the "topography of the energy involved" in terms of what the string theory landscape was doing, then you would have never understood the "hills and valleys" in the context of string theory landscape being described? HYPERBOLIC FABRIC. Many of the lines that could be inscribed on this crocheted hyperbolic plane curve away from each other, defying Euclid's parallel postulate. IN retrospect decisions we make will always resound with what we should have done, but that misses the boat when coming to the "creative abilities?" What we see may "institute a productive research group?" You exchange one for another? Lee Smolin:Is string theory in fact perturbatively finite? Many experts think so. I worry that if there were a clear way to a proof it would have been found and published, so I find it difficult to have a strong expectation, either way, on this issue. The fact that a way had been describe in terms of developing the "Triple Torus" speaks to the continued development of the string theory landscape? How could you conclusively finish off this statement and then from it describe the state of the union, when this had already been explained technically? We say that E8 has rank 8 (the maximum number of mutually commutative degrees of freedom), and dimension 248 (as a manifold). This means that a maximal torus of the compact Lie group E8 has dimension 8. The vectors of the root system are in eight dimensions, and are specified later in this article. The Weyl group of E8, which acts as a symmetry group of the maximal torus by means of the conjugation operation from the whole group, is of order 696729600. You had to see the context of the triple torus in relation too where the string landscape places were placing these modular forms. If I had said E8 and the continued development of modular form, what would this represent? The complexity of the forms themself are limited and finite so how could one claim that such work on the landscape is futile in regards to infinities? You should know that that the names of the Bee people have their names protected, to protect the community at large. Some larger human species, like to use the benefits of this society, without recognizing the constructive efforts that goes into this elixir Production. Marc D. Hauser: We know that that kind of information is encoded in the signal because people in Denmark have created a robotic honey bee that you can plop in the middle of a colony, programmed to dance in a certain way, and the hive members will actually follow the information precisely to that location. Researchers have been able to understand the information processing system to this level, and consequently, can actually transmit it through the robot to other members of the hive. See Bumblebee Wing Rotations and Dancing Many times people have used Ant world to illustrate their ideas, but the time has come, that the relationship to perspective dynamics at that level should think about the vast literature of Bee The second of five Lagrangian equilbrium points, approximately 1.5 million kilometers beyond Earth, where the gravitational forces of Earth and Sun balance to keep a satellite at a nearly fixed position relative to Earth. See Second of Five Lagrangian Equilibrium Points One should not think these people have been disassociated from reality, and that it has only been our ignorance of the economics and flight patterns, that we failed to see the dynamical community that bee propagation goes through, in order to continue it's rich development. The elixir production is coming out of that community. There are two reasons that having mapped E8 is so important. The practical one is that E8 has major applications: mathematical analysis of the most recent versions of string theory and supergravity theories all keep revealing structure based on E8. E8 seems to be part of the structure of our universe. The other reason is just that the complete mapping of E8 is the largest mathematical structure ever mapped out in full detail by human beings. It takes 60 gigabytes to store the map of E8. If you were to write it out on paper in 6-point print (that's really small print), you'd need a piece of paper bigger than the island of Manhattan. This thing is huge. See Solidification of Geometrical Presence Flower pollination is a interesting thing having considered the world that the Bee people live in. After all, the dynamics and travel used, one could not help being enamoured with the naturalness with which one may try to reproduce in human mechanistic movement, that the Bee people live and breathe. Humanistic intelligences is a larger format, to what they do in that Bee community? Cell construction provides for the further propagation of the community, but no where do the Bee people give the particulates of the cell construction? Humanistic intelligences only see the community with regards to the Bee movements :)The Bee people have a greater depth to what is seen. Observing the community at large, the Bee people have much more to present then thinking just in the way they work. Who is Navier Stokes of the humanistic intelligences to think only he could reveal anomalistic perception in the nature of viscosity, not to think there is relativistic conditions that the Bee people bring to reductionism views in physics? Worker bees perform a host of tasks from cleaning the hive cells to looking after the larvae The workers have a variety of tasks to perform – some collect nectar from flowers, others pollen, some are engaged in constructing new combs, or looking after the developing larvae, some perform the duty of cleaning the cells or feeding the larvae on special secretion that they regurgitate from their mouth parts. In these insects the exact task of any individual depends largely on its age, although there is a certain flexibility, depending on the requirements of the hive. So I've taken a different tack here. If it is so hard for the community at large to comprehend that extra dimensional thinking then there has to be some way in which we as lay people can envision the acrobatics of a busy bee and their flight plans? What the community is all about. Who is doing what? How many dimensions are there? Consider ants crawling on a tabletop. In their daily experience, they can explore only 2 dimensions, those of the table surface. They may see a bee up flying, or occasionally landing on the table top, but that 3rd dimension is something they can only see or imagine, not experience. Perhaps we are in an analogous situation. Instead of a tabletop, we live in a 3-dimensional space called 3-brane (a name generalizing 2-brane, i.e., membrane). For some reason, we (i.e., atoms, molecules, photons etc.) are stuck in this 3-brane, even though there are 6 additional dimensions out there. Gravity, like the bee, can go everywhere. We call this the brane world, a rather natural phenomenon in superstring theory. At the moment, physicists are working hard to understand this scenario better and to find ways to experimentally test this idea. The Bee people had graduated from the world of the ant people, jsut by their evolutionary timeline. They were "much more visionary" then the ant people. Because they could leave their three dimensional world of the table top, and pop into ant world's frame of reference. Ant people were never the wiser. Just that, Bee people existed. Providing a rigorous theoretical framework that incorporates both recent developments such as Aubrey-Mather theory and established fundamentals like Kolmogorov-Arnold-Moser theory, this book represents an indispensable resource for graduate students and researchers in the disciplines concerned as well as practitioners in fields such as aerospace engineering. See Wolf-Rayet star Brane theory development needed a boost from the Bee people. Not only now do we understand the "dynamical thinking that goes with the Bee's flight patterns," we are now thinking, hey, "can these things apply" to the current solutions the humanistic intelligences persevere to unfold in their space travels? Not just "our waist lines" as some might think in regards to "lensing" and the circles we apply in "computerize efforts." The range of territory of the Bee's community is well considered? While I might infer the "attributes of Coxeter here," it is with the understanding such a dimensional perspective which has it's counterpart in the result of what manifests as matter creations. Yet we have taken our views down to the "powers of ten" to think of what could manifest even before we see the result in nature. When you go to the site by PBS of where, Nano: Art Meets Science, make sure you click on the lesson plan to the right. Visitors' shadows manipulate and reshape projected images of "Buckyballs." "Buckyball," or a buckminsterfullerene molecule, is a closed cage-structure molecule with a carbon network. "Buckyball" was named for R. Buckminster "Bucky" Fuller (1895-1983), a scientist, philosopher and inventor, best known for creating the geodesic dome. Photo Credit: © 2003 Museum Associates/Los Angeles County Museum Fundamentally the properties of materials can be changed by nanotechnology. We can arrange molecules in a way that they do not normally occur in nature. The material strength, electronic and optical properties of materials can all be altered using nanotechnology. See Related information on bucky balls here in this site. This should give some understanding of how I see the greater depth of what manifest in nature, as solids in our world, has some "other" possibilities in dimensional attribute, while it is given association to the mathematical prowess of E8. I do not know of many who will take in all that I have accumulated in regards to how one may look at their planet, can have the depth of perception that is held in to E8.? One may say what becomes of the world as it manifest into it's constituent parts, has this energy relation, that it would become all that is in the design of the world around us. While some scientists puzzle as to the nature of the process of E8, little did they realize that if you move your perception to the way E8 is mapped to 248 dimensions, the image while indeed quite pleasing, you see as a result. It can include so much information, how would you know that this object of mathematics, is a polytrope of a kind that is given to the picture of science in the geometrical structure of the bucky ball or fullerene. Diamond and graphite are two allotropes of carbon: pure forms of the same element that differ in structure. Allotropy (Gr. allos, other, and tropos, manner) is a behaviour exhibited by certain chemical elements: these elements can exist in two or more different forms, known as allotropes of that element. In each different allotrope, the element's atoms are bonded together in a different manner. For example, the element carbon has two common allotropes: diamond, where the carbon atoms are bonded together in a tetrahedral lattice arrangement, and graphite, where the carbon atoms are bonded together in sheets of a hexagonal lattice. Note that allotropy refers only to different forms of an element within the same phase or state of matter (i.e. different solid, liquid or gas forms) - the changes of state between solid, liquid and gas in themselves are not considered allotropy. For some elements, allotropes can persist in different phases - for example, the two allotropes of oxygen (dioxygen and ozone), can both exist in the solid, liquid and gaseous states. Conversely, some elements do not maintain distinct allotropes in different phases: for example phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state. The term "allotrope" was coined by the famous chemist Jöns Jakob Berzelius. "I’m a Platonist — a follower of Plato — who believes that one didn’t invent these sorts of things, that one discovers them. In a sense, all these mathematical facts are right there waiting to be discovered."Donald (H. S. M.) Coxeter There are two reasons that having mapped E8 is so important. The practical one is that E8 has major applications: mathematical analysis of the most recent versions of string theory and supergravity theories all keep revealing structure based on E8. E8 seems to be part of the structure of our universe. The other reason is just that the complete mapping of E8 is the largest mathematical structure ever mapped out in full detail by human beings. It takes 60 gigabytes to store the map of E8. If you were to write it out on paper in 6-point print (that's really small print), you'd need a piece of paper bigger than the island of Manhattan. This thing is huge. Clifford of Asymptotia drew our attention to this for examination and gives further information and links with which to follow. He goes on to write,"Let’s not get carried away though. Having more data does not mean that you worked harder to get it. Mapping the human genome project involves a much harder task, but the analogy is still a good one, if not taken too far." Of course since the particular comment of mine was deleted there, and of course I am okay with that. It did not mean I could not carry on here. It did not mean that I was not speaking directly to the way these values in dimensional perspective were not being considered. Projective Geometries? A theorem which is valid for a geometry in this sequence is automatically valid for the ones that follow. The theorems of projective geometry are automatically valid theorems of Euclidean geometry. We say that topological geometry is more abstract than projective geometry which is turn is more abstract than Euclidean geometry. There had to be a route to follow that would lead one to think in such abstract spaces. Of course, one does not want to be divorced from reality. So one should not think that because the geometry of GR is understood, that you think nothing can come from the microseconds after the universe came into expression. At this point in the development, although geometry provided a common framework for all the forces, there was still no way to complete the unification by combining quantum theory and general relativity. Since quantum theory deals with the very small and general relativity with the very large, many physicists feel that, for all practical purposes, there is no need to attempt such an ultimate unification. Others however disagree, arguing that physicists should never give up on this ultimate search, and for these the hunt for this final unification is the ‘holy grail’. Michael The Holy Grail sure comes up lots doesn't it:) Without invoking the pseudoscience that Peter Woit spoke of. I thought, if they could use Babar, and Alice then I could use the Holy Grail? See more info on Coxeter here. Like Peter I will have to address the "gut feelings" and the way Clifford expressed it. I do not want to practise pseudoscience as Peter is about the landscape.:) When ones sees the constituent properties of that Gossett polytope 4[21] in all it's colours, the complexity of that situation is quite revealing. Might we not think in the time of supergravity, gravity will become weak, in the matter constitutions that form. As in Neutrino mixing I am asking you to think of the particles as sound as well as think them in relation to the Colour of Gravity. If you were just to see grvaity in it's colourful design and what value that gravity in face of the photon moving within this gravitational field? We detect the resulting "wah-wah-wah" in properties of the neutrino that appear and disappear. For example, when neutrinos interact with matter they produce specific kinds of other particles. For example, when neutrinos interact with matter they produce specific kinds of other particles. Catch the neutrino at one moment, and it will interact to produce an electron. A moment later, it might interact to produce a different particle. "Neutrino mixing" describes the original mixture of waves that produces this oscillation effect. The "geometry of curvature" had to be implied in the outcome, from that quantum world? Yet at it's centre, what is realized? You had to be lead there in terms of particle research to know that you are arriving at the "crossover point." The superfluid does this for examination. 5. Regular polytope: If you keep pulling the hypercube into higher and higher dimensions you get a polytope. Coxeter is famous for his work on regular polytopes. When they involve coordinates made of complex numbers they are called complex polytopes. Pasquale Del Pezzo, Duke of Cajanello, (1859–1936), was "the most Neapolitan of Neapolitan Mathematicians". He was born in Berlin (where his father was a representative of the Neapolitan king) on 2 May 1859. He died in Naples on 20 June 1936. His first wife was the Swedish writer Anne Charlotte Leffler, sister of the great mathematician Gösta Mittag-Leffler (1846-1927). At the University of Naples, he received first a law degree in 1880 and then in 1882 a math degree. He became a pre-eminent professor at that university, teaching Projective Geometry, and remained at that University, as rector, faculty president, etc. He was mayor of Naples starting in 1919, and he became a senator in the Kingdom of Naples. His scientific achievements were few, but they reveal a keen ingenuity. He is remembered particularly for first describing what became known as a Del Pezzo surface. He might have become one of the strongest mathematicians of that time, but he was distracted by politics and other interests. So what chance do we have, if we did not think this geometry was attached to processes that would unfold into the bucky ball or the fullerene of science. To say that the outcome had a point of view that is not popular. I do not count myself as attached to any intelligent design agenda, so I hope people will think I do not care about that. I found the email debate between Smolin and Susskind to be quite interesting. Unfortunately, it mixes several issues. The Anthropic Principle (AP) gets mixed up with their other agendas. Smolin advocates his CNS, and less explicitly loop quantum gravity. Susskind is an advocate of eternal inflation and string theory. These biases are completely natural, but in the process the purported question of the value of the AP gets somewhat lost in the shuffle. I would have liked more discussion of the AP directly See here for more information So all the while you see the complexity of that circle and how long it took a computer to map it, it has gravity in it's design, whether we like to think about it or not? But of course we are talking about the symmetry and any thing less then this would have been assign a matter state, as if symmetrical breaking would have said, this is the direction you are going is what we have of earth? Isostatic Adjustment is Why Planets are Round? While one thinks of "rotational values" then indeed one would have to say not any planets is formed in the way the sun does. Yet, in the "time variable understanding" of the earth, we understand why it's shape is not exactly round. Do you think the earth and moon look round if your were considering Grace? On the moon what gives us perspective when a crater is formed to see it's geological structure? It's just not a concern of the mining industry, as to what is mined on other orbs, but what the time variable reveals of the orbs structure as well. Clementine color ratio composite image of Aristarchus Crater on the Moon. This 42 km diameter crater is located on the corner of the Aristarchus plateau, at 24 N, 47 W. Ejecta from the plateau is visible as the blue material at the upper left (northwest), while material excavated from the Oceanus Procellarum area is the reddish color to the lower right (southeast). The colors in this image can be used to ascertain compositional properties of the materials making up the deep strata of these two regions. (Clementine, USGS slide 11) See more here The link post titled nameabove in title above will have to be corrected, and can be done in other places, but for now it has to remain as it is here. Those CV people are Radicals for change, eh?:) See here, here, here? "If you are not with us you are against us". Imagine such sentiments even within your own country? Could such a country become divisive within itself, that there is the emergence of a "left" and "right." and that you have just now assume the dubious "left." :) I'm thinking of Sean's entry on perspective. The sound of "one hand clapping." Sort of the recognition, that if you change the "idealization to nonviolence," it acted as a force with which anger and aggressive attitudes of the "right" to control behavior of the left, had nothing against with which it could push? Reserve one's judgement on the speech? :( What am I asking? Would automatically send up the hackles Qui! NOn! Seeing the same cartoonist construct three different pieces on the same topic, from three standpoints, is really an edifying experience. Not only for my meager acuity in the visual arts, but to understand message and framing. I’ll share these three cartoons with you, as well as the rough draft. There would be no sound. There would be no clapping. Just the recognition that "non violent" action would call sane citizens of the state, to recognzie diversity in opinion, and not polarize around such distinctions. I know it's hard walking the (A)middleway. Sometimes this "heartway for expression" has to have applications moved to the head for correct thinking, least we are circumvented to the childish antics of emotive reactions in the probable future. The "child" is the future, and to continunally challenge our emotive constructs( these things well imprinted in our own realities) that we say we cannot change of what has already happened? We can certainly change, how we will react in the future by challening these perspectives. A Clear mind and consise thinking, to combat what is embedded, will have fortunate consequences, although, the struggle to implement such a strategy, must unfold in order to change how we see into that future. I am "constantly fighting"( my own battle with self) to bring sane recognition out of the historical background and "relative phrases" to situations that might become the struggle of nations, if one had thought to extend it's appliciability to such lengths? It's not that you fight your parents, and all that has been encoded in you, or that you change the reactionary way( this is part of our psychological makeup) in which you will emotively react(a "positive anger" for change is possible), but that there is a third choice, and one in which a better logic would materialize? Held to political distinctions that we had all might revert too. That the emotive forced garnered a better visulaization then, not to be lead, by illusions of the religious right or to default on the irreligiousness of the left(?). Would I have just been as guilty here in that last statement? Now we've arrived at the end of the century, where crime, moral chaos, and politics driven by the often hollow sophistications of sociology and psychology all alert us to the breakdown of too many previous assumptions, particularly the coming apart of what might be called our ethical agreements with one another. Late capitalism, in apparent triumph, seems to encourage self-interest over any lingering sense of a commonwealth. Who then are the better nation builders? That to continue to perfect this war that rages within ourselves would take this to the political stage and make the war happen on a much wider and divisive So who then who would the better nation builders be? Those that seek the third choice and the choice that recognizes change is very fast and becoming. That there is no time standing, always new. As the Discussion continues The point is that if any of you, "censored" like Peter likes too, then I would have thought the way they censored your comments Mark and Joanne, would have been felt an affrontation about getting to the heart of issues, "equally frustating" in other venues. I recognize your efforts as well of doing "something" amidst such censorhips. Getting to the heart of issues against "flagrant attitudes" politicized, requires that we think twice about what some who contribute, and what cannot work to the advanges of clear thinking? A double edged sword? :) Would one defend another of equal stature in regards to science who ID intelligent conversation as to express one's own agenda not deterrred as "negative way" to the right way? :) I would rather focus on "nation building" as you have. How has profound thinking been changed? I think the teachers are suppose to let us decide instead of them deciding for us. We are suppose to find our way as you lay correct infomration before us instead of packaging it, and saying ,this is wrong becuase I say so? Maybe these are the failing attributes as parents, we assign to our children in those hidden meme's?:)
{"url":"http://www.eskesthai.com/search/label/E8","timestamp":"2014-04-20T15:56:53Z","content_type":null,"content_length":"321093","record_id":"<urn:uuid:2658d493-1017-4997-ae5c-309799932cd3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Ductwork surface area calculation problem April 25th 2012, 06:32 AM #1 Apr 2012 Ductwork surface area calculation problem I have a major in economics and I am creating an excel spreadsheet for quotations of sheet metal ductwork as my final thesis. I need to calculate weights for various types of ductwork produced in the company I chose for my thesis. In order to calculate weight, I need to know the surface area of ductwork parts. For many parts (pipes, elbows,etc.) I have figured out the formula for surface area myself, resp. found it on the internet. But there are some pieces I am not able to figure out how to get the surface area based on the input parameters. You will find the pictures of the parts in the pdf attachment for which I need to get the formula for calculation of the surface area (in metric units). I would be very thankful if anyone of you could help me with finding the correct formulas. Thank you very much for your time, help and effort. Re: Ductwork surface area calculation problem Those are too many questions to answer here but all the formulas you need can be found at Surface Area and Volume Formulas - Three Dimensional Shapes Re: Ductwork surface area calculation problem Thank you for the reply, I know about those basic formulas, but my shapes are asymetrical and / or need to calculate area under the curved line (integration), so these basic formulas will not help me. Re: Ductwork surface area calculation problem I can only answer based on what you give. The only "curved lines" in the figures you show are circles. All the figures are made from basic cylinders, spheres, and prisms that you can apply basic formulas to. Re: Ductwork surface area calculation problem I am working on the same thing for my brother's sheet metal company. How did you calculate the surface area of the elbows? HELP PLEASE!! Re: Ductwork surface area calculation problem Go elbow your way through these: April 25th 2012, 07:31 AM #2 MHF Contributor Apr 2005 April 25th 2012, 08:02 AM #3 Apr 2012 April 28th 2012, 01:50 PM #4 MHF Contributor Apr 2005 October 3rd 2012, 04:41 AM #5 Oct 2012 October 3rd 2012, 08:22 PM #6 MHF Contributor Dec 2007 Ottawa, Canada
{"url":"http://mathhelpforum.com/geometry/197890-ductwork-surface-area-calculation-problem.html","timestamp":"2014-04-18T09:23:47Z","content_type":null,"content_length":"45785","record_id":"<urn:uuid:9aee7f4f-2144-4f39-9f0c-99c5d90eb1b5>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
Specht module Specht module Specht modules are some very nice $ℤ\left[{S}_{n}\right]$modules which are irreducible? after base change to $ℚ$. Over a field of characteristic $p$, where $p\mid n!$, the base changes are not irreducible, but every irreducible module appears as the cosocle? of a Specht module. Revised on October 28, 2009 18:33:15 by Toby Bartels
{"url":"http://ncatlab.org/nlab/show/Specht+module","timestamp":"2014-04-18T13:15:21Z","content_type":null,"content_length":"11874","record_id":"<urn:uuid:b3282eac-95f1-48a9-9eb1-74f87fcc4498>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding an eigenvector February 1st 2013, 01:09 PM Finding an eigenvector I'm trying to find an eigenvector of a matrix. I have λ = 1, so my matrix (A - λI) is $[-0.5253, 0.8593, -0.1906; -0.8612, -0.5018, 0.1010; 0.1817, 0.1161, -0.0236]$ And from rows 2 and 3 I get these simultaneous equations I eliminate to find $t_{2}=0.225t_{3}$ and $t_{1}=-0.0137t_{3}$ Thus the eigenvector is t= $k (-0.0137, 0.225, 1)$ But the actual answer is given as (-0.0088, 0.216, 1). Thanks for any pointers. February 1st 2013, 02:45 PM Re: Finding an eigenvector one question: are you using decimal approximations of rational numbers? February 1st 2013, 11:41 PM Re: Finding an eigenvector They're not approximations, just measurements. February 2nd 2013, 10:04 AM Re: Finding an eigenvector Is my technique right? February 2nd 2013, 11:55 AM Re: Finding an eigenvector Looks like you are having rounding errors. When I calculate the eigenvector for the matrix you give, I'm getting different results than either of your answers. See for instance here: {{1-0.5253, 0.8593, -0.1906},{ -0.8612, 1-0.5018, 0.1010},{ 0.1817, 0.1161, 1-0.0236}} - Wolfram|Alpha Results The rounding errors you have are propagating more than you may like. To answer your question: yes, your technique is right. Note that there are more advanced methods to keep the rounding errors to a minimum. February 2nd 2013, 12:33 PM Re: Finding an eigenvector Thanks very much for confirming. I've used more accurate values and I get a more sensible answer, still not spot-on though. Important thing is that my method is right.
{"url":"http://mathhelpforum.com/algebra/212412-finding-eigenvector-print.html","timestamp":"2014-04-17T22:28:48Z","content_type":null,"content_length":"7403","record_id":"<urn:uuid:80284f74-57a5-42c7-8428-84e7fec01707>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
D. J. Bernstein Fast arithmetic I have speed reports for djbfft 0.76 on In each case the compiler options are the default options in the djbfft installation: -O1 -fomit-frame-pointer with -malign-double added automatically on the x86 processors. I also have some speed reports for djbfft 0.75 under alternate compilers: • a 240MHz HP PA-8200 under HP-UX B.11.0 cc +O2 -Dinline and • a 240MHz HP PA-8200 under HP-UX B.11.0 cc +O3 +Oall -Dinline. Contents of the speed reports Codes used in the reports: • r: Real transform. • c: Complex transform. • 4: Single-precision transform. • 8: Double-precision transform. • +: Forward DFT. • -: Inverse DFT. • m: Multiplication. Convolution against a precomputed filter takes one forward DFT, one multiplication, and one inverse DFT. • s: Scaling. Precomputation of a filter takes one forward DFT and one scaling. • nothing: No computation. This shows the overhead of the tick-counting mechanism. • RDTSC: Tick counts are obtained from the Pentium cycle counter. • gethrtime: Tick counts are obtained from the Solaris gethrtime() nanosecond counter. Each line shows the individual tick counts for eight iterations of the routine being benchmarked. The first iteration is normally slower than the rest; instructions may not be in cache (or even memory), inputs may not be in cache, etc. The first few iterations may wobble a bit because of branch prediction hysteresis. All the iterations will usually have different speeds for inputs larger than cache. Individual iterations may occasionally be much slower if the operating system happens to perform a context switch. For example, the Pentium-133 lines Using RDTSC, pentium/*.c. nothing 27 17 17 18 17 17 17 18 256 r8- 11288 8127 8102 8102 8102 8102 8102 8102 show that a 256-point in-cache double-precision real inverse DFT, with a tiny amount of timing overhead, normally takes 8102 Pentium cycles. Notes on previous versions of djbfft 19970916: First version of djbfft. I wrote this code to prove to the FFTW authors that a simple split-radix FFT would run faster than their complicated code on a Pentium. My unscheduled code, 86 lines long, did a size-256 single-precision transform in about 35000 Pentium cycles, faster than FFTW. A few days later, after some casual instruction scheduling, I had the time down to about 24000 Pentium cycles. 19971116: djbfft 0.50. About 23000 Pentium cycles for a size-256 double-precision transform. I was still learning about the Pentium FPU at this point. 19971218: djbfft 0.55. About 20000 Pentium cycles. New in this version: inverse transforms. 19971226: djbfft 0.60. About 20000 Pentium cycles. New in this version: simultaneous support for single precision and double precision. 19980923: djbfft 0.70. About 18000 Pentium cycles, or 15000 UltraSPARC-I cycles. New in this version: multiplication routines to support complex convolution and real convolution. 19990914: djbfft 0.75. About 17000 Pentium cycles, or 6300 UltraSPARC-I cycles, or 13000 Pentium-II cycles. New in this version: real FFTs and UltraSPARC tuning. 19990930: djbfft 0.76. About 17000 Pentium cycles, or 6300 UltraSPARC-I cycles, or 12000 Pentium-II cycles. New in this version: some Pentium-II tuning.
{"url":"http://cr.yp.to/djbfft/speed.html","timestamp":"2014-04-17T01:36:32Z","content_type":null,"content_length":"4734","record_id":"<urn:uuid:40202c21-50ef-462e-8331-1f01b4905dae>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Morphisms of Frobenius manifolds: Definitions and examples? up vote 4 down vote favorite Frobenius manifolds arise in the study of quantum cohomology and mirror symmetry -- roughly they are manifolds (or varieties or whatever) such that the tangent spaces are Frobenius algebras (there are more conditions to be satisfied -- flat metric, flat identity, etc). A basic reference is Manin's book "Frobenius manifolds, quantum cohomology, and moduli spaces". I have studied Frobenius manifolds for a while now, and I have so far never seen any definitions nor any examples of morphisms (other than, perhaps, isomorphisms). Are there any reasonable definitions of morphism of Frobenius manifolds? Are there any interesting examples? add comment 1 Answer active oldest votes This is going to be more a non-answer than an answer: I don't think there is a useful notion of morphism between Frobenius manifolds, besides the notions of (local) isomorphisms, and of a sub-Frobenius manifold (which is not the same thing as a submanifold of a Frobenius manifolds). To argue completely in intrinsic terms: if you require that a morphism preserves the metric on the tangent spaces, then it is necessarily a local embedding. Locally, the image is then a sub-Frobenius manifold, i.e. a flat submanifold $M \subset N$ such that the multiplication on the tangent bundle $T_N \otimes T_N \to T_N$ maps $T_M \otimes T_M$ to $T_M$. Examples of sub-Frobenius manifolds arise naturally, e.g. in quantum cohomology, $\oplus_p H^{p, p}(X, \mathbb{C})$ is a sub-Frobenius manifold of $H^*(X, \mathbb{C})$. up vote 3 down To argue with a little more context: a more flexible notion of morphisms of Frobenius manifolds wouldn't be useful unless it arises naturally. However, even for the most simple situations you vote could think of, say if you compare the quantum cohomology of a product $X \times Y$ with the quantum cohomology of $X$ and $Y$, there does not seem to be a useful morphism between the Frobenius manifolds involved. Instead, the (Frob. mfd of) QC of $X \times Y$ is a tensor product of the QCs of $X$ and $Y$. Other nice statements are known for Grassmannians $G(k, n)$, whose QC is a kind of mixture of anti-symmetric and symmetric $k$-fold tensor product of the QC of $P^{n-1}$ (see papers by subsets of Bertram/Ciocan-Fontanine/Kim/Sabbah on abelian/non-abelian quotients). Again, a very nice statement, but no morphisms of Frob. manifolds anywhere. [If I may sneak in a little advertising: If $\tilde X$ is the blow-up of $X$ at a point, then the QC of then the QC of $\tilde X$ has a partial compactification, and the boundary is a Frob. submanifold isomorphic to QC of $X$. Well, unfortunately that's a little bit of a lie, as the multiplication has a pole on the added boundary divisor, and so arXiv:math.AG/0403260 doesn't read quite as nicely as the statement above. But again, while there is almost an inclusion of QC of $X$ into QC of $\tilde X$, there is no morphism in the other direction.] add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/38204/morphisms-of-frobenius-manifolds-definitions-and-examples?sort=oldest","timestamp":"2014-04-21T15:15:29Z","content_type":null,"content_length":"51052","record_id":"<urn:uuid:87a3d7a1-b372-4ca1-9ca5-0bdc9a3c2c40>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
SOC S554 3707 Statistical Techniques in Sociology 1 Sociology | Statistical Techniques in Sociology 1 S554 | 3707 | Patricia McManus This is the first semester of the two-course sequence in social statistics required of graduate students in Sociology. The course takes a systematic approach to the exposition of the general linear model for continuous dependent variables. In addition to laying the theoretical foundations for future social science research, this course introduces students to the use of computerized statistical analysis using the software program Stata. The course is organized into four sections. The first section of the course reviews the fundamental statistical concepts that are the building blocks for regression analysis. The purpose of this section is both to refresh your memory and to provide a deeper, more formal presentation of familiar concepts. The second section focuses on the assumptions and mechanics of the classical linear regression model and introduces the model in matrix form. At the end of the second section you will have a good mechanical knowledge of regression analysis. The third section deals with violations of the assumptions of the classical linear regression model. At the end of the third section you will have a deeper theoretical and applied understanding of the flexibility and limitations of the general linear regression model for social science data. The final section introduces students to the use of structural equations models in social science research. The purpose of this brief section is to give you some exposure to these complex models for continuous dependent variables rather than to ask you to develop sophistication with these techniques. In addition to the regularly scheduled class periods, students are required to attend lab sessions which focus on computing methods and data analysis techniques. Students who enroll in this course have taken at least one statistics course at the level of S250, the undergraduate course required of Sociology majors. Students are not expected to have a background in calculus, but facility with algebra and knowledge of the rudiments of statistical distribution theory and hypothesis testing is a prerequisite.
{"url":"http://www.indiana.edu/~deanfac/blspr01/soc/soc_s554_3707.html","timestamp":"2014-04-19T09:32:17Z","content_type":null,"content_length":"2676","record_id":"<urn:uuid:b11a29f2-0df1-4d6f-be76-cadb8aa667b6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Virtual Network Embedding: A Hybrid Vertex Mapping Solution for Dynamic Resource Allocation Journal of Electrical and Computer Engineering Volume 2012 (2012), Article ID 358647, 17 pages Research Article Virtual Network Embedding: A Hybrid Vertex Mapping Solution for Dynamic Resource Allocation School of ICT, KTH Royal Institute of Technology, 16440 Kista, Sweden Received 2 March 2012; Revised 14 May 2012; Accepted 16 May 2012 Academic Editor: Shuo Guo Copyright © 2012 Adil Razzaq et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Virtual network embedding (VNE) is a key area in network virtualization, and the overall purpose of VNE is to map virtual networks onto an underlying physical network referred to as a substrate. Typically, the virtual networks have certain demands, such as resource requirements, that need to be satisfied by the mapping process. A virtual network (VN) can be described in terms of vertices (nodes) and edges (links) with certain resource requirements, and, to embed a VN, substrate resources are assigned to these vertices and edges. Substrate networks have finite resources and utilizing them efficiently is an important objective for a VNE method. This paper analyzes two existing vertex mapping approaches—one which only considers if enough node resources are available for the current VN mapping and one which considers to what degree a node already is utilized by existing VN embeddings before doing the vertex mapping. The paper also proposes a new vertex mapping approach which minimizes complete exhaustion of substrate nodes while still providing good overall resource utilization. Experimental results are presented to show under what circumstances the proposed vertex mapping approach can provide superior VN embedding properties compared to the other approaches. 1. Introduction Internet is being utilized to provide a wide range of services. Over a period of time (which is not too long), it has become vital component/core architecture to provide services for global commerce, media, and defense [1]. In spite of the success attributed with the current Internet, it has some flaws which need to be addressed. The “everything over IP” [2], as well as “best-effort” packet delivery does not suit all the services being provided on current Internet, whereas security, routing stability, and control and QoS (quality of service) guarantees are also some of the major concerns [1]. However, there are many limitations/obstacles in overcoming the above mentioned flaws. Some of these include appropriate changes in routers and host software, as well as joint agreement of all the ISPs on any architectural change [3]. Capital investment, competing interests of stakeholders as well as end-to-end design of IP, calls for a worldwide agreement to introduce any changes [1]. Since, it is very rare that a single ISP controls complete end-to-end path, new services have only been employed/tested within small geographic locations [4]. The challenges/requirements to overcome the Internet impasse/ossification were outlined in [3–5]. Requirements mentioned in [3] include, ease of experimentation with new architectures on live traffic, provisioning of a plausible deployment path for an architecture, and focusing of an architectural solution on a broad range of problems. The challenges described in [4] are, discovering the resources of a physical infrastructure, assigning virtual networks to underlying physical networks, and accounting of resources. Isolation, performance, scalability, flexibility, evolvability, management, and applications were the challenges identified for new generation network architectures (future network) in [5]. Network virtualization is at the heart of proposals for addressing the Internet ossification [1, 3, 4]. It can be utilized in experimental research facilities [6–8] as well as in provision of customized end-to-end services over a shared infrastructure [1, 4]. A primary feature of the future Internet would be to assign substrate network resources to the requested virtual networks. Therefore, virtual network embedding (VNE) is the key area in network virtualization. In order to embed/assign/map a virtual network onto the substrate/physical network, each virtual node is mapped to a physical node and each virtual link is embedded on a substrate path. A number of virtual networks (VNs) can be deployed on top of the physical network (or substrate), depending on the capability of the substrate and the demands of VNs. Virtual network embedding (VNE) problem is NP-hard [9, 10] where several constraints need to be satisfied. In order to map a VN onto the substrate, requirements of both its vertices as well as edges should be fulfilled. In addition to this, VNs can arrive at different times, in any order and can be based on any standard network topology (e.g., star, bus, ring, or mesh). The substrate network also has a limited amount of resources. Thus, we need to embed or map a VN with resource constraints onto the substrate network (SN) which has finite resources. In this paper, we evaluate three different vertex mapping approaches for VNE. Our first approach deals with mapping virtual vertices onto any available substrate nodes which can satisfy their demand. This method does not take into account the possibility of a node becoming bottleneck at the time of mapping a VN’s vertex, is named as baseline approach (BLA), and was presented in [11]. Second approach is focused on mapping virtual vertices to the substrate nodes with maximum resources, is called as greedy node mapping (GNM), and was presented in [9]. The advantage of using GNM is that it can minimize the use of substrate resources from bottleneck nodes. Drawback of GNM can be that vertices may get mapped in such a way that more bandwidth resources may be needed to map a VN as compared to BLA. Third approach is being proposed in this paper and is named as HBNRM (hybrid BLA bottleneck node reduced mapping). Main focus of this new approach is to utilize benefits of both BLA and GNM while minimizing their disadvantages. Main contributions of this paper are to evaluate different vertex mapping approaches and investigate their impact on VN embedding, how substrate’s nodes become bottleneck and get exhausted. Resource utilization as a result of mapping vertices at different substrate locations by three vertex mapping approaches is also analyzed. In order to thoroughly investigate the impact of vertex mapping by any approach, evaluations are done by mapping sparsely and densely connected VNs on sparsely as well as densely connected substrate networks. The proposed solution starts with the hypothesis that it should be possible to avoid complete exhaustion of node resources at a lesser cost, which can improve VN embedding possibilities. Rest of the paper is organized as follows. Section 2 defines the problem while Section 3 presents work done in the area of VNE. Section 4 describes our solution whereas, Section 5 presents simulation results. Section 6 concludes the paper. 2. Network Model and Problem Description The proposed solution represents virtual as well as substrate networks as undirected graphs. The substrate network is represented by , whereas the network to be mapped; that is, the VN is shown by . Notations for describing the VN mapping problem are summarized in Table 1. Throughout this document when a reference is made to a link or node, it means that it belongs to the substrate, while VN’s link and node will be termed as edge and vertex, respectively. We consider central processing unit (CPU) as a resource for nodes and vertices, and bandwidth to be the resource for edges and links. Figure 1 shows a substrate network whereas Figure 2 represents a VN request. Notation for describing node and link capacities is similar to the one proposed in [9]. VN will only be mapped on the substrate if requirements of each of its vertex as well as edge are satisfied. After mapping vertices onto the nodes which satisfy vertex demand, paths need to be calculated for each pair of nodes in the VN. Then link resources in the path are compared with the edge demand. At this point, if the path satisfies edge request, then VN is completely mapped. After satisfying requests of vertices and edges of a VN, a residual graph is obtained which contains remaining capacities of nodes and links of the substrate [12]. In the beginning of VN embedding process, we initialize residual capacities of nodes and links with the following actual capacities: Therefore, when a node or link is mapping a vertex or an edge for the first time then, its residual capacity is equal to its original capacity () and the vertex or edge demand ( or ) is matched with it. After initial mapping of a vertex or edge is made, the new residual capacity () of a node () is obtained by subtracting vertex demand () from the node resource, whereas remaining capacity () of a link is found by deducting edge request () from the link resource: Resources need to be returned to the substrate if, after mapping initial vertices or edges of a VN, there comes a point when requirements of a certain edge or vertex cannot be satisfied. This means that in such a scenario the initial or base graph is regenerated: We define the cost of mapping a VN, as sum of overall substrate resources assigned to its vertices and edges, in the same way, as previously presented in [13]. Our cost function is similar to the one given in [13]: A vertex will only be mapped on a single node whereas an edge can be mapped on a substrate path containing one or more than one links. The term in (4) indicates bandwidth allocated to an edge () from a substrate link (). 3. Literature Review This section is divided into two parts. It starts with a description of constraints associated with VNE, while the second part describes and categorizes work done in this area. 3.1. Constraints Associated with VNE The virtual network embedding problem is NP-hard [9, 10] where several constraints need to be satisfied. In addition to vertex and edge constraints, the VNs can arrive at different times and in any order whereas substrate resources are also finite. Therefore, before going through the details of various solutions to the VNE problem, it is necessary to first have a look at the constraints associated with it. 3.1.1. Node Constraints Two types of node constraints may be associated with a VN request. (i) Capacity A VN request may be constrained by a certain amount of resources on its nodes. Nodes of a VN may require a fixed number of processing or memory resources, for example, in order to run an experiment, 500MHZ of CPU may be required for each virtual node of the VN [9]. (ii) Location In addition to the capacity, placement of VN’s nodes may be required in certain locations. This constraint may be imposed if, VN’s nodes are part of a service which requires this feature, for example, CDN (Content Distribution Network), gaming service. 3.1.2. Link Constraints Two types of link constraints may be associated with a VN request. (i) Bandwidth In order to run an experiment or provide a service, a VN may require certain amount of bandwidth on each of its links [9]. (ii) Link Propagation Delay In addition to the demand of bandwidth on its links a VN may also be constrained by link propagation delay, for example, a VN carrying delay sensitive traffic to provide a service such as QoS (Quality of Service) [14]. 3.1.3. Admission Control The substrate network has finite resources on its nodes and links. Admission control process needs to be implemented for two reasons.(i)It ensures that demands of newly arrived VNs can be fulfilled by the substrate.(ii)Resource allocation made to already mapped VNs is not violated. Therefore, VN requests may be rejected or postponed if the substrate does not have sufficient resources to satisfy demands of a VN at the time of arrival [9]. 3.2. Virtual Network Embedding Approaches After going through challenges associated with the VNE problem, we can now have a look at how they have been taken care of by various solutions. Details of such solutions will be given first; later on Table 2 also depicts this process. Description of each of these types of solutions is presented below. 3.2.1. Constraints We consider node capacity, link bandwidth, and admission control (defined above) as basic constraints to the VNE problem. Some solutions to the VNE problem handle all basic constraints while others only provide solution to a subset of these constraints. The solutions presented in [18, 20] do not perform admission control. Node capacity and admission control constraint are not considered in [16, 17]. Assuming that vertex mapping is known in advance, the authors have only provided solution for edge mapping in [19], that is, they have not taken care of node capacity constraint. The approaches presented in [9, 10, 13, 14, 21] take care of all basic constraints. 3.2.2. Method In order to embed a VN onto the substrate, we need to find appropriate mappings for both its vertices as well as edges. Therefore, the VNE problem can be decomposed into vertex and edge mapping, and for this, various approaches can be adopted. (A) Vertex Mapping (i) Iterative Method. The iterative method of mapping VNs onto substrate networks was presented in [16], where nodes are categorized as backbone and access nodes. In this method, first backbone nodes are mapped onto the substrate, then access nodes are connected to backbone nodes and shortest paths are computed between these nodes, after this, link capacities are calculated and in the end it is ensured that the backbone nodes have been mapped optimally. The vertex mapping approach in [14] can also be considered to be iterative, as it selects one node (of highest degree) in each step. The process is repeated for remaining nodes moving on from nodes with highest to lowest degrees until all of them get mapped on the substrate. Vertex mapping approach in [20] divides the entire VN topology into a set of elementary clusters. The decomposition of VNs is based on star topology, where nodes are characterized as hub and spoke. Mapping of a VN is done sequentially by assigning the decomposed star topology based VNs to the substrate, one at a time. (ii) Simulated Annealing. Simulated annealing approach has been used to find the optimal topology for a given communication pattern in [17], where the goal is to find optimal reconfiguration (iii) Greedy Node Mapping. Greedy node mapping approach maps vertices on nodes with maximum resources [9, 10, 18]. The advantage of using this method is to minimize the use of substrate resources at bottleneck nodes/links, which helps in satisfying the requirements of future VN requests which demand fewer resources. (iv) Baseline Approach (BLA). The baseline approach (BLA) of mapping vertices on any available substrate nodes which can satisfy their demand (by only evaluating if, ) was presented in [11]. VNs embedded using this approach can incur less cost as compared to GNM. However, BLA does not take into account the possibility of a node becoming bottleneck at the time of mapping a VN’s vertex. (v) Mixed Integer Programming. In [13], the authors have formulated a solution to the VNE problem by using mixed integer programming (MIP) formulation. Vertex mapping in this solution is done by using two techniques. In first algorithm vertex mapping is done deterministically and is called D-Vine (deterministic rounding based virtual network embedding algorithm) while second algorithm does it randomly and is presented as R-Vine (randomized rounding based virtual network embedding algorithm). (B) Edge Mapping The edge mapping approaches can be devised based on flows. The flows can be categorized as either unsplittable or splittable. (i) Shortest Path Mapping (SPM). The shortest path mapping is a cost efficient approach of mapping edges on substrate paths. It has been used as a primary approach for edge mapping in a number of solutions. The solutions proposed in [14–18, 20, 21] have used SPM for edge mapping while the one given in [9] also uses it for unsplittable flows. (ii) Multicommodity Flow. In case of splittable flows, the multicommodity flow based approach has been used for edge mapping [9, 10, 13, 19]. 3.2.3. VN Requests Virtual network requests can be either specified in advance (offline problem) or arrive as part of a dynamic process (online problem). The solutions given in [16, 18, 20] solve offline version of VNE problem, while the ones proposed in [9, 13, 14] solve it as an online problem. 3.2.4. Type of Mapping (TOM) A VN embedding algorithm may be carried out either in a distributed or centralized manner. The solutions proposed in [9, 10, 16–18, 21] map VN requests in a centralized way while the ones proposed in [15, 20] assign VN requests to substrate networks using a distributed process. 3.2.5. Adaptability After VN requests get mapped on the substrate, a VNE solution may need to provide the feature of adaptability, that is, respond to variations in either substrates or VNs. This may be required in either of the following scenarios. (i) A user may add new requirements for an embedded VN request. A set of new candidate resources may need to be identified in response to the additional requirements. The above mentioned change in user’s requirements was taken care of and a solution in this regard was proposed in [15]. A solution to the problem of dynamically reconfiguring topology of an overlay network in response to changes in communication requirements was also presented in [17]. (ii) A physical node/link may be hosting many virtual nodes/links. In case, a problem occurs with a single physical node/link then several virtual nodes/links will be affected. Therefore, the physical/virtual node and link failures should always be kept into consideration and virtual nodes and links should be re-mapped if a failure occurs. An approach to take care of vertex/node as well as edge/link failures of VNs and substrates, and remap VN’s vertices and edges on alternate nodes and links has been presented in [15]. Provision of path resiliency by constructing alternate one-hop overlay routes via intermediary nodes was part of the solution proposed in [14]. (iii) The concept of “path migration” by either changing the splitting ratios of existing paths or selecting new underlying paths can enable a substrate to accommodate a newly arrived VN. The idea of path migration was presented in [9]. (iv) After being mapped, a VN may be reconfigured to be assigned to a different set of substrate nodes and links upon arrival of a new VN request. In [18], a solution termed as “VN assignment with reconfiguration” has been proposed which states that node and link assignments to an embedded VN request are not fixed for its lifetime and may be changed at the arrival of a new VN request in order to better utilize substrate resources. 3.2.6. Optimization Objective VNE is a resource constrained problem and in addition to the main objective of optimizing the use of substrate resources, proposed solutions for this problem have focused on several other factors as The solutions proposed in [9, 18] focus on maximizing the usage of substrate resources, while in [14] the focus has been on mapping of virtual networks to achieve high quality and resilience. In case, a substrate node or link fails, the virtual vertices or edges mapped on it should be moved quickly enough (adaptation time should be minimized) to other nodes or links which can satisfy resource requirements, which was the objective of distributed fault-tolerant embedding algorithm, as proposed in [15]. The objective of mapping virtual networks onto a common substrate in such a way that can enable a network to support any traffic pattern allowed by a general set of constraints while minimizing the network cost was presented in [16]. Using dynamic overlay topology reconfiguration, a solution was proposed in [17] to minimize the cost of using an overlay. The two types of costs considered were occupancy cost and reconfiguration The objective of maximizing the acceptance ratio and revenue was achieved by doing coordinated node and link mapping as presented in [13]. The goal of maximizing the number of accepted VNs by preallocating resources for nodes and solving link mapping based on multicommodity flow was proposed in [19]. The objective of minimizing the network cost by mapping VNs using a distributed method and in the process achieving balanced load-sharing among all substrate nodes was the focus of solution proposed in [20]. The goal of minimizing mapping time was achieved by using simulated annealing technique and presented in [21]. Table 2 shows how solutions to the VNE problem have handled challenges associated with it. 4. Hybrid BLA-BNRM Approach (HBNRM) VN mapping process starts by assigning vertices to nodes, then proceeds on to find -shortest paths [22] between each pair of mapped nodes, and finishes by mapping edges onto paths that satisfy their demand. Our approach is inspired to some extent by [9] as we use similar notations to denote both virtual and substrate networks. However, we use a different -shortest paths algorithm [22] than edge disjoint paths [9], as it gives us a better choice of mapping an edge on substrate paths. The proposed approach solves VNE problem by considering all basic constraints (Section 3.2.1), handles VN requests online (Section 3.2.3), type of mapping is centralized (Section 3.2.4) whereas optimization objective (Section 3.2.6) is to maximize acceptance ratio (MAR). This section will initially present a description of our vertex mapping approach and in the second phase, edge mapping approach will be described. 4.1. Vertex Mapping First step of the proposed solution starts by finding candidate nodes of the substrate, which can map vertices by satisfying their demands. In this phase, each vertex has to be mapped to a different node . Several approaches can be adopted for this purpose and each will affect how VNs get mapped as well as substrate resources are utilized in the process. In this paper, two existing (i.e., BLA and GNM) and one proposed approach (HBNRM) will be evaluated. Before going through the details of our vertex mapping approach (HBNRM), it is important to give definition of bottleneck as well as exhausted nodes. 4.1.1. Bottleneck Nodes (B(N)) The idea to minimize the use of substrate resources from bottleneck nodes and links was presented in [9], while, the concept of bottleneck links was also mentioned in [23]. Nodes and links having lack of residual capacities to map vertices/edges and hence resulting in rejection of a VN request were termed as bottlenecks in [10]. We proposed definitions for bottleneck nodes and links of a substrate in terms of their capability of mapping vertices and edges of VNs due to arrive in future in the mapping process in [11]. We define a node as a bottleneck if, it is unable to map two vertices (of highest capacity) of future VN requests. In other words, residual capacity of a bottleneck node is less than a certain value 4.1.2. Exhausted Nodes (E(N)) An exhausted node is a bottleneck node, whose resources get completely utilized (). We now describe our vertex mapping approach. In future work of [11] it was mentioned that one possible extension of that work could be to investigate how the two approaches (i.e., BLA and BNRM) could be combined in order to maximize the number of virtual networks that are mapped, while still avoiding bottleneck nodes. Another objective of the new approach should be to utilize benefits of both BLA (baseline approach) and GNM (greedy node mapping approach) while trying to minimize their disadvantages. In other words, we should be able to minimize bottleneck nodes of a substrate (GNM’s advantage) while trying to minimize the cost to map a VN (BLA’s advantage). We have named this approach as HBNRM (Hybrid BLA-BNRM) approach which is presented below. One important component of HBNRM is the use of node exhaustion limit (nel) values. Nel is a value which is used to make sure that a node does not become bottleneck. The vertex is only mapped on the node if, after mapping, the node has resources equal to or greater than nel. Nel values are used according to the rule defined below. 80/50 Rule for NEL Values We start by using a nel value (, defined above) which ensures that a node does not become bottleneck after mapping a vertex. This value is increased or decreased according to the following criteria. (i) When about eighty percent (80%) of nodes reach the set nel value in an interval (or request window), it is decreased to the next level and then same rule gets applied to the new value.Experiments have shown that once about eighty percent nodes reach a set nel value then it may be decreased otherwise, VNs may get dropped for next interval even though sufficient node resources maybe present in the substrate.(ii) If greater than fifty percent (50%) VNs get dropped in an interval in early stages of VN mapping process, then, nel value is increased to the next level.When VNs get rejected in early stages of mapping process then, it could mean that sufficient node resources are present in the substrate but link resources have started to exhaust as a result of mapping vertices on same nodes repeatedly. By increasing nel value it can be ensured that nodes which were not selected previously can now be selected for mapping of new VN requests and as a result more link resources could be made available.(iii) If greater than fifty percent (50%) VNs get dropped in an interval in later stages of VN mapping process, then, nel value is decreased to the next level.When VNs get rejected in later stages of mapping process, then it could mean that link resources have started to exhaust and a decrease in nel value may map vertices on different nodes and as a result some unused links may become available for edge mapping. So, nel value is increased or decreased when either of above conditions occurs. In Section 5.2, we will explain cases where nel values were increased or decreased and which condition of 80/50 rule was applied. Another important point about 80/50 rule is that it can be modified according to number of VNs considered for an interval (request window). In this paper, 50VNs constitute an interval (request window), if number of VNs are reduced to 40 for an interval this could change the rule to 85/55. Similarly if VN requests are increased to 60 then rule might become 75/40. The vertex mapping function for HBNRM can be given as The in (5), defines the time at which a VN arrives and is compared with a certain nel value. Vertex mapping function in (5) starts by checking all the vertices (V) and first selects the one which demands most processor resources (). The benefit of doing this is that if, the substrate cannot satisfy the demand of this vertex then the mapping process stops here for this VN and requirements of remaining vertices do not need to be checked, which saves amount of computations according to number of vertices in a VN. In this way, if demand of the first vertex is satisfied then the process is repeated for all remaining vertices moving on from vertex demanding most to the least resources. The vertex mapping algorithm is defined in Algorithm 1. VN requests are satisfied by using first come first served (FCFS) approach and the process begins by assigning vertices to unique substrate nodes according to the selected vertex mapping function (BLA, HBNRM, or GNM). In the next step, residual capacities of nodes (selected for mapping vertices of the VN) are generated. The edge mapping algorithm is called only if, all vertex requests of a VN can be satisfied at this stage. 4.2. Edge Mapping The next step is to map an edge () on a substrate path () containing one or more than one links. In the proposed solution, -shortest paths [22] are found for each edge. The next step is to calculate resources on a path. To achieve this objective, we take link with minimum resources in the path and match edge demand with it. If this link can satisfy edge request, then remaining links will surely be able to do that. The approach of mapping an edge on shortest of -shortest paths, which can satisfy its demand, termed as shortest path mapping (SPM), was presented in [9, 11]. Edge mapping function for the SPM can be given as The edge mapping algorithm is defined in Algorithm 2. Edge mapping phase of the solution assigns all edges to the substrate paths and is executed number of times the total edges in a VN. It starts by finding out shortest of -shortest paths (for ) for each edge of the VN. In the next step, resources on that path are calculated and edge demand is matched with it. If this path has sufficient resources to satisfy the edge demand, then it is selected for mapping that particular edge. Otherwise, the process continues till either a path among the -shortest paths can satisfy edge request or no path has sufficient resources to satisfy edge demand. In case, after mapping the initial edges of a VN, there comes a point when the requirements of a certain edge cannot be satisfied by the substrate, then the resources reserved initially for edges and vertices of a VN need to be returned to the substrate (Step3). At this point, if there are sufficient path resources to satisfy request of all the edges then the VN is completely mapped (VN request 5. Experimental Setup and Evaluation This section is divided into two parts; first describes experimental setup while second presents evaluation results. The proposed solution has been implemented using Matlab. 5.1. Experimental Setup Substrate networks have been generated using the BRITE tool [24] whereas, virtual networks have been created using Matlab. 5.1.1. Substrate Networks The proposed solution has been tested on four different substrate networks (, , , and ). Two of these networks ( and ) consist of 100 nodes and about 500 links [9, 11, 12], while and comprise of 100 nodes and about 300 links [25]. The node resource (CPU) as well as link resource (bandwidth) is assigned different values from 10 to 100 units. The size of substrates ( and ) can be compared with that of a medium sized ISP [9].: Node and link resources are randomly chosen from 20 to 100.: Node and link resources are randomly chosen from 10 to 100.: Node and link resources are randomly chosen from 20 to 100.: Node and link resources are randomly chosen from 30 to 100. The substrates and have more links (are densely connected) but contain different number of node and link resources. The substrates and have fewer links (are sparsely connected) and also contain different number of node and link resources. 5.1.2. Virtual Networks Two different sets of VNs have been mapped onto substrate networks; each set differs from the other, by number of edges. Set 1. Number of vertices of a VN is randomly chosen between 2 and 10 and vertices are randomly connected with the probability 0.3. Set 2. Number of vertices of a VN is randomly chosen between 2 and 10 and vertices are randomly connected with the probability 0.5. Set 2 is similar to the setup presented in [9, 11, 12, 18] while Set 1 resembles with the one given in [11, 25]. Vertex and edge resources in both sets are randomly chosen from 1 to 5. Two sets of VNs put different demand on substrate’s paths and can impact on how VNs are mapped. 5.2. Evaluation The main focus of evaluation is to analyze the effect of using three different vertex mapping approaches (BLA, GNM, and HBNRM). Evaluation is done by mapping sparsely and densely connected VNs on sparsely and densely connected substrates. Results are presented in the form of graphs and tables which show the impact of using each approach on mapping VN requests, the way resources are utilized, nodes become bottleneck and get exhausted. Results for densely connected substrates ( and ) will be presented first which will be followed by a discussion about sparsely connected substrates ( and ). The section ends with a summary of 5.2.1. Densely Connected Substrates ( and ) (a) VN Mapping Figures 3 to 8 represent evaluation results, where requested VNs are shown on horizontal axis, whereas mapped VNs by evaluated approaches and cost incurred in the process are presented on vertical When a substrate is densely connected, has higher number of resources (i.e., ), and a set of sparsely connected VNs (VN-Set 1) is mapped on it, then mapping results are almost similar for all methods (Figure 3). VNs put less demand on substrate’s paths and even if vertices are mapped on different locations in the substrate by each approach, sufficient resources are available for edge mapping and therefore, mapping results are almost similar. However, among three approaches, GNM has highest cost for VN mapping, whereas BLA has least cost (Figure 3). An almost similar VN mapping trend (like that of Figure 3) can be seen when a set of densely connected VNs (VN-Set 2) is mapped on (Figure 4). When the substrate is densely connected but has comparatively less number of resources (i.e., ) and a set of densely connected VNs (VN-Set 2) is mapped on it, then mapping results are quite different than previous substrate (), as shown by Figure 5. In case of GNM, vertices can be mapped at a distance from each other and since the substrate has fewer resources so, the edge demand can become difficult to fulfill. In this case, BLA and HBNRM give almost similar mapping of VN requests (Figure 5). Tables 3 to 20 present evaluation results for the rest of evaluation criteria where number of mapped VNs in each interval, cost incurred in the process, percentage of overall utilized resources as well as bottleneck and exhausted nodes are shown. Mapped VNs () and cost of mapped VNs () are shown for each interval in Tables 3 to 20, as compared to overall mapped VNs and mapping cost, shown in Figures 3 to 8. Percentage of utilized resources , bottleneck and exhausted nodes (, ) are also important to analyze, as they represent how each approach utilizes substrate’s resources. (b) Resource Utilization Resource utilization for Tables 3 to 20 should be viewed based on the following factors. (i) When same number of VNs are mapped by evaluated methods at a particular instant of time. The actual cost comparison between any compared approaches can be seen when same number of mapped VNs are analyzed from initial set of VNs. Cost is only incurred when VNs are mapped and once VN mapping decreases so will incurred cost. Secondly, since we have used random VNs so it would be unfair to compare an approach which maps more VNs having higher number of nodes and in the process incur more cost with the one which maps more VNs with lesser number of nodes and also costs less. Since, mapped VNs from initial set would be same for all the approaches so we will make cost comparison based on that. (ii) The overall VNs mapped by using a certain method. When more overall VNs are mapped by using a particular method then, it can incur more cost. For sparsely connected set of VNs (VN-Set 1) three approaches map similar number of VNs when initial 250 VN requests arrive on (Tables 3 to 5). At this point, average cost of mapping a VN, for BLA is 37.49, for GNM is 40.52, whereas for HBNRM is 38.71 units of substrate resources. So BLA’s cost of mapping VNs is the least whereas GNM’s mapping cost is the highest among three approaches. Overall as well, GNM uses 0.9% more resources as compared to BLA and 0.68% more when matched with HBNRM (Tables 3 to 5). For this set of VNs almost similar number of overall VNs is mapped by using either of three approaches (Tables 3 to 5). For densely connected set of VNs (VN-Set 2) three approaches map similar number of VNs when initial 100 VN requests arrive on (Tables 6 to 8). At this point, average cost of mapping a VN, for BLA is 57.05, for GNM is 67.83, whereas for HBNRM is 57.6 units of substrate resources. So, in this case as well BLA’s cost of mapping VNs is least whereas GNM’s mapping cost is the highest among three Overall, HBNRM uses 2.92% more resources as compared to BLA and 1.45% more when matched with GNM (Tables 6 to 8). For this set of VNs although overall mapped VNs is almost similar by using either of three approaches but their mapping trends are different and therefore, more overall resources are utilized by HBNRM (Tables 6 to 8). When a set of densely connected VNs (VN-Set 2) is mapped on the evaluated approaches map similar number of VNs when initial 50 VN requests arrive (Tables 9 to 11). At this point, average cost of mapping a VN, for BLA is 54.88, for GNM is 64.24, whereas for HBNRM is 55.5 units of substrate resources. So, in this case as well BLA’s cost of mapping VNs is the least whereas GNM’s mapping cost is the highest among three approaches. Overall, BLA uses 7.18% more resources as compared to GNM (Tables 9 and 10). However, BLA maps 6.5% more VNs as compared to GNM as well (Tables 9 and 10). When matched with HBNRM although BLA maps 1% more VNs (Tables 9 and 11) but their mapping trends are different and it uses 0.26% less resources (Tables 9 and 11). (c) Bottleneck Nodes Nodes start to become bottleneck from VN interval-1 (VNs 1–50) when BLA is used for mapping on or (Tables 3, 6, and 9). In case of GNM there are no bottleneck nodes till the sixth interval on (Tables 4 and 7). When approach used is HBNRM then also nodes start to become bottleneck after sixth interval on (Tables 5 and 8). For HBNRM nodes do not become bottleneck as long as nel value is set according to bottleneck limit () as defined in Section 4.1. Starting nel value for nodes in Tables 5, 8, 11, 14, 17, and 20 is 10 (), since, maximum capacity of any vertex for both sets of VNs is 5. If nel value needs to be decreased to comply with either condition (i) or (iii) of 80/50 rule for nel values (Section 4.1) it is initially set to 5 for these sets of VNs (). The objective behind this new value is that a node will still be able to map at least one vertex of highest capacity of future requests. On substrate for both sets of VNs nel value needs to be decreased for seventh interval to comply with condition (i) of 80/50 rule for nel values (Tables 5 and 8). So, bottleneck nodes start to appear from there onwards. In case of GNM there are no bottleneck nodes on (Table 10), as it maps fewer VNs than compared approaches (Tables 9 to 11). When approach used is HBNRM then, nodes start to become bottleneck after seventh interval (Table 11). According to condition (i) of 80/50 rule for nel values (Section 4.1) nel value needs to be decreased for eighth interval (Table 11). So, bottleneck nodes start to appear in that interval. (d) Exhausted Nodes When BLA is used on or , nodes start to exhaust from first interval (Tables 3, 6, and 9). In case of GNM only 9% nodes exhaust for VN-Set 1 on (Table 4) whereas no node resource exhausts for VN-Set 2 (Table 7), similar trend can be seen for HBNRM where no node resource exhausts (Tables 5 and 8). When GNM and HBNRM are used on , no node resources exhaust (Tables 10 and 11). 5.2.2. Sparsely Connected Substrates ( and ) (a) VN Mapping When a substrate is sparsely connected (i.e., ) and a set of sparsely connected VNs (VN-Set 1) is mapped on it, then mapping results are almost similar for all methods (Figure 6). VNs put less demand on substrate’s paths and even if vertices are mapped on different locations in the substrate by each approach, sufficient resources are available for edge mapping and therefore, mapping results are almost similar (Figure 6). When a set of densely connected VNs (VN-Set 2) is mapped on the same substrate (), then mapping results are quite different than previous set of VNs (Figure 7). In case of BLA, vertices can be mapped repeatedly on same nodes and since the substrate has fewer link resources they can exhaust early. Therefore, the edge demand can become difficult to fulfill. In this case, GNM and HBNRM give almost similar mapping of VN requests until the arrival of 300VNs (Figure 7). However HBNRM overall, maps more VNs as compared to GNM (Figure 7). When the substrate is sparsely connected, has comparatively higher number of resources (i.e., ) and a set of densely connected VNs (VN-Set 2) is mapped on it, then too trend of mapping VNs by evaluated approaches is quite similar to that of previous substrate (, Figure 7), as shown by Figure 8. In this case, GNM and HBNRM give almost similar mapping of VN requests until the arrival of 250VNs (Figure 8). However HBNRM overall, maps more VNs as compared to GNM (Figure 8). (b) Resource Utilization For sparsely connected set of VNs (VN-Set 1) three approaches map similar number of VNs when initial 100 VN requests arrive on (Tables 12 to 14). At this point, average cost of mapping a VN, ) for BLA is 38.88, for GNM is 45.04, whereas for HBNRM is 38.82 units of substrate resources. In this case, BLA and HBNRM’s cost of mapping VNs is almost similar whereas GNM’s mapping cost is the highest among three approaches. Overall BLA uses 0.62% more resources as compared to GNM and 1.47% when matched with HBNRM (Tables 12 to 14). However, BLA maps 3.5% more VNs as compared to GNM and 1.75% more when matched with HBNRM. When a set of densely connected VNs (VN-Set 2) is mapped on then, the evaluated approaches do not give similar mapping results from initial set of VNs (Tables 15 to 17). Therefore, cost comparison for this set of VNs cannot be presented. Overall, BLA uses 10.54% less resources as compared to GNM, and 12.76% less when matched with HBNRM (Tables 15 to 17). However in this case, BLA maps 9.5% less VNs as compared to GNM (Tables 15 and 16) and 14.5% less when matched with HBNRM (Tables 15 and 17). Three approaches map similar number of VNs when initial 50 VN requests arrive on (Tables 18 to 20). At this point, average cost of mapping a VN, for BLA is 57.68, for GNM is 82.86, whereas for HBNRM is 59.94 units of substrate resources. So, in this case as well BLA’s cost of mapping VNs is least whereas GNM’s mapping cost is highest among three approaches. Overall HBNRM uses 12.37% more resources as compared to BLA (Tables 18 and 20) and 3% more when matched with GNM (Tables 19 and 20). However in this case, HBNRM maps 11% more VNs as compared to BLA (Tables 18 and 20) and 7.75% more when matched with GNM (Tables 19 and 20). (c) Bottleneck Nodes ( Nodes start to become bottleneck from VN interval-1 (VNs 1–50) when BLA is used for mapping on or (Tables 12, 15, and 18). In case of GNM there are no bottleneck nodes till the seventh interval for VN-Set 1 on (Table 13), and for VN-Set 2 no node becomes bottleneck (Table 16). When approach used is HBNRM then nodes start to become bottleneck after sixth interval in case of VN-Set 1 (Table 14). For HBNRM, nodes do not become bottleneck as long as nel value is set according to bottleneck limit () as defined in Section 4.1. However, according to condition (i) of 80/50 rule for nel values (Section 4.1) it needs to be decreased for seventh interval for VN-Set 1 (Table 14). In case of VN-Set 2 no node becomes bottleneck (Table 17). Less number of VNs gets mapped for VN-Set 2 as compared to VN-Set 1 for both GNM and HBNRM and therefore no nodes become bottleneck (Tables 16 and 17). According to condition (iii) of 80/50 rule for nel values (Section 4.1) it needs to be decreased for next interval for VN-Set 2 (Table 17) if more VN requests arrive as more than 50% VNs get dropped in eighth interval (Table 17). In case of GNM there are no bottleneck nodes on (Table 19). When approach used is HBNRM then, also there are no bottleneck nodes (Table 20). However, according to condition (ii) of 80/50 rule for nel values (Section 4.1) nel value needs to be increased for fifth interval (Table 20). Number of mapped VNs is below 50% in the fourth interval (Table 20) and therefore nel value is increased to the next level for fifth interval and again 80/50 rule is applied on that value. Moreover, according to condition (iii) of 80/50 rule for nel values (Section 4.1) it needs to be decreased for next interval if more VN requests arrive as more than 50% VNs get dropped in eighth interval (Table 20). (d) Exhausted Nodes When BLA is used, nodes start to exhaust from first interval on and (Tables 12, 15, and 18). In case of GNM no node resource exhausts on and (Tables 13, 16, and 19), similar trend can be seen for HBNRM where no node resource exhausts (Tables 14, 17, and 20). Summary 1. When a substrate has higher number of resources and is densely connected, then the three approaches give almost similar results in terms of number of mapped VNs when either sparsely or densely connected VNs are mapped (Figures 3 and 4, Tables 3 to 8). Resources by each approach are utilized in a different manner. BLA uses least whereas GNM uses highest number of resources (Figures 3 and 4, Tables 3 to 8). However, when the substrate is densely connected but has lesser number of resources and a set of densely connected VNs are mapped then BLA maps more VNs as compared to GNM (Figure 5, Tables 9 and 10). On a sparsely connected substrate, when a set of sparsely connected VNs get mapped then also compared approaches give almost similar mapping results (Figure 6, Tables 12 to 14). However, when the substrate is sparsely connected but a set of densely connected VNs are mapped then GNM maps more VNs as compared to BLA (Figures 7 and 8, Tables 15 and 16, Tables 18 and 19). The HBNRM approach is either close to or gives better VN mappings than compared approaches on either sparsely or densely connected substrates (Figures 3 to 8, Tables 3 to 20). The flexibility of 80/ 50 rule for nel values facilitates in doing better vertex mapping in changing scenarios and thus good mapping results can be achieved regardless of the type of substrate. HBNRM comes close to GNM in terms of minimizing complete exhaustion of node resources of a substrate, and also is near to BLA in terms of reducing mapping cost of VNs (Tables 3 to 20). 6. Conclusion and Future Work We have proposed an approach to virtual network embedding which not only minimizes complete exhaustion of substrate nodes but also does that at the cost of utilizing comparatively less resources than an existing approach. Main focus of this approach is to do cost efficient mapping of vertices on those nodes of a substrate which after mapping, do not become bottleneck for future VN requests. The proposed approach (referred to as HBNRM) has been compared with existing vertex mapping methods BLA and GNM. BLA does not take node resource exhaustion into consideration which GNM does but can map VNs at a higher cost. The number of virtual networks that can be assigned to a substrate has been investigated for varying distributions of VN requests and substrate topologies. The results show that BLA is favorable for densely connected substrates, while GNM gives better results for sparsely connected ones. HBNRM, on the other hand either gives almost similar or better VN mappings for both sparsely as well as densely connected substrates when compared with BLA and GNM. One possible extension of this work is to include the feature of adaptability to either deal with change in user’s demands after a VN gets mapped on the substrate or, handle node and link failures. 1. J. S. Turner and D. E. Taylor, “Diversifying the Internet,” in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM'05), pp. 755–760, December 2005. View at Publisher · View at Google Scholar · View at Scopus 2. F. A. Shaikh, S. McClellan, M. Singh, and S. K. Chakravarthy, “End-to-end testing of IP QoS mechanisms,” Computer, vol. 35, no. 5, pp. 80–87, 2002. View at Publisher · View at Google Scholar · View at Scopus 3. T. Anderson, L. Peterson, S. Shenker, and J. Turner, “Overcoming the internet impasse through virtualization,” Computer, vol. 38, no. 4, pp. 34–41, 2005. View at Publisher · View at Google Scholar · View at Scopus 4. N. Feamster, L. Gao, and J. Rexford, “How to lease the Internet in your spare time,” SIGCOMM Computer Communication Reiew, vol. 37, no. 1, pp. 61–64, 2007. 5. A. Nakao, “Network virtualization as foundation for enabling new network architectures and applications,” IEICE Transactions on Communications, vol. E93-B, no. 3, pp. 454–457, 2010. View at Publisher · View at Google Scholar · View at Scopus 6. A. Bavier, N. Feamster, M. Huang, L. Peterson, and J. Rexford, “In VINI Veritas: realistic and controlled network experimentation,” in Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM '06), pp. 3–14, Pisa, Italy, 2006. View at Publisher · View at Google Scholar 7. Planetlab, http://www.planet-lab.org/. 8. M. Yu, Y. Yi, J. Rexford, and M. Chiang, “Rethinking virtual network embedding: substrate support for path splitting and migration,” ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17–29, 2008. 9. N. Farooq Butt, M. Chowdhury, and R. Boutaba, “Topology-awareness and reoptimization mechanism for virtual network embedding,” Lecture Notes in Computer Science, vol. 6091, pp. 27–39, 2010. View at Publisher · View at Google Scholar · View at Scopus 10. A. Razzaq, P. Sjödin, and M. Hidell, “Minimizing Bottleneck Nodes of a Substrate in Virtual Network Embedding,” in In Proceedings of the 2nd IFIP International Conference Network of the Future (NoF'11), Paris, France, November 2011. 11. J. Lischka and H. Karl, “A virtual network mapping algorithm based on subgraph isomorphism detection,” in Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM '09), 2009. 12. N. M. Mosharaf, K. Chowdhury, M. R. Rahman, and R. Boutaba, “Virtual network embedding with coordinated node and link mapping,” in 28th Conference on Computer Communications (INFOCOM '09), pp. 783–791, April 2009. View at Publisher · View at Google Scholar · View at Scopus 13. J. Shamsi and M. Brockmeyer, “QoSMap: QoS aware mapping of virtual networks for resiliency and efficiency,” in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM'07), November 2007. View at Publisher · View at Google Scholar · View at Scopus 14. I. Houidi, W. Louati, D. Zeghlache, P. Papadimitriou, and L. Mathy, “Adaptive virtual network provisioning,” in Proceedings of the 2nd ACM SIGCOMM workshop on Virtualized infrastructure systems and architectures, pp. 41–48, ind, September 2010. View at Publisher · View at Google Scholar · View at Scopus 15. J. Lu and J. Turner, “Efficient mapping of virtual networks onto a shared substrate,” Tech. Rep. WUCSE-2006-35, Washington University, 2006. 16. J. Fan and M. H. Ammar, “Dynamic topology configuration in service overlay networks: a study of reconfiguration policies,” in 25th IEEE International Conference on Computer Communications (INFOCOM '06), April 2006. View at Publisher · View at Google Scholar · View at Scopus 17. Y. Zhu and M. Ammar, “Algorithms for assigning substrate network resources to virtual network components,” in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), April 2006. View at Publisher · View at Google Scholar · View at Scopus 18. W. Szeto, Y. Iraqi, and R. Boutaba, “A multi-commodity flow based approach to virtual network resource allocation,” in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM'03), pp. 3004–3008, December 2003. View at Scopus 19. I. Houidi, W. Louati, and D. Zeghlache, “A distributed virtual network mapping algorithm,” in IEEE International Conference on Communications (ICC '08), pp. 5634–5640, May 2008. View at Publisher · View at Google Scholar · View at Scopus 20. R. Ricci, C. Alfeld, and J. Lepreau, “A solver for the network testbed mapping problem,” ACM Computer Communication Review, vol. 33, no. 2, pp. 65–81, 2003. 21. J. Y. Yen, “Finding the K shortest loopless paths in a network,” Management Science, vol. 17, no. 11, pp. 712–716, 1971. 22. Y. Zhu, Routing, resource allocation and network design for overlay networks [Ph.D. thesis], College of computing, Georgia Institute of Technology, 2006. 23. A. Medina, A. Lakhina, I. Matta, and J. Byers, “BRITE: an approach to universal topology generation,” in Proceedings of the 9th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS '01), pp. 346–353, August 2001. View at Scopus 24. A. Razzaq and M. S. Rathore, “An approach towards resource efficient virtual network embedding,” in Proceedings of the 2nd International Conference on Evolving Internet (Internet '10), pp. 68–73, esp, September 2010. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/jece/2012/358647/","timestamp":"2014-04-17T17:39:23Z","content_type":null,"content_length":"202481","record_id":"<urn:uuid:dd833519-77dc-48e0-9e1a-2524e82b2099>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Horner's Method October 6th 2008, 05:26 PM Horner's Method I'm attempting to evaluate 3x^2 + x + 1 at x = 2. Also, how many multiplications are used by this algorithm to evaluate a polynomial of degree n at x = c? procedure Horner(c, a0, a1, a2, ... , an : real numbers) y:= an for i := 1 to n y := y * c + a (n-i) {y = an c^n + a(n-1) c^n-1 + ... + a1c + a0} October 6th 2008, 11:06 PM a0=1, a1=1, a2=3 put y=a3=3 first trip second trip Also, how many multiplications are used by this algorithm to evaluate a polynomial of degree n at x = c? procedure Horner(c, a0, a1, a2, ... , an : real numbers) y:= an for i := 1 to n y := y * c + a (n-i) {y = an c^n + a(n-1) c^n-1 + ... + a1c + a0} There is 1 multiplication per trip around the loop, and the final trip count is n, so there are n multiplications. October 7th 2008, 08:08 AM n additions? Would there also be n additions used to evaluate a polynomial of degree n at x = c? (Not counting additions used to increment the loop variable) October 7th 2008, 09:16 AM
{"url":"http://mathhelpforum.com/discrete-math/52327-horners-method-print.html","timestamp":"2014-04-16T17:14:14Z","content_type":null,"content_length":"6089","record_id":"<urn:uuid:e037eda1-7059-4ab4-a706-9cf1c4a898ed>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Bible Story Activities - Weights Use the following Bible story activities to teach the children in your Sunday School class about the units of weight mentioned in the story you are studying. For more information on weights in biblical times, go to Weights in Biblical Times A Ring and Bracelets for Rebekah. Genesis 24:22. The gold ring Abraham's servant gave to Rebekah was either a nose ring or an earring. The ring weighed one beka or a half shekel, which is about equivalent to the weight of a US Jefferson 5¢ coin. The golden bracelets weighed ten shekels which is about equivalent to 1/4 pound or the weight of ten US Kennedy 50¢ coins. Teach the children about the weight of a beka and a shekel by showing them a ring and bracelets comparable in weight to the ones given to Rebekah Joseph Sold Into Slavery. Genesis 37:12-28. Joseph's brothers sold him as a slave to the Ismaelites for twenty shekels of silver. Twenty shekels of silver are about equivalent to the weight of twenty US Kennedy 50¢ coins. The Tabernacle and its Furnishings. Exodus 37-38. The weight of the gold offered by the Israelites for use in building the tabernacle totaled 29 talents and 730 shekels (Exodus 38:24). This equals about 2,200 pounds of gold. If gold is valued at $250 per troy ounce, the value of this gold is about $8 million. Each person counted in the census contributed one beka, or half a shekel of silver (Exodus 38:25-26). One beka is about equivalent to the weight of a US Jefferson 5¢ coin. The weight of the silver collected totaled 100 talents and 1,775 shekels (Exodus 38:25). This equals about 7,550 pounds of silver. If silver is valued at $4 per troy ounce, the value of this silver is about $440,000. The weight of the bronze offered by the Israelites for use in building the tabernacle totaled 70 talents and 2,400 shekels (Exodus 38:29). This equals about 5,300 pounds of bronze. The craftsmen used one talent of gold to make the lampstand for the tabernacle. A talent is about 75 pounds. Consider the following to help the children visualize the weights of the gold, silver and bronze used in the construction of the tabernacle. The weight of the lampstand can be represented by a child or children weighing about 75 pounds. The weight of the bronze would be about equal to the weight of a Chevrolet Suburban or heavy duty pickup truck. Go outside and see if you can see one. The weight of the gold would be about equal to the weight of all the people and other stuff you could fit in a Chevrolet Suburban or heavy duty pickup truck. The weight of the silver would be about equal to the weight of a fully loaded Chevrolet Suburban or heavy duty pickup truck. The Value of a Male or Female Child. Leviticus 27:5. The Israelites gave a value of silver when they made a vow to dedicate a person to the Lord. The value of a male between the ages of five and twenty was set at 20 shekels of silver. The value of a female between the ages of five and twenty was set at ten shekels of silver. You can use US Kennedy 50¢ coins to help the children visualize the weight of a shekel. The weight of twenty 50¢ coins is about equal to the weight of 20 shekels. Use ten 50¢ coins to represent the weight of ten shekels. Redemption of the Firstborn. Numbers 18:16. Every Israelite offered a redemption price of five shekels of silver in place of the firstborn son and the firstborn male of unclean animals. The weight of five shekels is about 1/8 pound or two ounces. You can show the children five US Kennedy 50¢ coins to give them an idea of what five shekels of silver would have looked like. The Commanders' Offering of Gold. Numbers 31:52. The gold articles offered to the Lord by the commanders of the army weighed 16,750 shekels (Numbers 31:52), or about 420 pounds. See how many children in the class it takes to equal 420 pounds. Gideon's Golden Earrings. Judges 8:26. The golden earrings the Israelites gave to Gideon weighed 1,700 shekels (Judges 8:26), or about 43 pounds. The earring Abraham's servant gave to Rebekah weighed 1/2 shekel (see above). Let the children guess how many earrings the Israelites gave to Gideon. It was probably close to 3,400 earrings. Goliath's Armor and Spear. 1 Samuel 17:5-7. Goliath's coat of armor weighed 5,000 shekels, or about 125 pounds. The point of his spear weighed 600 shekels, or about 15 pounds. Let the children see if they can safely lift a 15 pound weight without straining. This will help the children understand how big and strong Goliath was. David's Crown. 2 Samuel 12:30. After he captured Rabbah, David took the gold crown from the head of the king of Rabbah. The crown weighed one talent or 75 pounds. The weight of the crown can be represented by a child or children weighing 75 pounds. The Philistine's Spearhead. 2 Samuel 21:16. Ishbi-Benob's spear had a spearhead that weighed 300 shekels, or about 7.5 pounds. Let the children see if they can safely lift a 7.5 pound weight without straining. Solomon's Income. 1 Kings 10:14. Each year Solomon received gold weighing 666 talents, not including revenues from merchants and traders and the Arabian kings and governors of the land. This equals about 50,000 pounds of gold. This would be about equal to the weight of ten golden Chevrolet Suburbans or heavy duty pickup trucks. If gold is valued at $250 per troy ounce, this part of Solomon's yearly income had a value of $182 million. Try to explain that this is an enormous amount of money. Solomon's Shields. 1 Kings 10:17. King Solomon made small shields of gold, each of which weighed three minas, or about 3.75 pounds. Let the children see how long they can hold a "shield" weighing 3.75 pounds. Gold and Silver for the Temple. 1 Chronicles 22:14. King David set aside 100,000 talents of gold and 1,000,000 talents of silver to be used in building the temple of the Lord. This equals about 7.5 million pounds of gold and 75 million pounds of silver. The gold is about equal to the weight of 19 freight locomotives. The silver is about equal to the weight of 190 freight locomotives. If gold is valued at $250 per troy ounce and silver is valued at $4 per ounce, the value of the gold and silver David set aside for the building of the temple totaled about $32 billion. Try to explain that this is an enormous amount of money. This information is presented to help teachers serving in a Christian Preschool Ministry or a Children's Ministry or a Sunday School class teach children what the Bible says about God and the way He wants us to live.
{"url":"http://www.sundayschoolresources.com/biblestoryactivities2.htm","timestamp":"2014-04-16T16:02:04Z","content_type":null,"content_length":"21402","record_id":"<urn:uuid:4fca6fbb-a516-40a0-97d9-3cccf8ab8713>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
wave equation January 26th 2009, 06:10 AM #1 Jan 2009 wave equation :=1/2(Tyxx+ ½ytt ) and P := Tyxyt. Show that Et = Px and TEx = ½Pt and deduce that E and P are also solutions of the wave equation, i.e. xx = ½Ett and TPxx = ½Ptt. What are the physical interpretations of E and P? I am unsure of this, the physical interpretation of E and P.. are they the nodes? many thanks Well my guess is that E is the total energy and P total momentum of the wave, you can check for units if they are the same ones as they should be, anyway that's my guess. :=1/2(Tyxx+ ½ytt ) and P := Tyxyt. Show that Et = Px and TEx = ½Pt and deduce that E and P are also solutions of the wave equation, i.e. xx = ½Ett and TPxx = ½Ptt. What are the physical interpretations of E and P? I am unsure of this, the physical interpretation of E and P.. are they the nodes? many thanks Are you sure on $<br /> E= \frac{1}{2} \left(Ty_{xx}+\frac{1}{2}y_{tt} \right)$? I mean the second order derivatives. January 28th 2009, 12:54 AM #2 Junior Member Jan 2009 January 31st 2009, 06:31 AM #3
{"url":"http://mathhelpforum.com/advanced-applied-math/69964-wave-equation.html","timestamp":"2014-04-20T04:55:14Z","content_type":null,"content_length":"41724","record_id":"<urn:uuid:ab483025-da70-4b48-9e3d-cb05e43400ff>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
A senior scholar reports on S. Fortier's presentation at the CMS meeting A senior scholar reports on S. Fortier’s presentation at the CMS meeting First there was the open letter to the industry minister by 327 mathematical scientists, including 27 Canada Research Chairs and 35 fellows of the Royal Society of Canada. Then came the public letter by 16 members of the Evaluation Group 1508 for Mathematics and Statistics, as well as various individual letters to Isabelle Blain. All convey serious concerns about the new evaluation procedures of the Discovery Grants and the disastrous funding decisions they caused in the 2011 competition. This eventually led NSERC’s President Suzanne Fortier to travel to the Edmonton meeting of the Canadian Mathematical Society on June 03, in order to “communicate directly with the research community, and not through blogs or letters to the press”. I wasn’t there, but Walter Craig, CRC, FRSC, Killam Fellow, and chair of the Math/NSERC Liaison committee was. He filed the following report –that we are posting here with his permission– for the benefit of Canada’s mathematical research community. Friday June 3, 2011 Edmonton CMS meeting Notes by Walter Craig (with thanks for corrections by N. Reid) Jacques Hurtubise introduced Suzanne Fortier as President of NSERC, and described her role as being in the “position of stewardship of Canadian science”. Suzanne Fortier: Opening statement to the effect that she is working for the success and benefit of Canada, including the research community, but also other communities, both national and international. Examples of scientific concerns and benefits she cites include climate change and energy resources. Trust is threatened, between NSERC and the mathematical sciences community. She says that we need to communicate, but not through blogs or letters to the press. (*1) There is an increased role of science and technology in national policies, mainly in entrepreneurship and in IP development. On the other hand, the government has committed itself to an overall 5% cut in order to balance the budget. This will be spread evenly across government agencies. On top of this, the government, and hence NSERC, wants to invest in “innovation”, shifting its emphasis further from basic research. NSERC total budget has grown 84% between 2001 and 2011, mostly in “innovation” and in “people” (namely CERC’s, scholarships). The Discovery Grant total budget has grown 40% in the same period. SF statement that, while there is some decrease in the number of grants in Math & Stats, the total “money is not decreasing”. At the same time, SF acknowledges that there is a serious problem with the budget in this year’s competition. She blames the Math & Stats community for this, with the following reasons: * Dynamics of the Math & Stats community: (*2) - many retirees - fewer entering and early career researchers - departments are not hiring young researchers * SF critizes the Math & Stats success rates, giving data which proports to show that our success rates are out of line with other disciplines. (*3) * SF blames the math community for its sense of entitlement to research grants, and its “cosiness of reviewers and the mathematics community” (*4). She states that, but for a few changes, the old review system is still in place. * SF blames the members of the Evaluation Group for the `bin’ distributions, for placing too many proposals in bin J, and for `bin’ grade inflation. - evidence of bin inflation is the fact that the EG placed 8 people in mathematics into the `bins’ A B and C. (*5) - This evidence was used in the decision to split the Math budget from the Statistics budget, protecting the latter from the deep cuts experienced by mathematicians whose proposals were rated in the middle bins. Question from the audience: What are the dollar values for grant awards for bin A in Chemistry, as compared to bin A in Math & Stats. What are the values for grants in bin J in Math & Stats vs for Chemistry. [This question was not answered (*6)] SF stated the Math & Stats competition budget to be at $3M for this DG competition [actually it is $3,007K, a drop of 13.9% from 2010, and below our previous estimate]. She showed a detailed table which gave the NSERC investment in different NSERC programs for the various EGs. She compared Math & Stats DG ~ $18M with CS DG ~ $27M, and pointed out that in the MRS program Math & Stats has $4.1M while CS has essentially $0. The table also showed the MRS budget in Physics to be $26.8M, which includes the CITA astrophysics institute in Toronto, but this was not pointed out by SF. Another indicator of poor Math & Stats performance that SF cited is that our discipline has 9.5% of the Discovery Grant holders, but only 5.4% of the CRCs. (*7) As to the role of the Long Range Plan, SF said that “by itself the LRP will not increase funding, but it may help the community identify untapped resources”, advising members of the community to – “Don’t talk about what you need, but how you can contribute”. One point of light that SF mentioned is that Rita Colwell, is now chairing the CCA committee `Expert panel: Science performance and research funding’. She is a former Director of the NSF, and has been a supporter of basic research and of the role of mathematics in it. This panel will write a report in 2012 which is supposed to indicate fair and appropriate levels of research funding for future Canadian The talk ended with a series of questions to SF and comments from the audience. Notes by Walter Craig: *1) We agree completely that open lines of communication between NSERC and the mathematics community are very important. On the other hand, we have had only partial success in attempts at direct communication with the NSERC directorate through e-mail and through other means, especially with regards to our concerns over the DG competition results this year. *2) SF stated that Math & Stats lost 600K of our budget in non-returning applicants. “You are not replenishing your faculty at universities at the same rate as other disciplines”. However this actual data has not been released, neither in the presentation of SF nor by other means. We would very much like to have it in quantitative terms. In particular the mathematics community wants to know how the budget for the 2011 DG competition was calculated, for Math & Stats as well as for the other EGs. I note that most other EGs have not had such drastic budget cuts imposed on them in this year’s competition (chemistry is an exception), while Math & Stats decreased by 13.9% over one year, and 22.8% over the past five years. *3) I do not believe that this statement is supported by the data, our success rates lie around the middle of the other EGs, as one can check from the NSERC release *4) This is an absolutely unwarranted criticism. I understand that the international review was prompted by the need for NSERC to direct funding towards merit rather than towards PI history, so this particular criticism could have been directed at the whole system. But that is not what I understood from her comments. *5) This argument holds very little water. The returning grants over $40K (those that would have been in bins A, B and C if they existed at the time) number 12, whose total budget carried into the competition is $596K. The number of new grant allocations that are over $40K is 10, where 8 of them are in A, B and C bins. The total budget for them is $535K. This is far from bin inflation, and very far from draining the budget, rather the contrary. *6) This is information that has now been released to the LRP. The Chemistry bin A is funded at $152K and bin J is at $30K, while for comparison, computer science bin A is at $100K and bin J is at $19K. This year’s Math & Stats bin A is worth $52K and bin J is at $10K. *7) Since CRCs are allocated proportionally to NSERC dollars, and not by numbers, while Math & Stats grants are well known to be underfunded, this is a misuse of the statistics. 5 Responses to A senior scholar reports on S. Fortier’s presentation at the CMS meeting 1. Apropos of this blog posting, a challenge to Dr. Fortier’s arguments regarding impact and relevance is found in the 13 July issue of Nature: Nature 475,166–169(14 July 2011) 2. Pingback: You are not alone! | Piece of Mind 3. Pingback: Grade inflation, instability and uncertainty in Discovery Grant competitions | Piece of Mind This entry was posted in Op-eds, R&D Policy. Bookmark the permalink.
{"url":"http://nghoussoub.com/2011/06/18/a-senior-scholar-reports-on-s-fortiers-presentation-at-the-cms-meeting/","timestamp":"2014-04-17T06:48:19Z","content_type":null,"content_length":"68378","record_id":"<urn:uuid:076e3ed3-ef7c-4afb-8421-a78928a78ebd>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
ROBERT F. O'CONNELL PUBLICATIONS (with PDF’s) 1. W. R. Johnson, R. F. O'Connell, C. J. Mullin, "Coulomb Field Effects on the Decay of Bound Polarized Muons," Phys. Rev. 124, 904 (1961). 2. R. F. O'Connell and L. O'Raifeartaigh, "On a Generalization of the Gellmann-Rosenfeld Triangle for ∑ Decays," Phys. Lett. 3, 197 (1963). 3. R. F. O'Connell, "K-Shell Internal Conversion Coefficients at Threshold," Nuclear Phys. 45, 142 (1963). 4. C. O. Carroll and R. F. O'Connell, "Third Order Coulomb Wave Function and Single Quantum Annihilation," Phys. Rev. 132, 2540 (1963). 5. S. Minami and R. F. O'Connell, "X^o Meson (960 MeV) Production in K^--P Collision and Determination of its Spin-Parity," Nuovo Cimento 34, 504 (1964). 6. C. O. Carroll and R. F. O'Connell, "The Shape of the Low-Energy Spectrum for Coulombic Interactions," Phys. Rev. Lett. 14, 840 (1965). 7. R. F. O'Connell and C. O. Carroll, "Internal Conversion Coefficients: General Formulation for all Shells and Application to Low Energy Transitions," Phys. Rev. 138, B1042 (1965). 8. R. F. O'Connell and D. R. Tompkins, "Generalized Conservation Laws for Free Field with Mass," Nuovo Cimento 38, 1088 (1965). 9. R. F. O'Connell and C. O. Carroll, "Screening Corrections in Problems Involving Bound Electrons," Nuovo Cimento 38, 1431 (1965). 10. R. F. O'Connell and D. R. Tompkins, "Generalized Solutions for Massless Free Fields and Consequent Generalized Conservation Laws," J. Math. Phys. 6, 1952 (1965). 11. R. F. O'Connell and D. R. Tompkins, "Physical Interpretation of Generalized Conservation Laws," Nuovo Cimento 39, 391 (1965). 12. R. F. O'Connell and C. O. Carroll, "Internal Conversion Coefficients: General Formulation for all Shells and Application to Low and High Energy Transitions," in "Internal Conversion Processes," ed. by J. Hamilton (Academic Press, New York 1965), P.333. 13. C. O. Carroll and R. F. O'Connell, "Internal Conversion Coefficients for High Energy Transitions," Nucl. Phys. 80, No. 3, 500 (1966). 14. R. F. O'Connell, "Cosmic Ray Electron Spectrum and the Universal Black-Body Radiation at 3^o K," Phys. Rev. Lett. 17, 1232 (1966). 15. S. Sofia and R. F. O'Connell, "Evolution of the Radio Spectral Index of Supernova Remnants," Zeit. f. Astrophys. 65, 498 (1967). 16. R. F. O'Connell and C. O. Carroll, "Internal Conversion Coefficients at Gamma-Ray Threshold Energies," Nuclear Data A3, 287 (1967); A4, 320(E) (1968). 17. R. F. O'Connell and S. Sofia, "The Cosmic Ray Electron Spectrum in a Disc-Halo Galactic Model," Nuovo Cimento 50, 359 (1967). 18. R. F. O'Connell and A.Salmona, "Radiation of Gravitational Waves in Brans-Dicke General Relativity Theory," Phys. Rev. 160, 1108 (1967). 19. R. F. O'Connell, "Schiff's Proposed Gyroscope Experiment as a Test of the Scalar-Tensor Theory of General Relativity," Phys. Rev. Lett. 20, 69, 238(E) (1968). 19E. R. F. O'Connell, "Schiff's Proposed Gyroscope Experiment as a Test of the Scalar-Tensor Theory of General Relativity," Phys. Rev. Lett. 20, 69, 238(E) (1968). 20. R. F. O'Connell, "Regression of the Node of the Orbit of Mercury due to the Solar Quadrupole Moment," Ap. J. 152, L11 (1968). 21. R. F. O'Connell, "Calculation of the General Relativistic Perihelion Shift using Isotropic Coordinates," Am. J. Phys. 36, 757 (1968). 22. R. F. O'Connell, "Effect of the Anomalous Magnetic Moment of the Electron on Spontaneous Pair Production in a Strong Magnetic Field," Phys. Rev. Lett. 21, 397 (1968). 23. R. F. O'Connell, "Motion of a Relativistic Electron with an Anomalous Magnetic Moment in a Constant Magnetic Field," Phys. Lett. 27A, 391 (1968). 24. C. O. Carroll and R. F. O'Connell, "High Energy K Conversion Coefficients," Phys. Lett. 28A, 105 (1968). 25. R. F. O'Connell, "Effect of the Anomalous Magnetic Moment of the Electron on the Non-Linear Lagrangian of the Electromagnetic Field," Phys. Rev. 176, 1433 (1968). 26. R. F. O'Connell, "Universal 2.7^o K Black Body Radiation," Bull. Am. Phys. Soc. 13, 1714 (1968). 27. C. O. Carroll and R. F. O'Connell, "High Energy K Conversion Coefficients for ^114Cd and ^150Sm," Nucl. Phys. A125, 637 (1969). 28. R. F. O'Connell and J. J. Matese, "Effect on a Constant Magnetic Field on the Neutron Beta Decay Rate and its Astrophysical Implications," Nature 222, 649 (1969). 29. R. F. O'Connell, "Effect of the Earth's Quadrupole Moment on the Procession of a Gyroscope," Astrophys. and Space Science 4, 119 (1969). 30. J. J. Matese and R. F. O'Connell, "Neutron Beta Decay in a Uniform Constant Magnetic Field," Phys. Rev. 180, 1289 (1969). 31. R. F. O'Connell, "Precession of Schiff's Proposed Gyroscope in an Arbitrary Force Field," Nuovo Cimento Lett. 1, 933 (1969). 32. R. F. O'Connell and S. D. Verma, "Possible Origin of the Diffuse Component of Cosmic X-Rays," Phys. Rev. Lett. 22, 1443 (1969). 33. R. F. O'Connell and J. J. Matese, "Effect of a Constant Magnetic Field on the Beta Decay Rate of a Neutron Immersed in a Completely Degenerate Electron Gas," Phys. Lett. 29A, 533 (1969). 34. R. F. O'Connell, "Simple Derivation of Schwinger's Quantization Relation Between Electric and Magnetic Charges," Nuovo Cimento Lett. 2, 221 (1969). 35. R. F. O'Connell, "Magnetic Moment of a Magnetized Electron Gas and Magnetic Fields in White Dwarfs and Neutron Stars," Nuovo Cimento Lett. 3, 218 (1970). 36. R. F. O'Connell and S. D. Verma, "Have the Diffuse Cosmic X-Rays an Anisotropic Component?," Nature 224, 505 (1969), ibid. 225, 671 (1970). 37. R. F. O'Connell and S. D. Verma, "The Diffuse Component of Cosmic X-rays and the 8.3^o K Galactic Blackbody Radiation," Acta Physica Academiae Scientiarum Hungaricae 29, Suppl. 1, 255 (1970). 38. J. J. Matese and R. F. O'Connell, "Production of Helium in the Big-Bang Expansion of a Magnetic Universe," Astrophys. J. 160, 451 (1970). 39. R. F. O'Connell, "The Gravitational Field of the Electron," Phys. Lett. 32A, 402 (1970). 40. B. M. Barker and R. F. O'Connell, "Derivation of the Equations of Motion of a Gyroscope from the Quantum Theory of Gravitation," Phys. Rev. D2, 1428 (1970). 41. B. M. Barker and R. F. O'Connell, "Another Effect of the Earth's Quadrupole Moment on the Precession of a Gyroscope," Nuovo Cimento Lett. 4, 561 (1970). 42. B. M. Barker and R. F. O'Connell, "Effect of the Earth's Revolution Around the Sun on the Proposed Gyroscope Test of the Lense-Thirring Effect," Phys. Rev. Lett. 25, 1511 (1970). 43. R. F. O'Connell, "Gyroscope Test of Gravitation: An Analysis of the Important Perturbations," JPL Technical Memorandum 33-499 unit (1971), p. 82. 44. R. F. O'Connell and K. M. Roussel, "Origin of Magnetic Fields in White Dwarfs and Neutron Stars," Nature 231, 32 (1971). 45. R. F. O'Connell and G. L. Surmelian, "Effect of Gravitational Light Deflection on the Proposed Gyroscope Test of the Lense-Thirring Effect," Phys. Rev. D4, 286 (1971). 46. R. F. O'Connell and K. M. Roussel, "On the Origin of Magnetic Fields in White Dwarfs and Neutron Stars--II," Nuovo Cimento Lett. 2, 55 (1971) and 2, 815(E) (1971). 47. R. F. O'Connell and S. N. Rasband, "Lense-Thirring Type Gravitational Forces Between Disks and Cylinders," Nature 232, 193 (1971). 48. R. F. O'Connell, "Present Status of the Relativity-Gyroscope Experiment," Gen. Relativ. and Grav. 3, 123 (1972). 49. G. Chanmugam, R. F. O'Connell, and A. K. Rajagopal, "Polarized Radiation from Magnetic White Dwarfs--Exact Solution of Kemp's Model," Astrophys. J. 175, 157 (1972). 50. R. F. O'Connell and K. M. Roussel, "Magnetic Properties of a Degenerate Electron Gas and Implications for Metals, White Dwarfs, and Neutron Stars," Astron. & Astrophys. 18, 198 (1972). 51. G. Chanmugam, R. F. O'Connell, and A. K. Rajagopal, "Superfluidity in Neutron Stars," Phys. Letters 39A, 285 (1972). 52. B. M. Barker and R. F. O'Connell, "Relativity Gyroscope Experiment at Arbitrary Orbit Inclinations," Phys. Rev. D6, 956 (1972). 53. R. F. O'Connell, "Radial Motion of a Spinning Test Body in the Field of a Black Hole," Phys. Rev. D6, 3035 (1972). 54. A. K. Rajagopal, G. Chanmugam, R. F. O'Connell, and G. L. Surmelian, "Ionization Energies of Hydrogen in Magnetic White Dwarfs," Astrophys. J. 177, 713 (1972). 55. G. Chanmugam, R. F. O'Connell, and A. K. Rajagopal, "Polarized Radiation from Magnetic White Dwarfs II--Solution of Kemp's Model at all Temperatures," Astrophys. J. 177, 719 (1972). 56. E. R. Smith, R. J. W. Henry, G. L. Surmelian, R. F. O'Connell, and A. K. Rajagopal, "Energy Spectrum of the Hydrogen Atom in a Strong Magnetic Field," Phys. Rev. D6, 3700 (1972). 57. K. M. Roussel and R. F. O'Connell, "The Wavelength Dependence of Linear and Circular Polarized Radiation from the Magnetic White Dwarf Grw + 70° 8274, Ap. J. 182, 277 (1973). 58. R. F. O'Connell, "Spin, Rotation and C, P, and T Effects in the Gravitational Interaction and Related Experiments," in Experimental Gravitation: Proceedings of Course 56 of the International School of Physics "Enrico Fermi" (Academic Press, 1974), p.496. 59. E. R. Smith, R. J. W. Henry, G. L. Surmelian, and R. F. O'Connell, "Hydrogen Atom in a Strong Magnetic Field:Bound-Bound Transitions," Ap. J. 179, 659 (1973) and 182, 651(E) (1973). 59E. E. R. Smith, R. J. W. Henry, G. L. Surmelian, and R. F. O'Connell, "Hydrogen Atom in a Strong Magnetic Field:Bound-Bound Transitions," Ap. J. 179, 659 (1973) and 182, 651(E) (1973). 60. R. F. O'Connell, "Polarized Radiation from Magnetic White Dwarfs and Atoms in Strong Magnetic Fields," in Proceedings of the International Astronomical Union Symposium No. 53 "Physics of Dense Matter," (Reidel Publishers, 1974), p.287. 61. R. F. O'Connell, "Computation of Strong Magnetic Fields in White Dwarfs," in Proceedings of the International Astronomical Colloquium No. 23 "Planets, Stars and Nebulae Studied with Photopolarimetry, (University of Arizona Press, 1974), p.992. 62. G. L. Surmelian and R. F. O'Connell, "Energy Spectrum of He II in a Strong Magnetic Field and Bound-Bound Transition Probabilities," Astrophys. and Space Science 20, 85 (1973). 63. R. F. O'Connell, "Bremsstrahlung Model of Polarized Radiation from Magnetic White Dwarfs," Phys. Lett. 46A,249 (1973). 64. K. M. Roussel and R F. O'Connell, "Variational Solution of Schrodinger's Equation for the Static Screened Coulomb Potential," Phys. Rev. A9, 52 (1974). 65. R. J. W. Henry, R. F. O'Connell, E. R. Smith, G. Chanmugam, and A. K. Rajagopal, "Energy Spectrum of H^- in a Strong Magnetic Field," Phys. Rev. D9, 329 (1974). 66. R. F. O'Connell, "Highly-Excited States of Atoms in a Magnetic Field," Astrophys. J. 187, 275 (1974). 67. G. L. Surmelian and R. F. O'Connell, "Energy Spectrum of Hydrogen-Like Atoms in a Strong Magnetic Field," Astrophys. J. 190, 741 (1974); 204, 311(E) (1976). 67E. G. L. Surmelian and R. F. O'Connell, "Energy Spectrum of Hydrogen-Like Atoms in a Strong Magnetic Field," Astrophys. J. 190, 741 (1974); 204, 311(E) (1976). 68. K. M. Roussel and R. F. O'Connell, "Semi-Conductor Impurity and Exciton Levels in a Magnetic Field," J. Phys. Chem. Solids 35, 1429 (1974). 69. B. M. Barker and R. F. O'Connell, "Nongeodesic Motion in General Relativity," Gen. Relativ. Gravit. 5, 539 (1974). 70. G. L. Surmelian and R. F. O'Connell, "Quadratic Zeeman Effect in the Hydrogen Balmer Lines from Magnetic White Dwarfs," Astrophys. J. 193, 705 (1974). 71. B. M. Barker and R. F. O'Connell, "Effect of the Rotation of the Central Body on the Orbit of a Satellite," Phys. Rev. D10, 1340 (1974). 72. B. M. Barker and R. F. O'Connell, "Effect of the Gyro's Quadrupole Moment on the Relativity Gyroscope Experiment," Phys. Rev. Dll, 711 (1975). 73. G. L. Surmelian, R. J. W. Henry, and R. F. O'Connell, "Energy Spectrum of He I and H^- in a Strong Magnetic Field," Phys. Lett. 49A, 431 (1974). 74. R. F. O'Connell, "Atoms in Strong Magnetic Fields--A "New" Area of Laboratory Atomic Physics Research with Implication for Astrophysics and Solid-State Physics," Proc. of Fourth International Conference on Atomic Physics (1974), p.145. 75. R. F. O'Connell, "Can Quantum Gravitational Forces Stop Gravitational Collapse?" Gen. Relativ. Gravit. 6, 99 (1975). 76. R. F. O'Connell, "Internal Magnetic Fields of Pulsars, White Dwarfs, and Other Stars," Astrophys. J. 195, 751 (1975). 77. K. M. Roussel and R. F. O'Connell, "A Comparison Between Debye-Huckel Screening and the Stark Effect on the Determination of the Last Observable Spectral Line from a Plasma," Phys. Lett. 51A, 244 78. B. M. Barker and R. F. O'Connell, "The Gravitational Two Body Problem With Arbitrary Masses, Spins, and Quadrupole Moments," Phys. Rev. D 12, 329 (1975). 79. B. M. Barker and R. F. O'Connell, "Relativistic Effects in the Binary Pulsar PSR 1913+16," Astrophys. J. Lett. 199, L25 (1975). 80. R. F. O'Connell, "Ionization Energies of Hydrogen-Like Atoms in Intense Electromagnetic Fields," Phys. Rev. A 12, 1132 (1975). 81. G. W. Ford and R. F. O'Connell, "Atomic Ionization Potentials in a Plane Electromagnetic Wave," Phys. Rev. A 13, 1281 (1976). 82. A. R. Khan and R. F. O'Connell, "Gravitational Analogue of Magnetic Force," Nature 261, 480 (1976). 83. B. M. Barker and R. F. O'Connell, "General Relativistic Effects in Binary Systems," in Physics and Astrophysics of Neutron Stars and Black Holes: Proceedings of Course 65 of the International School of Physics "Enrico Fermi", (North-Holland, 1976), p. 437. 84. R. F. O'Connell, "A Simplified Form for the Hamiltonian and Lagrangian of the Spin-Independent Gravitational Two-Body System," Gen. Relativ. Gravit. 7, 805 (1976). 85. B. M. Barker and R. F. O'Connell, "Lagrangian-Hamiltonian Formalism for the Gravitational Two-Body Problem with Spin and PPN Parameters [] and []," Phys. Rev. D 14, 861 (1976). 86. R. F. O'Connell, "Contact Interactions in the Einstein and Einstein-Cartan-Sciama-Kibble (ECSK) Theories of Gravitation," Phys. Rev. Lett. 37, 1653 (1976) and 38, 298(E) (1977). 86E. R. F. O'Connell, "Contact Interactions in the Einstein and Einstein-Cartan-Sciama-Kibble (ECSK) Theories of Gravitation," Phys. Rev. Lett. 37, 1653 (1976) and 38, 298(E) (1977). 87. L. Chan and R. F. O'Connell, "Two-Body Problems--A Unified, Classical, and Simple Treatment of Spin-Orbit Effects," Phys. Rev. D15, 3058 (1977). 88. B. M. Barker and R. F. O'Connell, "Post-Newtonian Two-Body and n-Body Problems with Electric Charge in General Relativity," J. Math. Phys. 18, 1818 (1977), and 19, 1231(E) (1978). 88E. B. M. Ba rker and R. F. O'Connell, "Post-Newtonian Two-Body and n-Body Problems with Electric Charge in General Relativity," J. Math. Phys. 18, 1818 (1977), and 19, 1231(E) (1978). 89. R. F. O'Connell, "One-Dimensional Hydrogenic Atom in an Electric Field with Solid-State Applications," Phys. Lett. 60A, 481 (1977). 90. B. M. Barker and R. F. O'Connell, "Perihelion Precession for the Charged Two-Body Problem in General Relativity," Nuovo Cimento Lett. 19, 467 (1977). 91. B. M. Barker and R. F. O'Connell, "Conditions for Static Balance for the Post-Newtonian Two-Body Problem with Electric Charge in General Relativity," Phys. Lett. 61A, 297 (1977). 92. R. F. O'Connell, "Attractive Spin-Spin Contact Interactions in the ECSK Torsion Theory of Gravitation," Phys. Rev. D16, 1247 (1977). 93. R. F. O'Connell and E. P. Wigner, "On the Relation Between Momentum and Velocity for Elementary Systems," Phys. Lett. 61A, 353 (1977). 94. L. Chan and R. F. O'Connell, "Charmonium--the ^1P[1] State," Phys. Lett. 76B, 121 (1978). 95. R. F. O'Connell, "Rydberg States in Strong Electric and Magnetic Fields," Phys. Rev. A17, 1984 (1978). 96. R. F. O'Connell and E. P. Wigner, "Position Operators for Systems Exhibiting the Special Relativistic Relation Between Momentum and Velocity," Phys. Lett. 67A, 319 (1978). 97. B. M. Barker and R. F. O'Connell, "Center of Inertia for the Post-Newtonian n-Body Problem in Gravitation with PPN Parameters [] and []," Phys. Lett. 68A, 289 (1978). 98. B. M. Barker and R. F. O'Connell, "Center of Inertia and Coordinate Transformations in the Post-Newtonian Charged n-Body Problem in Gravitation," J. Math. Phys. 20, 1427 (1979). 99. R. F. O'Connell, "Effect of the Proton Mass on the Spectrum of the Hydrogen Atom in a Strong Magnetic Field," Phys. Lett. 70A, 389 (1979). 100. B. M. Barker and R. F. O'Connell, "The Gravitational Interaction: Spin, Rotation, and Quantum Effects--A Review," Gen. Rel. and Grav. 11, 149 (1979. 101. G. L. Wallace and R. F. O'Connell, "Energy Flow Vector of the Electromagnetic Field," Canad. Journ. Phys. 58, 744 (1980). 102. G. W. Ford and R. F. O'Connell, "Absorption of Radiation in a Magnetoplasma and Application to the Laser Fusion Process," Phys. Rev. A 22, 295 (1980). 103. B. M. Barker and R. F. O'Connell, "The Post-Post-Newtonian Problem in Classical Electromagnetic Theory," Ann. Phys. (NY) 129, 358 (1980). 104. B. M. Barker and R. F. O'Connell, "Acceleration-dependent Lagrangians and Equations of Motion," Phys. Lett. 78A, 231 (1980). 105. B. M. Barker and R. F. O'Connell, "Removal of Acceleration Terms from the Two-Body Lagrangian to order c^-4 in Electromagnetic Theory," Canad. Journ. Phys. 58, 1659 (1980). 106. B. M. Barker, G. G. Byrd, and R. F. O'Connell, "A Trinary Model for SS433," Astrophys. J. 243, 263 (1981). 107. R. F. O'Connell, "Intersubband-Cyclotron Combined Resonance in a Surface Space-Charge Layer," Physica 103B, 348 (1981). 108. R. F. O'Connell and G. L. Wallace, "Null Faraday Rotation - A Clean Method for Determination of Relaxation Times and Effective Masses in MIS and Other Systems," Solid State Commun., 38, 429 109. R. F. O'Connell and G. L. Wallace, "Intraband and Interband Null Faraday Rotation," Physica 113B, 51 (1982). 110. R. F. O'Connell and G. L. Wallace, "Null Ellipticity in Magneto-Optics," Solid State Commun. 39, 993 (1981). 111. R. F. O'Connell and E. P. Wigner, "Quantum-Mechanical Distribution Functions:Conditions for Uniqueness," Phys. Lett. 83A, 145 (1981). 112. R. F. O'Connell and G. L. Wallace, "Faraday rotation in the Appel-Overhauser model for inversion-layer electronics in Si," Phys. Rev. B 24, 2267 (1981). 113. R. F. O'Connell and E. P. Wigner, "Some Properties of a Non-Negative Quantum-Mechanical Distribution Function," Phys. Lett. 85A, 121 (1981). 114. B. M. Barker, G. M. O'Brien, and R. F. O'Connell, "Relativistic Quadrupole Moment," Phys. Rev. D 24, 2332 (1981). 115. R. F. O'Connell and G. L. Wallace, "Multiple Reflections in the Theory of the Faraday Effect," Phys. Lett. 86A, 283 (1981). 116. B. M. Barker, G. G. Byrd, and R. F. O'Connell, "Spin Nutation in Binary Systems Due to General Relativisic and Quadrupole Effects," Astrophys. J. 253, 309 (1982). 117. R. F. O'Connell and A. K. Rajagopal, "New Interpretation of the Scalar Product in Hilbert Space," Phys. Rev. Lett. 48, 525 (1982). 118. J. A. C. Gallas and R. F. O'Connell, "On the Spacing of the Quasi-Landau Resonances," J. Phys. B 15, L75 (1982). 119. R. F. O'Connell and G. Wallace, "Multiple Reflection Effects in the Theory of the Faraday Effect and Ellipticity for Propagation through Three Distinct Media," Canad. Journ. Phys. 61, 49 (1983). 120. J. A. C. Gallas and R. F. O'Connell, "Effect of the Magnetic Quantum Number on the Spacing of Quasi-Landau Resonances," J. Phys. B 15, L309 (1982). 121. A. Khandker, R. F. O'Connell, and G. W. Ford, "Absorption of Radiation Propagating Obliquely in a Magnetoplasma," Astrophys. J. 269, 668 (1983). 122. R. F. O'Connell and G. Wallace, "Faraday Rotation in the Appel-Overhauser Model for Inversion-layer Electrons in Si II," Phys. Rev. B 25, 5527 (1982). 122E. R. F. O'Connell and G. Wallace, "Faraday Rotation in the Appel-Overhauser Model for Inversion-layer Electrons in Si II," Phys. Rev. B 25, 5527 (1982). 123. R. F. O'Connell and G. Wallace, "Transmission of Electromagnetic Radiation through an Electron Inversion Layer of Finite Thickness in a Metal-oxide-semiconductor (MOS) Structure," Physica 115B, 1 (1982). 124. R. F. O'Connell and G. Wallace, "Ellipticity and Faraday Rotation due to a Two-Dimensional Electron Gas in a Metal-Oxide-Semiconductor (MOS) System," Phys. Rev. B 26, 2231 (1982). 125. J. A. C. Gallas and R. F. O'Connell, "Two-Dimensional Quantization of the Quasi-Landau Hydrogenic Spectrum," J. Phys. B 15, L593 (1982). 126. J. A. C. Gallas and R. F. O'Connell, "Quasi-Landau Resonances: Analytic Treatment of the Hydrogenic Spectrum in the Two-dimensional Model and Relation to Other Strong-Field Problems," J. de Physique, Colloque C2, supplement 11, C2-435 (1982). 127. R. F. O'Connell, "Two Dimensional Systems in Solid State and Surface Physics: Strong Electric and Magnetic Field Effects," J. de Physique, Colloque C2, supplement 11, C2-81 (1982). 128. R. F. O'Connell, "The Wigner Distribution Function -- 50th Birthday," Found. Phys. 13, 83 (1983), and in Quantum Space and Time - The Quest Continues, (Cambridge University Press, 1985). 129. R. F. O'Connell, "Faraday Effects," The Encyclopedia of Physics, 3rd ed., edited by R. M. Besancon (Van Nostrand Reinhold 1985), pps. 418-421. 130. J. A. C. Gallas and R. F. O'Connell, "On the Spectrum of V(r) = [] Physical Implications for a Variety of Problems," in Photophysics and Photochemistry in the Vacuum Ultraviolet, (D. Reidel, Dordrecht, Holland, 1985) pps. 721-728. 131. R. F. O'Connell and G. Wallace, "Comparison of the Faraday Rotation for the Two and Three-Dimensional Models of the Inversion Layer in a Metal-Oxide-Semiconductor System," Phys. Rev. B 27, 5901 132. R. F. O'Connell and G. Wallace, "On the Optimum Method of Analysis of Faraday Rotation and Ellipticity Measurements in a Metal-Oxide-Semi-conductor System," J. Phys. Chem. Solids 44, 951 (1983). 133. J. A. C. Gallas, E. Gerck, and R. F. O'Connell, "Scaling Laws for Rydberg Atoms in Magnetic Fields," Phys. Rev. Lett. 50, 324 (1983). 134. R. F. O'Connell, "Distribution Functions in Quantum Optics," in Laser Physics, (Springer-Verlag lecture notes in Physics no. 182, 1983) pps. 238-248. 135. R. F. O'Connell and G. Wallace, "Effect of a Finite Semiconductor Substrate on the Faraday Rotation and Ellipticity in a Metal-Oxide-Semiconductor System", Physica B 121, 41 (1983). 136. R. F. O'Connell and G. Wallace, "Memory-function approach to ellipticity and Faraday rotation in a metal-oxide-semiconductor system," Phys. Rev. B 28, 4643 (1983). 137. M. Hillery, R. F. O'Connell, M. O. Scully, and E. P. Wigner, "Distribution Functions in Physics:Fundamentals", Physics Reports 106 (3), 121 (1984). 138. B. M. Barker and R. F. O'Connell, "Time transformations in post-Newtonian Lagrangians", Phys. Rev. D 29, 2721 (1984). 139. R. J. W. Henry and R. F. O'Connell, "On the Magnetic Field in the White Dwarf Grw + 70° 8247", Ap. J. (Lett.) 282, L97 (1984). 140. R. F. O'Connell, L. Wang and H. A. Williams, "Time Dependence of a General Class of Quantum Distribution Functions", Phys. Rev. A 30, 2187 (1984). 141. R. F. O'Connell and E. P. Wigner, "Manifestations of Bose and Fermi Statistics on the Quantum Distribution Function for Systems of Spin Zero and Spin One-Half Particles", Phys. Rev. A 30, 2613 142. R. F. O'Connell and D. F. Walls, "Operational Approach to Phase-Space Measurements in Quantum Mechanics", Nature 312, 257 (1984). 143. R. F. O'Connell and B. M. Barker, "The Gyroscope Test of Relativity", Nature 312, 314 (1984). 144. R. F. O'Connell and L. Wang, "A New Parametrized Quantum Distribution Function and its Time Development", Phys. Lett. A 107, 9 (1985). 145. J. L. Greenstein, R. J. W. Henry and R. F. O'Connell, "Further Identification of Hydrogen in GRW + 70° 8247", Ap. J. (Lett.) 289, L25 (1985). 146. R. J. W. Henry and R. F. O'Connell, "Hydrogen Spectrum in Magnetic White Dwarfs: Ha, Hb, and Hg transitions", Publ. Astron. Soc. Pac. 97, 333 (1985). 147. R. F. O'Connell, "Quantum Distribution Functions in Non-Equilibrium Statistical Mechanics", in Frontiers of Nonequilibrium Statistical Physics (Plenum Publishing Corporation, 1986). 148. R. F. O'Connell and L. Wang, "Phase Space Representations of the Bloch Equation", Phys. Rev. A 31, 1707 (1985). 149. R. F. O'Connell, "The Gyroscope Experiment", Physics To-Day 38, 104 (Feb. 1985). 150. A. Khandker and R. F. O'Connell, "Theoretical Determination of the Admittance of the Metal Gate in a Metal-Oxide-Semiconductor System and Effect of the Gate on Faraday Rotation and Ellipticity," Physica 132B, 145 (1985). 151. A. Khandker, R. F. O'Connell, and G. Wallace, "Effect of a Finite Oxide Layer on the Faraday Rotation and Ellipticity in a Metal-Oxide-Semiconductor System", Phys. Rev. B 31, 5208 (1985). 152. R. Dickman and R. F. O'Connell, "Wigner Distribution and Green's Function Approach to Quantum Corrections and Implications for the Melting Temperature of Two Dimensional Wigner Crystals", Phys. Rev. B 32, 471 (1985). 153. R. Dickman and R. F. O'Connell, "Complement to the Wigner-Kirkwood Expansion", Phys. Rev. Lett. 55, 1703 (1985). 154. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "Quantum Oscillator in a Blackbody Radiation Field", Phys. Rev. Lett. 55, 2273 (1985). 155. B. M. Barker and R. F. O'Connell, "Relativistic Kepler's Third Law", Astrophys. J., 305, 623 (1986). 156. R. F. O'Connell and B. M. Barker, "Gravitational two-body problem with acceleration-dependent spin terms", Gen. Relativ. Gravit., 18, 1055 (1986). 157. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "Stark Shifts Due to Blackbody Radiation", J. Phys. B 19, (2), L41 (1986). 158. R. F. O'Connell, "Dissipation and Memory Effects in the Interaction of a Blackbody Radiation Heat Bath with Matter", in Coherence, Cooperation and Fluctuations, (Cambridge University Press, 1986), p. 264. 159. R. Dickman and R. F. O'Connell, "A Perturbation Expansion for Correlation Functions via the Wigner Distribution", Superlattices and Microstructures, 2, 57 (1986). 160. F. Narcowich and R. F. O'Connell, "Necessary and Sufficient Conditions for a Phase-Space Function to be a Wigner distribution", Phys. Rev. A 34, 1 (1986). 161. R. F. O'Connell, C. M. Savage and D. F. Walls, "Decay of Quantum Coherence due to the Presence of a Heat Bath:Markovian Master-Equation Approach", Ann. N. Y. Acad. Sci. 480, 267 (1986). 162. L. Wang and R. F. O'Connell, "Surface Effects on the Diamagnetic Susceptibility and Other Properties of a Low-Temperature Electron Gas," Phys. Rev. B 34, 5160 (1986). 163. R. Dickman and R. F. O'Connell, "Phonon Frequency Shifts in an Anharmonic Lattice via the Wigner Distribution Function", Phys. Rev. B 34, 5678 (1986). 164. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "Thermodynamic Perturbation Theory for a System of Atoms Coupled to the Radiation Field", Phys. Rev. A 34, 2001 (1986). 165. L. Wang and R. F. O'Connell, "Free Energy for Harmonically Bound Fermions in a Magnetic Field", J. Phys. A 20, 937 (1987). 166. B. M. Barker and R. F. O'Connell, "On the Completion of the Post-Newtonian Gravitational Two-Body Problem with Spin", J. Math. Phys. 28, 661 (1987). 167. R. F. O'Connell, "Wigner Distribution Function Approach to the Calculation of Quantum Effects in Condensed Matter Physics", in Proceedings of the First International Conference on the Physics of Phase Space (Springer-Verlag, New York, 1987), p.171. 168. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "On the Thermodynamics of Quantum - Electrodynamic Frequency Shifts," J. Phys. B 20, 899 (1987). 169. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "Memory Effects in Transport Theory:an Exact Model", Phys. Rev. A 36, 1466 (1987). 170. G. W. Ford and R. F. O'Connell, "Energy Shifts for a Multilevel Atom in an Intense Squeezed Radiation Field," J. Opt. Soc. Am. B 4, 1710 (1987). 171. G. Y. Hu and R. F. O'Connell, "Quantum Transport for a Many Body System Using a Quantum Langevin Equation Approach," Phys. Rev. B 36, 5798 (1987). 172. L. Wang and R. F. O'Connell, "A Precaution Needed in Using the Phase-Space Formulation of Quantum Mechanics," Physica A 144, 201 (1987). 173. L. Wang and R. F. O'Connell, "Magnetic Susceptibility of a Two-Dimensional Electron Gas in the Strong Magnetic Field Limit and for Non-Zero Temperatures," Physica Status Solidi (b) 144, 781 174. L. Wang and R. F. O'Connell, "Landau-level width:Magnetic-field and temperature dependences," Phys. Rev. B 37, 3052 (1988). 175. L. Wang and R. F. O'Connell, "Quantum Mechanics without Wave Functions," Invited Paper honoring Professor David Bohm on the occasion of his 70th birthday, special issue of Foundations of Physics, 18, 1023 (1988). 176. L. Wang and R. F. O'Connell, "On the Thermodynamics of a Degenerate Two-Dimensional Electron Gas in a Strong Magnetic Field: Density of States," Zeits. f. Physik B 73, 179 (1988). 177. F. J. Narcowich and R. F. O'Connell, "A Unified Approach to Quantum Dynamical Maps and Gaussian Wigner Distributions," Phys. Lett. A, 133, 167 (1988). 178. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "Comment on the Exact Calculation of the Partition Function for a Quantum Oscillator Interacting with the Radiation Field," Phys. Rev. A 37, 3609 179. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "The Quantum Langevin Equation," Phys. Rev. A 37, 4419 (1988). 180. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "Dissipative Quantum Tunneling:Quantum Langevin Equation Approach," Phys. Lett. A 128, 29 (1988). 181. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "Quantum Oscillator in a Blackbody Radiation Field II. Direct Calculation of the Energy using the Fluctuation-Dissipation Theorem," Ann. Phys. (NY), 185, 270 (1988). 182. G. W. Ford, J. T. Lewis, and R. F. O'Connell, "Independent Oscillator Model of a Heat Bath: Exact Diagonalization of the Hamiltonian," J. Stat. Phys. 53, 439 (1988). 183. G. Y. Hu and R. F. O'Connell, "A Theory of High Electric Field Transport," Physica A 149, 1 (1988). 184. G. Y. Hu and R. F. O'Connell, "Phonon Effects on the Cyclotron Resonance for a Many Body System; A Generalized Quantum Langevin Equation Approach," Physica A 151, 33 (1988). 185. G. Y. Hu and R. F. O'Connell, "Fluctuation Effects on the Cyclotron Resonance Spectrum for a Two-dimensional Electron Gas," Phys. Rev. B 37, 10391 (1988). 186. G. Y. Hu and R. F. O'Connell, "Quantum Theory of Transient Transport in a High Electric Field," Phys. Rev. B 38, 1721 (1988). 187. G. Y. Hu and R. F. O'Connell, "Polarizability of a Two-dimensional Electron Gas Including Fluctuation Effects," J. Phys. C 21, 4325 (1988). 188. G. Y. Hu and R. F. O'Connell, "Strong Electric Field Effect on Weak Localization," Physica A 153, 114 (1988). 189. G. Y. Hu and R. F. O'Connell, "The Memory Function for Cyclotron Resonance in the Two Dimensional Electron Gas," Solid State Comm., 68, 33 (1988). 190. G. Y. Hu and R. F. O'Connell, "Cyclotron Resonance in GaAs/AlGaAs Superlattices," Superlattices and Microstructures 5, 515 (1989). 191. G. Y. Hu and R. F. O'Connell, "Peak Splitting of the Cyclotron Resonance Spectrum in Two Dimensional Electron-Phonon-Impurity Systems" in Proc. of the 19th International Conference on the Physics of Semiconductors, Vol. I, p. 519. 192. G. Y. Hu and R. F. O'Connell, "Cyclotron Resonance in GaAs/AlGaAs Heterojunctions," in Proc. of the International Conference on the Application of "High Magnetic Fields in Semiconductor Physics II (Springer-Verlag, 1988). 193. B. M. Barker and R. F. O'Connell, "On Tolman's Mass-Energy Relation and a New Tolman-type Relation," Int. J. Mod. Phys. A 4, 327 (1989). 194. G. W. Ford and R. F. O'Connell, "Canonical Commutator and Mass Renormalization," J. Stat. Phys. 57, 803 (1989). 195. L. Wang and R. F. O'Connell, "Ideal Two-Dimensional Electron Gas in a Magnetic Field and at Non-Zero Temperatures: An Alternative Approach," Physics Status Sol. (b) 153, 343 (1989). 196. G. Y. Hu and R. F. O'Connell, "Generalized Quantum Langevin Equations for High Electric Field Transport," Phys. Rev. B 39, 12717 (1989). 197. G. Y. Hu and R. F. O'Connell, "Generalization of the Lindhard Dielectric Function to Include Fluctuation Effects," Phys. Rev. B 40, 3600 (1989). 198. G.Y. Hu and R.F. O'Connell, "1/f Noise in Two-Dimensional Mesoscopic Systems From a Generalized Quantum Langevin Equation Approach," in Proc. of the International Symposium on Nanostructure Physics and Fabrication (Academic Press, San Diego, 1989). 199. G. Y. Hu and R. F. O'Connell, "Cyclotron Resonance in Two Dimensional Electron-Phonon-Impurity Systems and Applications to SI-MOS Systems," Phys. Rev. B 40, 11701 (1989). 200. G. Y. Hu and R. F. O'Connell, "Electric Field Effect on Weak Localization in a Semiconductor Quantum Wire," Solid State Electronics 32, 1253 (1989). 201. G.Y. Hu and R.F. O'Connell, "Two-Dimensional Brownian Motion and Fluctuating Hydrodynamics", Physica A 163, 804 (1990). 202. G. Y. Hu and R. F. O'Connell, "1/f Noise: A Non-linear Generalized Langevin Equation Approach," Phys. Rev. B 41, 5586 (1990). 203. G. Y. Hu and R. F. O'Connell, "Weak Localization Theory for Lightly Doped Semiconductor Quantum Wires," J. Phys.: Condens. Matter 2, 5335 (1990). 204. G. Y. Hu and R. F. O'Connell, "Electron-Electron Interactions in Quasi-One-Dimensional Electron Systems," Phys. Rev. B 42, 1290 (1990). 205. G. Y. Hu and R. F. O'Connell, "Dielectric Response of a Quasi-One-Dimensional Electron System," J. Phys.: Condens. Matter 2, 1 (1990). 206. X. L. Li, G. W. Ford and R. F. O'Connell, "Magnetic Field Effects on the Motion of a Charged Particle in a Heat Bath," Phys. Rev. A 41, 5287 (1990). 207. X. L.Li, G. W. Ford and R. F. O'Connell, 'Charged oscillator in a heat bath in the presence of a magnetic field," Phys. Rev. A 42, 4519 (1990). 208. G. Y. Hu and R. F. O'Connell, "Intersubband Plasmons in Quasi-One-Dimensional Systems," in Nanostructures: Fabrication and Physics, (Materials Research Society, 1990). 209. R. F. O'Connell and G. Y. Hu, "The Few-Body Problem in Nanoelectronics," in Physics of Granular Nanoelectronics (Plenum Press, 1991). 210. S. Jordan, D. Koester and R. F. O'Connell, "Magneto-optical Effects from Free Electrons in Magnetic White Dwarfs," Astron. and Astrophys. 242, 206 (1991). 211. G. Y. Hu and R. F. O'Connell, "Subband Effects on Electron Transport in Quasi-One-Dimensional Electron Systems," Phys. Rev. B 43, 12341 (1991). 212. G. Y. Hu and R. F. O'Connell, "Phonon-Limited Low Temperature Mobility in a Quasi-One-Dimensional Semiconductor Quantum Wire," J. Phys. Condensed Matter 3, 4633 (1991). 213. G. Y. Hu and R. F. O'Connell, "Intersubband Plasmons in Semiconductor Quantum Wires," Phys. Rev. B 44, 3140 (1991). 214. G. W. Ford and R. F. O'Connell, "Radiation Reaction in Electrodynamics and the Elimination of Runaway Solutions," Phys. Lett. A 157, 217 (1991). 215. G. W. Ford and R. F. O'Connell, "Total Power Radiated by an Accelerated Charge," Phys. Lett. A 158, 31 (1991). 216. G. W. Ford and R. F. O'Connell, "Structure Effects on the Radiation Emitted from an Electron," Phys. Rev. A 44, 6386 (1991). 217. G. W. Ford, J. T. Lewis and R. F. O'Connell, "Quantum Tunneling in a Blackbody Radiation Field," Phys. Lett. A 158, 367 (1991). 218. G. Y. Hu and R. F. O'Connell, "Inhomogeneous Boundary Effects in Semiconductor Quantum Wires," J. Phys. Condensed Matter 4, 9623(1992). 219. G. Y. Hu and R. F. O'Connell, "Low Voltage Resistance in Small Josephson Junctions.", J. Phys. Condensed Matter 4, 9635(1992). 220. G. Y. Hu and R. F. O'Connell, "Charge Fluctuations and Zero-Bias Resistance in Small Capacitance Tunnel Junctions", Phys. Rev. B 46, 14219(1992). 221. R. F. O'Connell, "Dissipation in a Squeezed-State Environment", in NASA Conference Publication 3219(1993) p.183. 222. X. L. Li, G. W. Ford and R. F. O'Connell, "Dissipative Effects on the Mean - Square - Displacement of an Oscillator", Physica A 193, 575(1993). 223. G. W. Ford and R. F. O'Connell, "Relativistic Form of Radiation Reaction", Phys. Lett. A 174, 182(1993). 224. R. F. O'Connell, "Does the Electron have a Structure?,” Foundations of Physics 23, 461(1993). 225. X. L. Li, G. W. Ford, and R. F. O'Connell, "Energy Balance for a Dissipative System", Phys. Rev. E 48, 1547(1993). 226. X. L. Li, G. W. Ford, and R. F. O"Connell, "Correlation in the Langevin Theory of Brownian Motion", Am. J. Phys. 61, 924(1993). 227. G. Y. Hu and R. F. O'Connell, "Bloch Oscillations in Small Capacitance Josephson Junctions", Phys. Rev. B 47, 8823(1993). 228. G. W. Hu and R. F. O'Connell, "Coulomb Blockade in Multi-Gated Small Junction Systems", J. Phys. Condensed Matter 5, 7259(1993). 229. J. Y. Ryu and R. F. O'Connell, "Magnetophonon resonances in quasi-one-dimensional quantum wires", Phys. Rev. B 48, 9126(1993). 230. G. Y. Hu, R. F. O'Connell and J. Y. Ryu, "Quantum Fluctuation Effects in a Single Electron Box", Physica B 194-196, 1021(1994). 231. G. Y. Hu, R. F. O'Connell and J. Cai, "Nonperturbative Calculation of Coulomb Blockade in a Small Tunnel Junction", Physica B 194-196, 1023(1994). 232. J. Y. Ryu, G. Y. Hu and R. F. O'Connell, "Magnetophonon Resonances of Quantum Wires in Tilted Magnetic Fields", Phys. Rev. B 49, 10437(1994). 233. G. Y. Hu and R. F. O'Connell, "On the relationship between the quantum Langevin model and the Landauer formula", Phys. Lett. A 188, 384(1994). 234. G. Y. Hu and R. F. O'Connell, "Langevin Equation Analysis of a Small Capacitance Double Junction", Phys. Rev. B 49, 16505(1994). 235. G. Y. Hu and R. F. O'Connell, "Exact Solution for the Charge Soliton in a One-Dimensional Array of Small Tunnel Junctions", Phys. Rev. B 49, 16773(1994). 236. G. Y. Hu and R. F. O'Connell, "Exact Solution of the Electrostatic Problem for a Single Electron Multi-junction Trap," Phys. Rev. Lett. 74, 1839 (1995) and 76, 4097(E) (1996). 236E. G. Y. Hu and R. F. O'Connell, "Exact Solution of the Electrostatic Problem for a Single Electron Multi-junction Trap," Phys. Rev. Lett. 74, 1839 (1995) and 76, 4097(E) (1996). 237. G. W. Ford and R. F. O'Connell, "Alternative equations of Motion for the Radiating Electron" In Festschrift for H. Walther, Appl. Phys. B 60, 301 (1995). 238. X. L. Li, G. W. Ford and R. F. O'Connell, "Reply to Comment on Energy Balance for a Dissipative System", Phys. Rev. E 51, 5169 (1995). 239. G. Y. Hu, R. F. O'Connell, Y. L. He and M. B. Yu, "Electronic Conductivity of Hydrogenated Nanocrystalline Silicon Films", J. Appl. Phys. 78 (6), 3945 (1995). 240. G. Y. Hu and R. F. O'Connell, "Environmental Effects on a Single Electron Box", Physica A 219, 88 (1995). 241. G. Y. Hu and R. F. O'Connell, "Analytical Inversion of Symmetric Tridiagonal Matrices" J. Phys. A, 29, 1, (1996). 242. X.L. Li and R.F. O'Connell, "Green's Function and Position Correlation Function for a Charged Oscillator in a Heat Bath and a Magnetic Field," Physica A 224, 639 (1996). 243. X.L. Li, G.W. Ford and R.F. O'Connell, "Dissipative Effects on the Localization of a Charged Oscillator in a Magnetic Field," Phys. Rev. E 53, 3359 (1996). 244. G. W. Ford and R. F. O'Connell, "Derivative of the Hyperbolic Cotangent", Nature 380, 113 (1996). 245. G. W. Ford and R. F. O'Connell, "Inconsistency of the Rotating Wave Approximation with the Ehrenfest Theorem., Phys. Lett. A 215, 245 (1996). 246. M. Freyberger, K. Vogel, W. Schleich, and R. F. O'Connell, "Quantized Field Effects", in Atomic, Molecular and Optical Physics Handbook (American Institute of Physics, New York, 1996). 247. R. F. O'Connell, "Dissipative and Fluctuation Phenomena in Quantum Mechanics with Applications", Int. J. Quantum Chem. 58, 569 (1996). 248. G. Y. Hu and R. F. O'Connell, "Hysteretic Voltage Gap of a Multijunction Trap", Phys. Rev. B 54, 1518 (1996). 249. G. Y. Hu and R. F. O'Connell, "Exact Solution for Charge Solitons in Two Coupled One-Dimensional Arrays of Small Tunnel Junctions," Phys. Rev. B 54, 1522 (1996). 250. G. Y. Hu, R. F. O'Connell, "Co-Tunneling in Single Electron Devices: Effects of Stray Capacitances", Phys. Rev. B 54, 14560 (1996). 251. Y. B. Kang, G. Y. Hu, R. F. O'Connell and J. Y. Ryu, "Effect of Stray Capacitances on Single Electron Tunneling in a Turnstile", J. Appl. Phys. 80, 1526 (1996). 252. G. Y. Hu, R. F. O'Connell, Y. B. Kang and J. Y. Ryu, "Transferring Electrons One by One in Single Electron Devices with Long Arrays of Tunnel Junctions", Inter. J. Mod. Phys. B 10, 2441 (1996). 253. G. W. Ford and R. F. O'Connell, "There is No Quantum Regression Theorem", Phys. Rev. Lett. 77, 798(1996). 254. G. W. Ford and R. F. O'Connell, "The Blackbody Reservoir and the Planck Spectrum", Phys. Lett. A 224, 22(1996). 255. G. W. Ford, J. T. Lewis, R. F. O'Connell, "Master Equation for an Oscillator Coupled to the Electromagnetic Field", Ann. Phys. (NY) 252, 362(1996). 256. G.Y. Hu and R. F. O'Connell, "Environmental Effects on Coulomb Blockade in a Small Tunnel Junction: a Non-Perturbative Calculation", Phys. Rev.B 56, 4737 (1997). 257. G. W. Ford and R. F. O'Connell, "The Rotating Wave Approximation (RWA) of Quantum Optics: Serious Defect", Physica A 243, 377 (1997). 258. G. W. Ford and R. F. O'Connell, "The Radiating Electron: Fluctuations without Dissipation in the Equation of Motion", Phys. Rev. A 57, 3112 (1998). 259. G.W. Ford and R. F. O'Connell, "Frequency Shifts and Master Equations for a Quantum Oscillator Coupled to a Reservoir", Ann. Phys. (NY), 269, 51(1998). 260. G.Y. Hu, R. F. O'Connell and J. Y Ryu, "Analytical Solution of the Generalized Discrete Poisson Equation," J. Phys. A 31, 9279(1998). 261. G.Y. Hu and R. F. O'Connell, "Slanted Coupling of One-Dimensional Arrays of Small Tunnel Junctions," J. Appl. Phys. 84, 6713(1998). 262. G.W. Ford and R. F. O'Connell, "Comment on "Dissipative Quantum Dynamics with a Lindblad Functional"", Phys. Rev. Lett. 82, 3376(1999). 263. G.W. Ford and R. F. O'Connell, "Calculation of Correlation Functions in the Weak Coupling Approximation", Ann. Phys. (NY) 276, 144(1999). 264. G.W. Ford and R. F. O'Connell, "Exact Result for the Force Auto-Correlation in the Rotating Wave Approximation", Phys. Rev. A 61, 022110(2000). 265. G. W. Ford and R. F. O'Connell, "Driven Systems and the Lax formula", Optics Comm. 179, 451 (2000) and in "Ode to a Quantum Physicist" (Elsevier, Amsterdam, 2000). 266. G. W. Ford and R. F. O'Connell, "Comment on "The Lax-Onsager Regression "Theorem" Revisited"", Optics Comm. 179, 477 (2000) and in "Ode to a Quantum Physicist". (Elsevier, Amsterdam, 2000). 267. G. W. Ford and R. F. O'Connell, "Quantum Noise Effects in Strongly Driven Systems", Laser Physics 11, 54 (2001). 268. R. F. O'Connell, "Noise in Gravitational Wave Detector Suspension Systems: A Universal Model", Phys. Rev. D 64, 022003 (2001). 269. R. F. O'Connell, "Charge Effects on Gravitational Wave Detectors", Phys. Lett. A 282, 257 (2001). 270. R. F. O'Connell, Comment on "Completely Positive Quantum Dissipation", Phys. Rev. Lett. 87, 028901 (2001). 271. G. W. Ford and R. F. O'Connell, "Decoherence without Dissipation", Phys. Lett. A 286, 87 (2001). 272. G. W. Ford, J. T. Lewis and R. F. O'Connell, "Quantum Measurement and Decoherence", Phys. Rev. A 64, 032101 (2001). 273. G. W. Ford, and R. F. O'Connell, "Exact solution of the Hu-Paz-Zhang master equation", Phys. Rev. D 64, 105020 (2001). 274. G. W. Ford and R. F. O'Connell, "Wave Packet Spreading: Temperature and Squeezing Effects with Applications to Quantum Measurement and Decoherence", Am. J. Phys.,Theme Issue in Quantum Mechanics, 70, 319 (2002) and in Virtual Journal of Nanoscale Science & Technology 5, Issue 8 (2002) and Virtual Journal of Quantum Information 2, Issue 3 (2002). 275. I. Bialynicki-Birula, M. A. Cirone, J. P. Dahl, R. F. O'Connell, and W. P. Schleich, "Attractive and repulsive quantum forces from dimensionality of space," J. Optics B: Quantum and Semiclassical Optics 4, S393 (2002). 276. G. W. Ford and R. F. O'Connell, "Note on the derivative of the hyperbolic cotangent", J. Phys. A 35, 4183 (2002). 277. A. Ludu and R. F. O'Connell, "Laplace Transform of Spherical Bessel Functions," Physica Scripta 65, 369 (2002). 278. A. Ludu, R. F. O'Connell and J. P. Draayer, "Nonlinear Equations and wavelets" Mathematics and Computers in Simulation 62, 91 (2003). 279. M. Murakami, G. W. Ford, and R. F. O'Connell, "Decoherence in Phase Space", in Laser Physics 13, 180 (2003). 280. R. F. O'Connell, "Wigner Distribution Function Approach to Dissipative Problems in Quantum Mechanics with emphasis on Decoherence and Measurement Theory", J. Optics B 5, S349 (2003) and listed in the special collection of the "Most Frequently Downloaded Articles in 2003" from J. Optics B. 281. R. F. O'Connell and Jian Zuo, "Effect of an External Field on Decoherence", Phys. Rev. A 67, 062107 (2003) and in Virtual Journal of Quantum Information 3, Issue 7 (2003). 282. G. W. Ford and R. F. O'Connell, "Decoherence at zero temperature", J. Optics B, Special Issue on Quantum Computing, 5, S609 (2003). 283. R. F. O'Connell, "Decoherence in Nanostructures and Quantum Systems," Physica E, 19, 77(2003). 284. R. F. O'Connell, The Equation of Motion of an Electron," Phys. Lett. A 313, 491 (2003). 285. G. W. Ford and R. F. O'Connell, "Wigner Distribution Analysis of a Schrödinger Cat Superposition of Displaced Equilibrium Coherent States", Acta Physica Hungarica Quantum Electronics B 20, 91 286. Jian Zuo and R. F. O'Connell, "Effect of an External Field on Decoherence - II", J. Mod. Opt. 51, 821 (2004). 287. G. W. Ford and R. F. O'Connell, "Reply to Comment on "Quantum Measurement and Decoherence," Phys. Rev. A 70, 026102 (2004). 288. T. C. Dorlas and R. F. O'Connell, "Quantum Zeno and anti-Zeno Effects: An Exact Model", in Proceedings of SPIE (The International Society for Optical Engineering), Vol. 5436, 194 (2004). 289. R. F. O'Connell, "Proposed New Test of Spin Effects in General Relativity," Phys. Rev. Lett. 93, 081103 (2004). 290. R. F. O'Connell, "Decoherence in Quantum Systems," IEEE Transactions On Nanotechnology 4, 77 (2005). 291. G. W. Ford and R. F. O’Connell, “Entropy of a Quantum Oscillator coupled to a Heat Bath and implications for Quantum Thermodynamics”, Physica E 29, 82(2005). 292. R. F. O’Connell, “Fluctuations and Noise: A General Model with Applications”, SPIE International Symposium on Fluctuations and Noise in Photonics and Quantum Optics III (Austin, May 2005), in Proceedings of SPIE, Vol. 5842, 206 (2005). 293. G. W. Ford and R. F. O'Connell, "Limitations on the Utility of Exact Master Equations," Ann. Phys. (N.Y.), 319, 348 (2005) 294. R. F. O’Connell, “A Note on Frame Dragging”, Class. Quant. Grav., 22, 3815 (2005). 295. M. Freyberger, K. Vogel, W. Schleich, and R. F. O’Connell, “Quantized Fields Effects”, in Atomic, Molecular and Optical Physics Handbook 2^nd ed., by G.W. Drake (American Institute of Physics, New York, 2005). 296. J. A. Heras and R. F. O’Connell, “Generalization of the Schott Energy in Electrodynamic Radiation Theory”, Am. J. Phys., 74 , 150 (2006). 297. G. W. Ford and R. F. O’Connell, “Is there Unruh Radiation?”, Phys. Lett. A, 350 , 17 (2006). 298. G. W. Ford and R. F. O’Connell, “A Quantum Violation of the Second Law?”, Phys. Rev.Lett., 96 , 020402 (2006). 299. G. W. Ford and R. F. O’Connell, “Anomalous Diffusion in Quantum Brownian motion with colored noise”, Phys.Rev. A, 73 , 032103 (2006). 300. R. F. O’Connell, “Do the laws of Thermodynamics hold in the Quantum Regime”, J.Stat.Phys., 124, 15(2006). 301. G. W. Ford and R. F. O’Connell, “Free electron motion in an electromagnetic field at Zero temperature and the dependence on its rest mass,” Laser Physics 17, (4), 302(2007). 302. R. F. O’Connell, “The Expansion of the universe and the cosmological constant problem,” Phys. Lett. A 366, 177(2007). 303. G. W. Ford and R. F. O’Connell, “Quantum thermodynamic functions for an oscillator coupled to a heat bath,” Phys.Rev. B, 75 , 134301 (2007). 304. G. W. Ford and R. F. O’Connell, “Measured quantum probability distribution functions for Brownian motion," Phys. Rev. A, 76, 042122 (2007). 305. R. F. O'Connell, "Blackbody Radiation: Rosetta Stone of Heat Bath Models", Fluct. Noise Lett. 7, L 483 (2007). 306. R. F. O'Connell, Book Review of "Quantum Mechanics in Phase Space", edited by C. K. Zachos et al., Inter. J. of Quantum Information 6, 415-418 (2008). 307. R. F. O'Connell, "Stochastic methods in atomic systems and QED," Can. J. Phys. 87, 1-5 (2009). 308. R. F. O'Connell, "The Wigner Distribution," Compendium of Quantum Mechanics, edited by D. Greenberger, B. Falkenburg, K. Hentschel and F. Weinert, (Springer -Verlag 2009). 309. R. F. O'Connell, "Gravito-Magnetism in one-body and two-body systems: Theory and Experiment", in, "Atom Optics and Space Physics", Proc. of Course CLXVIII of the International School of Physics "Enrico Fermi", Varenna, Italy, 2007, ed. E. Arimondo, W. Ertmer and W. Schleich.(Societa Italiana di Fisica, 2009) 310. C. Feiler, M. Buser, E. Kajari, W.P. Schleich, E.M. Rasel, and R. F. O'Connell, "New Frontiers at the Interface of General Relativity and Quantum Optics", in "The Nature of Gravity", Space Science Reviews 148, 123-147 (2009) Everitt, C.W.F.; Huber, M.C.E.; Schafer, G.; Schutz, B.F.; Treumann, R.A. (Eds.). 311. R. F. O'Connell, "Rotation and Spin in Physics", in "General Relativity and John Wheeler", ed. I. Ciufolini and R. Matzner, (Springer, 2010). 312. G.W. Ford, Yang Gao, and R. F. O'Connell, "Entanglement without dissipation: A touchstone for an exact comparison of entanglement measures", Optics Comm. 283, 831 (2010). 313. G. W. Ford and R. F. O'Connell, "Decay of Coherence and Entanglement of a Superposition State for a Continuous Variable System in an Arbitrary Heat Bath," Inter. J. Quantum Information 8, 109 314. G. W. Ford and R. F. O'Connell, "Disentanglement and decoherence without dissipation at non-zero temperatures", Physica Scripta 82, 038112 (2010). 315. G. W. Ford and R. F. O'Connell, "Exact analysis of disentanglement for continuous variable systems and application to a two-body system at zero temperature in an arbitrary heat bath", J. Comput. Theor. Nanosci., 8, 1-7 (2011). 316. R. F. O'Connell, "Zitterbewegung is not an Observable", Modern Phys. Letts. A, 26, 469-471 (2011). 317. F. Intravaia, R. Behunin, P. W. Milonni, G. W. Ford and R. F. O'Connell, "Consistency of a Causal Theory of Radiative Reaction with the Optical Theorem", Phys. Rev. A, 84, 035801 (2011). 318. R.F. O'Connell, "Radiation reaction: general approach and applications, especially to electrodynamics," Contemporary Physics, 53(4), 301-313 (2012). 319. R. F. O'Connell, "Two oscillators in a common heat bath," Phys. Scr. T151, 014045 (2012). 320. P Kazemi, S Chaturvedi, I Marzoli, R F O’Connell and W P Schleich, "Quantum carpets: a tool to observe decoherence," New Journal of Physics 15, 013052 (2013). 321. G. W. Ford and R. F. O’Connell, "Lorentz transformation of blackbody radiation," Phys. Rev. E 88, 044101 (2013)
{"url":"http://www.phys.lsu.edu/faculty/oconnell/oconnell_pubs.html","timestamp":"2014-04-21T14:39:57Z","content_type":null,"content_length":"190356","record_id":"<urn:uuid:c3d9ffa5-2adb-4f7d-82cb-6d682857704d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Tenafly SAT Math Tutor Find a Tenafly SAT Math Tutor ...And I've completed two years of medical school before switching my focus to research and education. Over the past two years, I've been teaching Physics, Chemistry, Biology, Organic Chemistry, and Calculus at the College Level and preparing students for their MCATs, DATs, and GREs. I've been teaching 3D-animation with Maya for over 4 years. 83 Subjects: including SAT math, chemistry, calculus, physics ...Whether a student needs to learn addition or upper level algebra, basic reading skills or SAT level English, I can help. I will methodically and patiently work step by step to make the material easy. I instruct students ranging from pre-K to adult. 30 Subjects: including SAT math, English, reading, writing ...I have tutored many hours of SAT Math, as well as all of the relevant high school material. I received my bachelor's of science in biology from SUNY Geneseo. My degree included required coursework in physics, chemistry, organic chemistry, and biochemistry. 24 Subjects: including SAT math, chemistry, physics, geometry ...I love words, and I think it helps students that I'm able to define the words we encounter in a fun and relatable way, without the aid of a dictionary. I also teach a number of memory techniques to assist students in building their vocabularies and to aid in the memorization they do for other su... 36 Subjects: including SAT math, reading, chemistry, English Hello my name is Andres. I was a language teacher in my native country teaching English as a second language for native students and Spanish as a second language for foreign students. I am currently finishing my second major in engineering science. 9 Subjects: including SAT math, Spanish, calculus, geometry
{"url":"http://www.purplemath.com/tenafly_sat_math_tutors.php","timestamp":"2014-04-16T19:04:12Z","content_type":null,"content_length":"23691","record_id":"<urn:uuid:2d065cb6-b7be-48f1-ab9b-46260d7b8a94>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Nikolai Dmetrievich Brashman Born: 14 June 1796 in Rassnova: near Brünn, Austria-Hungary (now Brno, Czech Republic) Died: 13 May 1866 in Moscow, Russia Click the picture above to see a larger version Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Nikolai Dmetrievich Brashman was born into a Jewish merchant family that was rather poor. He was educated at home before entering the Vienna Imperial and Royal Polytechnical Institute soon after it was founded in 1815. The Vienna Polytechnical Institute changed its name to the Vienna Technische Hochschule in 1872 and, just over a hundred years later, became the Vienna University of Technology. In its early days when Brashman studied there, the number of courses that it offered was very limited and those that it did offer were of a very applied nature totally lacking scientific rigour. Brashman was not satisfied with the Polytechnical Institute but this was not his only problem for he also had financial difficulties and his family was so poor that he had to make money as a private tutor to enable him to support himself. In order to get a more rigorous education, in addition to attending courses at the Polytechnical Institute, Brashman enrolled in courses at the University of Vienna. There he was taught by Joseph Johann Littrow (1781-1840), an Austrian who had worked in Russia at Kazan University before being appointed professor of astronomy at Vienna in 1819. At first Littrow felt that Brashman had such a poor educational background that he would not be able to succeed in studying at university level. However, Brashman soon showed that he did indeed have the ability not only to overcome his weak grounding but to produce work of high quality. Littrow and Brashman became good friends and the friendship continued until Littrow's death in 1840. Brashman graduated from the University of Vienna in 1821, but continued to undertake research at the university. Later in 1821, on the recommendation of Littrow, Brashman was given a position in the house of Prince Yablonovsky in Lemberg (now Lviv) as the tutor of his children. Two years later, in 1823, with several letters of recommendation and a small amount of money, Brashman went to St Petersburg in Russia. In St Petersburg, Brashman was supported by Princess Evdokia Ivanovna Golitsyna (nee Izmailova) (1780-1850). She had been the wife of Sergey Mikhailovich Golitsyn (1774-1859) but by this time they had separated. She was known by the nickname Princesse Nocturne and owned the literary salon at 30 Millionnaya Street which was visited by Pushkin and other leading people. She was very enthusiastic about advanced mathematics and metaphysics and had a particular interest in mechanics writing an essay Analyse des forces. The Princess was friends with many leading mathematicians and this provided a good way for Brashman to become known. In January 1824 Brashman was appointed to teach mathematics and physics at Saint Peter and Saint Paul's School in St Petersburg. This school had a long history having been founded in 1709 as part of a Lutheran church and school in Millionnaya Street. He taught there for a year before accepting a post in the Faculty of Physics and Mathematics of the University of Kazan in March 1825. There he taught mathematics, spherical astronomy and mechanics. At Kazan, Brashman became a colleague of Nikolai Ivanovich Lobachevsky and in fact he taught mechanics using Lobachevsky's lecture notes. In 1827 Lobachevsky became rector of the University of Kazan and the university flourished with a vigorous programme of new building, with a library, an astronomical observatory, new medical facilities and physics, chemistry and anatomy laboratories being constructed. Brashman also took on a number of administrative roles but he was very definite that his greatest satisfaction was gained through teaching and research. During his nine years at Kazan he gained a high reputation both as a scientist and as a professor. The year 1830 proved a difficult one for everyone at the university when a cholera epidemic struck but Brashman played his part in minimising the damage. Brashman became professor of applied mathematics in the University of Moscow in August 1834. His initial appointment was as an extraordinary professor but, in January 1835, he was promoted to a full professorship. This was a position that he held until he retired in 1864. At Moscow he promoted the subject which he loved most, namely mechanics. He did this by fine teaching, writing excellent textbooks and research articles. For example, he published the textbook Course in Analytical Geometry (Russian) in 1838. A T Grigorian, writing in [1], says:- In his lectures on mechanics and in his articles Brashman not only tried to show the achievements of this science, but also worked out its most difficult sections. He also prepared textbooks for Russian institutions of higher education. His texts on mathematics and mechanics reflect the state of science at that time, and his proofs of important theorems show originality, clarity and comprehensiveness. Brashman wrote one of the best analytic geometry texts of his time, for which the Russian Academy of Sciences awarded him the entire Demidov Prize for 1836. The following year his textbook The theory of equilibrium of solid and liquid bodies (Russian) on mechanics, covering statics and hydrostatics using a highly original presentation, again won him the whole of the Demidov Prize. Brashman toured through Germany, France and England in 1842, where he met with leading European mathematicians. He was in England for the Twelfth Meeting of the British Association for the Advancement of Science held in Manchester in June 1842. He gave a talk entitled Considerations on the Principles of Analytical Mechanics. He began his talk as follows:- The principle of virtual velocities, on which is based the theory of equilibrium and of motion, has not, in my opinion, been explained in a manner which is clear and unobjectionable ; and I am also inclined to believe that the problem of equilibrium has not been treated analytically in a point of view sufficiently general, and that there are still many observations to be made on the correctness of the application of the principle of virtual velocities to certain problems. Similar observations may be made also with regard to the theory of motion. Mikhail Vasilevich Ostrogradski brought forward, some years ago, some new and general ideas on the laws of equilibrium and of motion in two memoirs, one of which bears the title, "On the Momenta of Forces;" and the other, "On the instantaneous Displacements of the points of a System." Profiting by his enlightened views, I published, in 1837, a treatise in the Russian language on the equilibrium of solid and fluid bodies, from which I will now give a very short extract relating to the method I have there followed, and I shall add some observations which escaped me at the time of the publication of that work, respecting the number and the character of conditions of equilibrium. In 1844, back in Moscow, he set up a new course on practical mechanics which links theoretical and technical mechanics. Brashman wrote research articles on the Principle of Least Action which are important in the development of mechanics. In 1859 he published the article Principle of Minimum Action (Russian) and, in the same year, he published the textbook Theoretical Mechanics (Russian) which considered both the equilibrium and the motion of a point and of a system of points. In 1861 he published the article On the application of the Principle of Minimum Action to the determination of water volume in a spillway and, in the same year he published Note concernant la pression des wagons sur les rails droits et des courants d'eau sur la rive droite du movement en vertu de la rotation de la terre in Comptes rendus of the Paris Academy of Sciences. In this paper he tried to prove that the rotation of the Earth puts pressure on the same rail of a straight track of a railway irrespective of the direction of travel. Another aspect of Brashman's work for which he is remembered is for his founding of the Moscow Mathematical Society which grew out of meetings held in Brashman's own home. The first meeting of the society was 15 September 1864 when Brashman was elected as the first president and August Yulevich Davidov was elected vice-president. Brashman held this position until his death in 1866 when Davidov became president. Brashman's aims for the Society were, at first, quite limited since it was intended only for those with a Master's Degree (or higher degree) in a mathematical discipline or for those with at least one important publication. In many ways the intention was to provide the members with mutual support in their research. At the first meeting of the Society only one aim was stated, namely that "the goal of the Society is mutual cooperation in the study of the mathematical sciences". At this stage, the Society was small with only 14 members and only one, namely Pafnuty Lvovich Chebyshev, holding a position outside Moscow. However Brashman quickly became more ambitious for the new Society and, at a meeting in January 1866, the aim had extended to become a Russian wide Society; "The goal of the organisation of the Society is to promote the development of mathematical sciences in Russia." Brashman also set up the Journal of that Society, Matematicheskii Sbornik , the first part of which appeared in the year of his death. This first part contains the paper Find the pressure of a river at its bank resulting from the Earth rotation about its axis (Russian) by Brashman had a number of outstanding students, including Pafnuty Lvovich Chebyshev and Osip Ivanovich Somov. We note that Brashman's students had a huge respect for him both as a mathematician and as a person. For example, Chebyshev felt that he had been inspired by Brashman and asked him for a photograph that he might keep with him; indeed he still had Brashman's photograph at the time of his Finally, let us note that Brashman was a strong believer in the power of mathematics. For example he delivered the speech On the influence of the mathematical sciences on the development of mental facilities (Russian) on 17 June 1841 at a commemorative ceremony in Moscow University. This was intended as a refutation of William Hamilton's essay On the study of mathematics as an exercise for the mind published in the Edinburgh Review in 1836, in which Hamilton claims not only that mathematics is useless in developing mental facilities but he even claims that it is pernicious. Brashman gives a strong refutation of William Hamilton's ideas. The speech had considerable influence, for example Viktor Yakovlevich Bunyakovsky's 1846 book on probability Foundations of the mathematical theory of probability (Russian) was motivated by it as was Chebyshev's 1844 thesis An essay on elementary analysis of the theory of probabilities (Russian). Brashman was honoured for his contributions with election to the St Petersburg Academy of Sciences in 1855. He is described in [3] as follows:- Free from all prejudices, he led his life quietly not looking for practical help and not expecting or demanding gratitude from those who he helped. He was not upset when his honest and well-intentioned actions were wrongly interpreted and he walked the path of an honest man who could not be deviated from that path by any circumstances. Article by: J J O'Connor and E F Robertson List of References (6 books/articles) A Poster of Nikolai Dmetrievich Brashman Mathematicians born in the same country Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. Famous curves Time lines Birthplace maps Chronology Search Form Glossary index Quotations index Poster index Mathematicians of the day Anniversaries for the year JOC/EFR © January 2014 School of Mathematics and Statistics Copyright information University of St Andrews, Scotland The URL of this page is:
{"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Brashman.html","timestamp":"2014-04-17T12:42:36Z","content_type":null,"content_length":"21585","record_id":"<urn:uuid:be99224e-c769-4abd-baa3-832389096ab3>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Discrete Math - Vectors Posted by Anonymous on Sunday, February 10, 2008 at 1:32am. Can you please help me correct my answers for the following two questions? 1) A tour boat travels 25 km due east and then 15 km S50°E. Represent these displacements in a vector diagram, then calculate the resultant displacement. My Work: I drew the vectors and connected them head to tail to form a triangle. Then, I found the resultant displacement: |r|² = 25² + 15² - 2(15)(25)cos130 √|r|² = √1332.1 |r| = 36.5 km Now, I'm having some problems finding the direction of the resultant displacement: (sin50/36.5) = (sinC/25) 31.6° = C So, I get 36.5 km S31.6°E The textbook answer is 36.5 km S54°E 2) Vectors a and b have magnitudes 2 and 3, respectively. If the angle between them is 50°, find the vectors "5a - 2b", and state its magnitude and direction. Here's my work for this question: I drew the vectors and connected them head to tail. I tried to find the magnitude by: 5a - 2b = 5(2a) - 2(3b) = 10a - 6b |r|² = 10² + 6² - 2(10)(6)cos130 √|r|² = √213 |r| = 14.6 Then I tried to find the direction by: (sinx/3) = (sin130/14.6) x = 9.1° So, I get 14.6, 9.1° to vector a Textbook answer is 7.7, 37° to vector a • Discrete Math - Vectors - Reiny, Sunday, February 10, 2008 at 9:06am I will do the second question first, since I have a question about your interpretation of S50E from the first. I drew the 2 unit vector to run east, then the 3 unit angle downwards to form the 50 degree angle. so when you construct 5a - 2b, you would draw a horizontal line 10 units long for the first part, then you must go into the opposite direction of the second vector for 6 units so the magnitude equation would be |r|² = 10² + 6² - 2(10)(6)cos50 and r = 7.67 now if x is the angle between the resultant and the first vector sinx/6 = sin50/7.67 I got x = 36.8 degrees back to your first problem, I was always under the impression and taught that a direction like your S50°E meant: face south then 50 degrees towards the east, so I thought that the end part of your equation should have been ....cos140 you had....cos130 but the ....cos130 produced the answer supplied by your text, so I am confused. I am using the Canadian interpretation of S50°E, is it different where you are??? Perhaps some of the other math or physics expert could help out here Related Questions math - a boat travels 60 km due east. it then adjusts its course by 25 degrees ... math - a boat travels 60 km due east. it then adjusts its course by 25 degrees ... calculus - Boat A leaves the dock at 12pm heading due South at a constant speed ... calculus - a boat leaves a dock at 2:00 pm and travels due south at a speed of ... physics 111 - A bicycle travels 3.2 km due east in 0.10 h, then 5.6 km at 15.0° ... math - A boat starts at point A, moves 3 km due north, then 2 km due east, then ... math - a boy travels 15 km due north,then goes 9 km due east,then 3km due south.... Advanced Maths (Vectors) AQA Level, please help? - Two ships, A and B st out ... Calculus - Can anyone help me with these two optimization problems? A boat ... Math (urgent) hw due tommorow! - The speed of a stream is 3 km/h. A boat travels...
{"url":"http://www.jiskha.com/display.cgi?id=1202625125","timestamp":"2014-04-20T12:21:01Z","content_type":null,"content_length":"10987","record_id":"<urn:uuid:46e932a3-03d4-4f1a-a9b6-af931316a7c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Zero-knowledge proof that 0 = 1 up vote 5 down vote favorite Suppose one day I came up with a proof that 0 = 1 in some formal system such as PA or ZFC that cannot prove its own consistency (unless it is inconsistent). Would it be possible to have a zero-knowledge proof of this? In other words, would it be possible for me to convince you with high probability that I had derived such a proof without (feasibly) revealing the proof of the contradiction? (I haven't by the way...found such a contradiction.) lo.logic computational-complexity proof-theory 0=1 in the zero ring... – Justin Campbell Dec 10 '10 at 5:37 2 Jason: I'm not sure what value there is in your question as is, as it sounds too much like idle speculation. Nonetheless, I think you should remind MO users that downvoting a question without useful feedback goes against MO guidelines and is not particularly helpful. – Thierry Zell Dec 10 '10 at 13:32 As for downvoting, you just did, and I agree. To your other point, I think the value of this question is asking whether if I have a proof of something, can I somehow hide it from you and yet still reveal something to you that would reasonably prove that I had found it? However, the question in its current formulation is not as I had intended, and I'm sorry I posted it like this. In fact, I had accidentally posted it before I had finished thinking about it, and then I mistakenly left it up like this. I really would have to think of a better formulation of this to qualify as a real question. – Jason Dec 10 '10 at 15:26 While I agree with everything said in David Feldman's answer (except I would attribute the first thing he said to Gödel's Second Incompleteness Theorem), I'm not sure how this answers the question. In my humble opinion, David Harris's answer best addresses the question. – Jason Dec 10 '10 at 15:32 @Jason Sure, my parenthetical belongs to Gödel, but Gentzen proved the consistency of PA using transfinite induction. – David Feldman Dec 11 '10 at 3:37 add comment 4 Answers active oldest votes In this setting, the adversary seeks to find a deduction $\phi_0, \dots, \phi_n$ of $P \wedge \neg P$ quickly. If ZFC, for example, is inconsistent, there exists such a deduction and hence there exists a (constant time) adversary, which simply publishes $\phi$. up vote 5 down vote In order to have a zero-knowledge proof problem, one needs a family of problems, for which the adversary's task becomes increasingly hard as $n \rightarrow \infty$. With just one theory, accepted such as ZFC, this does not happen. But one may view ZFC as one theory among many possible theories that use the same language. One has to work a little because ZFC isn't finitely axiomatizable, but one can isolate a class of theories with easily checkable proofs (this involves recognizing axioms as well valid deductions). One then gets various decision problems, in NP, that ask whether a given theory leads to a "short" contradiction. The length of a hypothetical ZFC contradiction will satisfy any number of bounds in terms of the length of a specification of ZFC, hence various candidates for a zero-knowledge mechanism. – David Feldman Dec 10 '10 at 21:02 Technically, if the length of the proof is, say, 10,000 then both 10,000$^2$ and $2^{10,000}$ are $O(1)$ constants, so the asymptotic analysis doesn't apply and you can't technically say it's a zero-knowledge proof. But in practice, if you need a computation of length $2^{10,000}$ to deduce any information from the proof, it's effectively zero-knowledge. – Peter Shor Jan 23 '11 at 15:08 add comment Well you're not going to prove 0=1 in PA, because PA is consistent, (though not PA-provably so), following Gentzen. But I digress. If you proved 0=1 in, say, ZFC, that would simply mean that ZFC was inconsistent - that the entities it purported to describe had no reasonable interpretation and that logical conclusions derived from the axiom had, in general, no bearing on the world. In particular, it would be irrelevant that you had proved P = NP. But I still digress. My main point: your 0=1 proof is a purely combinatorial object - a symbol sequence that satisfies syntactic constraints that can be checked in polynomial time. The standard Zero-Knowledge up vote Proof technology would apply to this proof just as to any other. The cataclysmic semantics of the proof's conclusion would simply be irrelevant. 12 down vote Surely if ZFC turns out inconsistent, much of set theory could still be saved by suitably weakening say, the particular axiom whose self-evidence turned out illusory. (Consensus in the short term concerning which axiom to give up might turn out difficult to achieve). At the end of the day, the offending axiom would simply seem overambitious, just as the occasional large cardinal axiom turns out to be a turkey, roadkill on the transfinite superhighway if you will. Most of classical mathematics will still go through intact, and the theory of finite sets, PA essentially, already strong enough to articulate the P=NP conjecture, will remain consistent. @David Feldman: mostly a lovely answer, but I’m not sure I follow your first paragraph; to find Gentzen’s proof convincing one has to accept induction over $\epsilon_0$ self-evident, and in that case one surely accepts PA as self-evident, hence consistent, anyway? (Or maybe you were being more tongue-in-cheek there than I realised.) – Peter LeFanu Lumsdaine Dec 10 '10 at 5 @Peter: Although Gentzen used induction over a bigger ordinal ($\varepsilon_0$) than PA does ($\omega$), he uses it for far simpler formulas (essentially quantifier-free) whereas PA uses induction for formulas with arbitrary alternations of quantifiers. So it does not seem absurd to me to regard Gentzen's assumptions as somehow more evident than those incorporated in PA. In effect, Gentzen exhibited a trade-off between the length of the induction and the complexity of the statement to be inductively proved. – Andreas Blass Dec 10 '10 at 15:53 @Peter Exactly, I guess, as tongue-in-cheek as Gentzen and his proof. So, true confession - I've never understood the real world interest in seeking consistency proofs for a system within that system. After all, if you find one, what does it tell you, for real anyway? Well, either it tells that the system is really consistent (provided you believe that the axioms model your metamathematical practice), or that it's really inconsistent (since then system proves everything anyway, including the formal statement of its consistency). – David Feldman Dec 10 '10 at 20:50 add comment I doubt it. If nothing else, you would have a proof of P=NP as well, and since zero-knowledge proofs depend on the hardness of certain problems, you would "have a proof" that you could not have a zero-knowledge proof. I suggest rephrasing the question so that it is less likely to be closed. Perhaps something like "Has anyone considered the impact of inconsistent theories on zero knowledge proofs (and up vote 1 published their considerations)?" down vote Gerhard "Ask Me About System Design" Paseman, 2010.12.09 But maybe being able to prove P = NP would mean that you could prove that 0 = 1 so that wouldn't be a problem? This question may have answers with paraconsistent logics, but I may be treading on dangerous ground here with the formality of this question especially since this is outside my area. I'm considering deleting it. – Jason Dec 10 '10 at 5:41 Do as you wish. I do think the question can (and should) be improved to something a little more appropriate. Gerhard "Ask Me About System Design" Paseman, 2010.12.09 – Gerhard Paseman Dec 10 '10 at 5:45 I agree there is some value in this question, but I'd feel more comfortable if people more knowledgable in paraconsistent logics edited it. Accordingly, I decided to make this a community wiki. – Jason Dec 10 '10 at 6:00 Well, you could interpret $0 = 1$ in the sense that $0$ is the additive identity and $1$ is the multiplicative identity for $\mathbb{R}$ (and in addition, that the identity element is unique for both addition and multiplication in $\mathbb{R}$). The "problem", of course, is that the additive identity does not have a multiplicative inverse (at least in $\mathbb{R}$). (You would typically call that a "pseudoproblem", but in my mind, that is the "central problem".) – Jose Arnaldo Dris Dec 10 '10 at 8:28 1 There is obviously something I'm overlooking here. But: proving that P=NP does not make it true, if the proof exists in an inconsistent axiom system. So Jason would have a "proof" that zero-knowledge proofs don't exist, but could still produce one! Do I have a point, or did I miss something? – Thierry Zell Dec 10 '10 at 13:29 show 1 more comment The [[PCP theorem]] says you can give such an argument for any theorem you have a proof of, not just 0=1. Hmm, maybe it relies on consistency though. up vote -1 down vote http://en.wikipedia.org/wiki/PCP_theorem add comment Not the answer you're looking for? Browse other questions tagged lo.logic computational-complexity proof-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/48880/zero-knowledge-proof-that-0-1","timestamp":"2014-04-18T00:42:25Z","content_type":null,"content_length":"83213","record_id":"<urn:uuid:018d5e32-6787-46e4-80ef-88261610d36b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
One of the most difficult problems of computer graphics today is producing truly robust algorithms. Working in floating point coordinate system introduces a lot of subtle problems. Computational geometry approaches the problem from two directions: 1) Implementing algorithms on top of rational numbers. 2) Introducing rounding phases to algorithm, usually in a post processing stage. For example lets look at a pretty popular example, based on the one from Hobby's paper. In this example we have three line segments with one intersection between them. Now if we round up the intersection to integer coordinates two new intersections are introduced - ones that are completely incorrect. Although we don't round up that heavily the following example scales into infinity as you uniformly divide coordinates by the same number. There are two main reasons why this behavior is so undesirable for graphics. 1) It produces obviously incorrect results. 2) Optimized algorithms often break down due to those inconsistencies. The first one is actually less important - on this scale the degeneration of our results is likely to produce errors that won't have too significant effects on what you see on the screen. Two is a lot more severe. For example a lot of the more popular geometrical algorithms includes a step of finding segments containing a point. Assuming that a segment has floating point coordinates and the point is, as well, in floating point coordinates - the containment test will very quickly break down as the limited precision will hit the computations. So there's no way to reliable perform that test, due to which all those algorithms will simple fail. As a result, an incredibly small rounding error will likely cause a complete failure of an algorithm and because of that small error you won't see any results on the screen. That's a serious problem. Going back to the solutions to this problem. John Hobby wrote a paper entitled "Practical Segment Intersection with Finite Precision Output". Algorithm which he introduced in that paper has since been named "snap rounding". Numerous papers have been written since about this algorithm, some of the better ones include "Iterated Snap Rounding" by Halperin and Packer and "Snap Rounding Line Segments Efficiently in Two and Three Dimensions" by Goodrich and Guibas. The basic principle of this algorithm is very simple; every event point is computed within some tolerance square. Every tolerance square has its middle computed and all events that we encounter, whose coordinates fall withing already existing tolerance square are "bent" in a way that changes their coordinate to be equal to the middle of the tolerance square. This technique works surprisingly well. The second method, often employed by professional grade geometrical software, involves replacing float/double by a custom "rational" number class. They produce mathematically correct results, which is ideal. The reason why we can't really even consider them is that they're about 30+ times slower than doubles. Now the reason I went on this rant is that I've been testing results of my new algorithm for rasterizing polygons, which involves decomposition, and even though on paper the algorithm was always correct and the unit tests were all passing, the algorithm was falling apart in some cases. I've spent a while going through my intersection detection code, snap rounding implementation and couldn't find anything. Then I looked at some of the data files I had and they were only in limited precision of .001 which is not even close enough to being precise enough for floating point intersections/ coordinate systems. Feel my pain =) 7 comments: You also can arbitrarily extend the precision of IEEE-754 floating-point using only floating-point operations until there's enough to satisfy the relations you need. See Jonathan Shewchuk's triangle mesh generator for an explicit example. The trade-off is that the time to execute a geometry query or op is no longer constant. Ignacio said... Zack, a good introduction to numerical robustness issues in geometric predicates is Christer Ericson book. His presentation at last year's GDC was fairly interesting, and apparently he repeated it this year. The ppt on his site is perfectly readable with oo.org. As you know, when it comes to drawing polygons, I prefer not to do tesselation to avoid the precision issues entirely, and use stencil methods instead. With the speed of current GPUs, it's usually faster, but in the most convoluted cases. And if you don't have a decent GPU you are better off using a scanline algorithm anyway. pinheiro said... Well im a user of vector graphics, i i realy lock foward to se some development in the opensource vector precision front. I come form a autocad world, and precision is their game. but now in inkscape and most all other opensource vector drawing tools and renderes i have lots of problems, the major ones come from what you talkd about and they are very visible wen for exemple i do somthing like, object to path that it the drawing is smaler than lets say 10 px it will get totaly broken with small mistakes and lots of uneaded data. The secong error is gradient precision if for exemple you make 3 excatly the same 100% alpha to 0% apha gradients and place them all on top onother and set general alpha to a level of lets say 5% you dont get a linear result what you get is a several bands of the same alpha level and that ruins the efect. Zack said... @anonymous: yes, thanks for pointing it out :) @Ignacio: take the following two simple examples: poly0 poly1 (with data points for them recorded as x,y coordinates for all in: data ) now lets say those things need to be filled with a simple linear gradient. I'm not aware of any method that would render those. Even with shaders you have a serious problem because containment test done for every pixel will be very expensive plus you'll have to count on the fact that implementors won't use any shortcuts on GL_EXT_framebuffer_multisample (like antialiasing only edges) to hope for any kind of antialiasing on those objects. Scanconversion of polygons is a technically unbound operation and since the X server is single threaded right now rendering of huge polygons could starve all clients - which is why we force client side trapezoidation before at the moment :) Ignacio said... Zack, the stencil method to draw polygons operates in two different passes. First you render to the stencil only, disabling writes to the color and depth buffer. The stencil mode depends on the filling mode of the polygon and on whether you have a previous clipshape. The polygon is rendered linearly, you choose one vertex and draw a triangle for every edge. For the xor filling mode you only need one bit of stencil and set the stencil to increment. This mode is pretty fast, and GPUs usually render fragments 4 times faster when in this mode. Once you have the stencil mask ready, you only have to render a quad on top of it. In this second pass you only write to the color buffer and use the stencil test to do the containment test. You only write the color when the stencil is not zero. This is very efficient and happens before the evaluation of the fragment program. Then you can use any fragment program to implement gradients and texture reads. In any case, I understand that implementing this in the X server is complicated, but using this method in Qt, instead of the existing glut polygon render should be fairly easy. The main problem is the extreme overdraw that could happen in some shapes, this can be reduced by breaking up the polygon in smaller pices, but that would increases the complexity of the algorithm. Antialiasing quality depends directly on what the GPU supports, and with 4xAA you only get 16 level of greys. You can render to a bigger texture and downsample to achieve higher aa I don't understand your statement about scan conversion. Can't you yield at the scanline level? Maybe we should continue the conversation by email, writting in this small box is really tedious... Zack said... Yes, I agree with most of what you said except that you keep forgetting the major problem that you can't render the primitives we want. It's not in which way do we render them that's the question, it's how do we render them at all. They're complex polygons, so you need to decompose into at least simple polys because OpenGL just can't handle them otherwise. Using GLU Nurbs tessellator is really slow which is why we're experimenting with other methods. Try for example to render the second poly from the two I gave above with odd-even rule. Also note that any kind of decomposition will yield precision problems here, no matter whether we're doing triangulation, trapezoidation or simply untangle the polygon. But yeah, lets continue via email, I think I still have an email from you that I haven't responded to =) Ignacio said... I don't forget that, avoiding the triangulation is the main purpouse of using the stencil to draw arbitrary polygons. The stencil method does not require any decomposition, nor any intersection test, nor any expensive CPU work.
{"url":"http://zrusin.blogspot.com/2006/05/precision.html","timestamp":"2014-04-17T03:48:33Z","content_type":null,"content_length":"89175","record_id":"<urn:uuid:64fe79a0-88f7-4bdc-b557-60cb425eb8d3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Equation of plan June 24th 2008, 05:20 PM #1 Super Member Jun 2008 Equation of plan Finding the equation of the plan that passes the straight r: $\frac{(x-1)}{3}=\frac{(y-3)}{2}=\frac{(z+2)}{-1}$ and is parallel to the straight s: How do I find the vector of directional straight (s)? June 25th 2008, 07:04 AM #2 Super Member Jun 2008
{"url":"http://mathhelpforum.com/calculus/42345-equation-plan.html","timestamp":"2014-04-20T19:10:14Z","content_type":null,"content_length":"30247","record_id":"<urn:uuid:0372a3b1-b50d-4511-8c8a-12a0f13eab1b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 66 HFG TUTORIALS Version 1.0 Tutorial 5: Determining Appropriate Sign Placement and Letter Height Requirements When determining the appropriate sign placement, it is important to consider a number of driver-related factors. The Traffic Control Devices Handbook (Pline, 2001) describes a process that utilizes these factors and is the basis for the steps described below. This method is mostly focused on guide and informational sign applications. Step 1. Calculate the Reading Distance The reading distance is the portion of the travelling distance allotted for the driver to read the message, based upon the time required to read it (reading time). The Traffic Control Devices Handbook outlines two methods for calculating the reading time. The first method, used by the Ontario Ministry of Transportation, is described in the following three steps: 1. Allocate 0.5 s per word or number and 1 s per symbol, with a 1-s minimum for the total reading time. This time should only include critical words. Drivers do not need to read every word of each destination listed on a sign to find the one they are looking for. For ex- ample, assume they are reading a sign with two destinations: Mercer St. and Union St., each with a direction arrow. Drivers only need to read the word Mercer to realize that is not the street they are looking for and the word Union to know that is their destination. They then only need to look at the arrow for Union St. 2. "If there are more than four words on a sign, a driver must glance at it more than once, and look back to the road and at the sign again. For every additional four words and numbers, or every two symbols, an additional 0.75 s should be added to the reading time." (Ontario Ministry of Transportation Traffic Office, 2001) 3. If the maneuver does not begin before the driver reaches the sign, add 0.5 s to the reading time. This extra time is to account for the extreme viewing angle immediately before the driver passes the sign, which prohibits reading. If the maneuver has already begun, the driver does not need to continue to read the sign, and thus does not need more time. These three steps are summarized in Table 22-7. Table 22-7. Three-step method for calculating base reading time. Step 1 Step 2 Step 3 Does the maneuver initiate Base Reading Time (BRT) Are there more than 4 words? before passing the sign? BRT (s) = 0.5x + 1y Yes: Add time based on the BRT Yes: Add 0 s where: 2 < BRT 4 Add 0.75 s x = the number of critical words/ 4 < BRT 6 Add 1.50 s numbers in the message 6 < BRT 8 Add 2.25 s y = the number of critical symbols ...etc in the message No: Add 0 s No: Add 0.5 s 22-39 OCR for page 66 HFG TUTORIALS Version 1.0 Another method for calculating reading time, cited in previous studies, applies to complex signs in high-speed conditions. The formula provided is: Reading Time (s) = 0.31 (Number of Familiar Words) + 1.94 After finding the reading time, convert it into a reading distance by multiplying by the travel speed. Step 2. Calculate the Decision Distance The decision distance is the distance required to make a decision and initiate any maneuver, if one is necessary. After reading the sign, the driver needs this time to decide his/her course of action based upon the sign's message. Decision times range as follows: · 1 s for simple maneuvers (e.g., stop, reduce speed, choose or reject a single destination from a D1-1 sign) · 2.5 s or more for complex maneuvers (e.g., two choice points at a complex intersection) After finding the decision time, convert it into the decision distance by multiplying by the travel speed. Step 3. Calculate the Maneuver Distance The maneuver distance is the distance required to complete the chosen maneuver. The maneu- ver distance depends on the course of action decided upon by the driver and the travel speed. The sign placement should consider all of the maneuvers that could be chosen based upon the message. An example of required maneuver distances is provided in Table 22-8 for lane changes in preparation for a turn. These distances do not apply to situations in which drivers must stop. For high-volume roadways, more time may be needed to find a gap, while for low-volume roadways, some of the deceleration distance may overlap with the lane change distance. Table 22-8. Maneuver distances required for preparatory lane changes. Operating Speed (mi/h) Gap-Search Distance (ft) Lane Change Distance (ft) Deceleration Distance (ft) Non-Freeway Maneuver Distance Requirements 25 66 139 77 35 92 195 154 45 119 251 257 55 145 306 385 Freeway Maneuver Distance Requirements 55 218 306 308 65 257 362 462 70 277 390 549 Source: Pline (2001) 22-40 OCR for page 66 HFG TUTORIALS Version 1.0 Step 4. Calculate the Information Presentation Distance The information presentation distance is the total distance from the choice point (e.g., inter- section) at which the driver needs information. This distance is calculated using the following formula: Information Presentation Distance = Reading Distance + Decision Distance + Maneuver Distance Step 5. Calculate the Legibility Distance The legibility distance is the distance at which the sign must be legible. This distance is based upon the operating speed and the advance placement of the sign from the choice point. The leg- ibility distance is calculated using the formula below: Legibility Distance = Information Presentation Distance - Advance Placement Step 6. Calculate the Minimum Letter Height The minimum letter height is the height required for the letters on the sign based upon the legibility distance calculated above. It is also based upon the legibility index provided in the MUTCD (30 ft/ in.). Legibility Distance (ft) Minimum Letter Height (in.) = Legibility Index (ft/in.) Another consideration is the minimum symbol size. The minimum symbol size is based upon the legibility distance of the specific symbol that is being used. Table 22-9 contains daytime leg- ibility distances for five types of symbols based upon research (Dewar et al., 1994). From these legibility distances, we can obtain two general trends: (1) legibility distances vary by sign type and (2) legibility distances are greatly reduced for older drivers. Legibility distances for symbols are generally greater than for word messages. Example Application As an example, a driver approaches an intersection on a 35-mi/h (51 ft/s) roadway. The driver needs to read a simple designation sign (D1-1) that contains one destination word and Table 22-9. Daytime legibility distances of five symbol types by age group. Daytime Legibility Distances (ft) Symbol Type Number of Signs Young Middle-Aged Old Mean Warning 37 736.4 714.7 581.5 677.6 School 2 573.3 634.7 501.2 569.7 Guide 21 472.3 461.5 366.0 433.3 Regulatory 12 464.4 437.9 367.4 423.1 Recreational 13 321.1 292.6 228.9 280.8 22-41 OCR for page 66 HFG TUTORIALS Version 1.0 one symbolic arrow. The sign is placed 200 ft in advance of the intersection. The legibility index is assumed to be 30 ft/in. (FHWA, 2009). See Figure 22-9. 1. Reading Distance (ft) = [(1 s/word)(1 word) + (0.5 s/symbol)(1 symbol)](51 ft/s) = 77 ft 2. Decision Distance (ft) = (1 s/simple decision)(1 simple decision)(51 ft/s) = 51 ft 3. Maneuver Distance (ft) = Gap Search + Lane Change + Deceleration = 92 ft + 195 ft + 154 ft = 441 ft 4. Information Presentation Distance (ft) = Reading Distance + Decision Distance + Maneuver Distance = 569 ft 5. Legibility Distance = Information Presentation Distance ­ Advance Placement = 569 ft ­ 200 ft = 369 ft 6. Letter Height = (369 ft)/(30 ft/in.) = 12 in. (when rounded to the nearest inch) Information Presentation Distance (569 ft) Reading Decision Maneuver Distance (441 ft) Distance Distance Gap Search Lane Change Deceleration (77 ft) (51 ft) (92 ft) (195 ft) (154 ft) Legibility Distance (369 ft) Advance Placement (200 ft) Figure 22-9. Graphic illustrating the example application of a driver approaching an intersection. 22-42
{"url":"http://www.nap.edu/openbook.php?record_id=14396&page=66","timestamp":"2014-04-21T03:05:08Z","content_type":null,"content_length":"54454","record_id":"<urn:uuid:2bba266c-0245-4cb2-989f-e65cdab40e7f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00169-ip-10-147-4-33.ec2.internal.warc.gz"}