| content (stringlengths 86–994k) | meta (stringlengths 288–619) |
|---|---|
Training time-dependence in neural networks
, 1989
"... The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for
temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precis ..."
Cited by 413 (4 self)
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal
supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that
they require nonlocal communication in the network being trained and are computationally expensive. These algorithms are shown to allow networks having recurrent connections to learn complex tasks
requiring the retention of information over time periods having either fixed or indefinite length. 1 Introduction A major problem in connectionist theory is to develop learning algorithms that can
tap the full computational power of neural networks. Much progress has been made with feedforward networks, and attention has recently turned to developing algorithms for networks with recurrent
connections, wh...
- IEEE Trans. Pattern Anal. Machine Intell , 1994
"... Abstract-Relaxation labeling processes have been widely used in many different domains including image processing, pattern recognition, and artificial intelligence. They are iterative procedures
that aim at reducing local ambiguities and achieving global consistency through a parallel exploitation o ..."
Cited by 39 (5 self)
Abstract-Relaxation labeling processes have been widely used in many different domains including image processing, pattern recognition, and artificial intelligence. They are iterative procedures that
aim at reducing local ambiguities and achieving global consistency through a parallel exploitation of contextual information, which is quantitatively expressed in terms of a set of “compatibility
coefficients.” The problem of determining compatibility coefficients has received considerable attention in the past, and many heuristic, statistical-based methods have been suggested. In this
paper, we propose a rather different viewpoint to solve this problem: we derive them attempting to optimize the performance of the relaxation algorithm over a sample of training data; no statistical
interpretation is given: compatibility coefficients are simply interpreted as real numbers, for which performance is optimal. Experimental results over a novel application of relaxation are given,
which prove the effectiveness of the proposed approach. Index Terms- Compatibility coefficients, constraint satisfaction, gradient projection, learning, neural networks, nonlinear
- Gesellschaft fur Mathematik und Datenverarbeitung, D-5205 St , 1991
"... The application of reinforcement learning to control problems has received considerable attention in the last few years [And86, Bar89, Sut84]. In general there are two principles to solve
reinforcement learning problems: direct and indirect techniques, both having their advantages and disadvantag ..."
Cited by 10 (5 self)
The application of reinforcement learning to control problems has received considerable attention in the last few years [And86, Bar89, Sut84]. In general there are two principles to solve
reinforcement learning problems: direct and indirect techniques, both having their advantages and disadvantages. We present a system that combines both methods [TML91, TML90]. By interaction with an
unknown environment, a world model is progressively constructed using the backpropagation algorithm. For optimizing actions with respect to future reinforcement, planning is applied in two steps: an
experience network proposes a plan which is subsequently optimized by gradient descent with a chain of model networks. While operating in a goal-oriented manner due to the planning process, the
experience network is trained. Its accumulating experience is fed back into the planning process in the form of initial plans, such that planning can be gradually reduced. In order to ensure complete
system identif...
, 1990
"... An extended feed-forward algorithm for recurrent connectionist networks is presented. This algorithm, which works locally in time, is derived both for discrete-in-time networks and for
continuous networks. Several standard gradient descent algorithms for connectionist networks (e.g. [48], [30], [28] ..."
Cited by 6 (4 self)
An extended feed-forward algorithm for recurrent connectionist networks is presented. This algorithm, which works locally in time, is derived both for discrete-in-time networks and for continuous
networks. Several standard gradient descent algorithms for connectionist networks (e.g. [48], [30], [28] [15], [34]), especially the backpropagation algorithm [36], are mathematically derived as a
special case of this general algorithm. The learning algorithm presented in this paper is a superset of gradient descent learning algorithms for multilayer networks, recurrent networks and time-delay
networks that allows any combinations of their components. In addition, the paper presents feed-forward approximation procedures for initial activations and external input values. The former one is
used for optimizing starting values of the so-called context nodes, the latter one turned out to be very useful for finding spurious input attractors of a trained connectionist network. Finally, we
compare tim...
, 1998
"... The presented technical report is a preliminary English translation of selected revised sections from the first part of the book Theoretical Issues of Neural Networks [75] by the first author
which represents a brief introduction to neural networks. This work does not cover a complete survey of the ..."
Cited by 5 (0 self)
The presented technical report is a preliminary English translation of selected revised sections from the first part of the book Theoretical Issues of Neural Networks [75] by the first author which
represents a brief introduction to neural networks. This work does not cover a complete survey of the neural network models but the exposition here is focused more on the original motivations and on
the clear technical description of several basic type models. It can be understood as an invitation to a deeper study of this field. Thus, the respective background is prepared for those who have not
met this phenomenon yet so that they could appreciate the subsequent theoretical parts of the book. In addition, this can also be profitable for those engineers who want to apply the neural networks
in the area of their expertise. The introductory part does not require deeper preliminary knowledge, it contains many pictures and the mathematical formalism is reduced to the lowest degree in the
first c...
- IEEE Transactions on Systems, MAN, and Cybernetics -PARTB: Cybernetics , 2003
"... Abstract—In this paper, a novel methodology called a reference model approach to stability analysis of neural networks is proposed. The core of the new approach is to study a neural network
model with reference to other related models, so that different modeling approaches can be combinatively used ..."
Cited by 2 (0 self)
Abstract—In this paper, a novel methodology called a reference model approach to stability analysis of neural networks is proposed. The core of the new approach is to study a neural network model
with reference to other related models, so that different modeling approaches can be combinatively used and powerfully cross-fertilized. Focused on two representative neural network modeling
approaches (the neuron state modeling approach and the local field modeling approach), we establish a rigorous theoretical basis on the feasibility and efficiency of the reference model approach. The
new approach has been used to develop a series of new, generic stability theories for various neural network models. These results have been applied to several typical neural network systems
including the Hopfield-type neural networks, the recurrent back-propagation neural networks, the BSB-type neural networks, the bound-constraints optimization neural networks, and the cellular neural
networks. The results obtained unify, sharpen or generalize most of the existing stability assertions, and illustrate the feasibility and power of the new method. Index Terms—Local field neural
network model, reference model approach, stability analysis, static neural network model. I.
- IEEE TRANS. ON SYSTEMS MAN AND CYBERNETICS, PART B , 1996
"... In this paper, we explore the dynamical features of a neural network model which presents two types of adaptative parameters : the classical weights between the units and the time constants
associated with each artificial neuron. The purpose of this study is to provide a strong theoretical basis for ..."
Cited by 2 (0 self)
In this paper, we explore the dynamical features of a neural network model which presents two types of adaptative parameters : the classical weights between the units and the time constants
associated with each artificial neuron. The purpose of this study is to provide a strong theoretical basis for modeling and simulating dynamic recurrent neural networks. In order to achieve this, we
study the effect of the statistical distribution of the weights and of the time constants on the network dynamics and we make a statistical analysis of the neural transformation. We examine the
network power spectra (to draw some conclusions over the frequential behavior of the network) and we compute the stability regions to explore the stability of the model. We show that the network is
sensitive to the variations of the mean values of the weights and the time constants (because of the temporal aspects of the learned tasks). Nevertheless, our results highlight the improvements in
the network dynamics d...
, 1995
"... hapitre III.3) et enfin tracer les grandes lignes du systme d'apprentissage envisag dans le cadre de l'apprentissage des modles physiques (chapitre III.4). 159 III.1 Agir pour apprendre dans les
rseaux connexionnistes III.1.1 Gnralits Un rseau supervis reoit pendant l'apprentissage des coupl ..."
Cited by 1 (0 self)
...chapter III.3) and finally to sketch the broad outline of the learning system envisaged for learning physical models (chapter III.4). 159 III.1 Acting in order to learn in connectionist networks III.1.1 Generalities During learning, a supervised network receives (input, output) pairs. These can come from the environment in different ways, and depending on the case, active learning takes various forms. In a number of cases, the network is directly connected to the environment: an input generator sends random inputs, or inputs that systematically sweep the environment, to the network and to the environment; the environment then returns an output, which is the desired output for the network. Acting in order to learn consists in having the connectionist system itself generate the actions, in a way that makes learning easier (see figure III.1). The connectionist system then acts directly on the environment. The netw...
"... In this two-part series, the writers investigate the role of artificial neural networks (ANNs) in hydrology. ANNs are gaining popularity, as is evidenced by the increasing number of papers on
this topic appearing in hydrology journals, especially over the last decade. In terms of hydrologic applica ..."
In this two-part series, the writers investigate the role of artificial neural networks (ANNs) in hydrology. ANNs are gaining popularity, as is evidenced by the increasing number of papers on this
topic appearing in hydrology journals, especially over the last decade. In terms of hydrologic applications, this modeling tool is still in its nascent stages. The practicing hydrologic community is
just becoming aware of the potential of ANNs as an alternative modeling tool. This paper is intended to serve as an introduction to ANNs for hydrologists. Apart from descriptions of various aspects
of ANNs and some guidelines on their usage, this paper offers a brief comparison of the nature of ANNs and other modeling philosophies in hydrology. A discussion on the strengths and limitations of
ANNs brings out the similarities they have with other modeling approaches, such as the physical model.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2309446","timestamp":"2014-04-23T20:12:15Z","content_type":null,"content_length":"37512","record_id":"<urn:uuid:ade33dbb-07a1-4749-a465-0c3aac8d5496>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modeling Human Performance
From Running tips for everyone from beginners to racing marathons and ultramarathons
There are a number of approaches to modeling how training changes performance and these models have some obvious value in optimizing a training routine, especially for tapering.
1 The Models
The models assume that a given training stress, also known as a "training impulse" or TRIMP (TRaining IMPulse), has both a positive and a negative effect. The positive effect is called "fitness", and the
negative effect is called "fatigue", and they are combined to provide a value of "performance". This article looks at three models:
• Banister. The initial work on modeling human performance was made by Eric Banister in 1975^[1] and verified by dozens of studies including one on runners^[2].
• Busso. The work of Banister was refined by Thierry Busso to modify the Banister model to at least partly account for how Training Monotony impacts recovery^[3].
• TSB. The Training Stress Balance model is a simplification of the Banister model by Andrew Coggan which looks at just the relative changes in performance^[4].
The TSB model will be examined first, as it's the simplest to understand.
2 The TSB Model
Training Stress Balance (TSB) uses the terms Chronic Training Load (CTL) for "fitness", Acute Training Load (ATL) for "fatigue" and Training Stress Balance (TSB) for "performance". As with all the
models, both CTL and ATL are based on TRIMP, with the effect of a given workout decaying over time, but the effect lasts longer on CTL than on ATL. In addition, the TSB method assumes the immediate effect of a workout is greater on ATL than on CTL.
2.1 TSB for a Single Workout
The image above shows the effect of a single workout on the Training Stress Balance. You can see that the workout creates a peak in both ATL and CTL. The peak is greater in ATL, but it decays more quickly than in CTL. The Training Stress Balance is CTL-ATL, and you can see it initially go negative, indicating a reduced ability to perform, then rise to be positive as CTL becomes greater than ATL. This green
TSB line is the same as that seen in Supercompensation.
2.2 TSB for Multiple Workouts
The effect of a continuous series of identical workouts on TSB is shown in the image above. The ATL rises sharply and reaches a steady state, with CTL rising more slowly until it eventually catches up. The TSB (CTL-ATL) goes negative, then slowly returns to zero.
2.3 TSB Constants
Because the TSB model is simpler than the other two, it only requires the number of days that ATL and CTL decay over. The typical starting point is 7 days for ATL and 42 days for CTL.
2.4 Calculation Details
Understanding these details is not required for using the TSB, but you may find them interesting.
2.4.1 The TSB Formulas
The basic formula for calculating ATL/CTL uses an Exponential moving average that is calculated like this:
ATL[today] = TRIMP * λ[a] + (1 – λ[a]) * ATL[yesterday]
CTL[today] = TRIMP * λ[f] + (1 – λ[f]) * CTL[yesterday]
As you can see this is an iterative approach, calculating the value for the oldest workout, then iterating through each subsequent workout. The calculation for λ[a] and λ[f] is:
λ[a] = 2/(N[a]+1)
λ[f] = 2/(N[f]+1)
Where N[a] and N[f] are the time decay constants, normally 7 and 42 days respectively. The half-life for a given value of N is N/2.8854, so 7 days gives a half-life of 2.4 days and 42 days gives a half-life of 14.5 days.
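A minimal Python sketch of this iteration (not from the original article; the constant-TRIMP example mirrors section 2.2, and the 7-day and 42-day defaults are the starting points suggested above):

def tsb_series(trimp, n_a=7, n_f=42):
    """Return a list of (ATL, CTL, TSB) tuples for a daily TRIMP sequence."""
    lam_a = 2.0 / (n_a + 1)   # lambda[a], the ATL (fatigue) smoothing factor
    lam_f = 2.0 / (n_f + 1)   # lambda[f], the CTL (fitness) smoothing factor
    atl = ctl = 0.0
    results = []
    for w in trimp:           # oldest workout first, as described above
        atl = w * lam_a + (1 - lam_a) * atl
        ctl = w * lam_f + (1 - lam_f) * ctl
        results.append((atl, ctl, ctl - atl))   # TSB = CTL - ATL
    return results

# Example: 60 identical daily workouts of 50 TRIMP each.
for day, (atl, ctl, tsb) in enumerate(tsb_series([50] * 60), start=1):
    if day % 10 == 0:
        print(f"day {day:2d}: ATL {atl:5.1f}  CTL {ctl:5.1f}  TSB {tsb:6.1f}")

With a constant load, ATL approaches its steady-state value within a couple of weeks while CTL takes much longer to catch up, so TSB starts negative and drifts back toward zero, matching the behaviour described in section 2.2.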
2.5 Limitations of the TSB model
The main limitation of the TSB model can be seen in the graph above, which shows that training reduces performance and never improves it. The TSB model only indicates an improved performance when
training is reduced, as shown below.
3 The Banister Model
The Banister model is a little more complex mathematically than TSB, using an exponential decay to model the effects of training stress. Unlike the TSB model, the Banister model will show that a
steady state of training will improve performance. Of the available models, Banister's is the most experimentally validated.
3.1 The Banister Formula
The formula below looks rather complex, but is actually fairly simple to implement.
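The figure containing the formula is not reproduced here; written in the same recursive style as the pseudo-code that follows, the model can be reconstructed as (with TRIMP[today] the day's training load and r1, r2, k1, k2 the model constants):
fitness[today] = fitness[yesterday] * exp(-1/r1) + TRIMP[today]
fatigue[today] = fatigue[yesterday] * exp(-1/r2) + TRIMP[today]
performance[today] = k1 * fitness[today] - k2 * fatigue[today]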
Here it is as some simple pseudo-code for those that want to implement it.
fitness = 0;
fatigue = 0;
TRIMP[] = getDailyTRIMP();                      // array of TRIMP values, one per day, oldest first
for (i = 0; i < count(TRIMP); i++) {
    fitness = fitness * exp(-1/r1) + TRIMP[i];  // decay yesterday's fitness, add today's load
    fatigue = fatigue * exp(-1/r2) + TRIMP[i];  // same for fatigue, with the shorter time constant r2
    performance = fitness * k1 - fatigue * k2;  // modeled performance for day i
}
// r1, r2, k1 and k2 are constants that must be fitted to the individual athlete (see "Flaws in the models").
4 The Busso Model
Thierry Busso created a refinement of the Banister model to try to take account of how an increase in Training Monotony also increases fatigue. This modification of Banister changes k2 from being a constant to being a quantity that rises with recent training stress and then decays exponentially.
The value of k2 uses a similar exponential decay:
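The figure giving k2 is not reproduced here. In Busso's published formulation (a reconstruction from the cited paper, not from the original page), k2 is driven by recent training load through an extra gain k3 and time constant r3, in the same recursive style as the formulas above:
k2state[today] = k2state[yesterday] * exp(-1/r3) + TRIMP[today]
k2[today] = k3 * k2state[today]
A block of heavy, monotonous training therefore temporarily raises k2, and with it the fatigue term, which is how the model penalizes Training Monotony.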
5 Flaws in the models
There are a number of practical flaws in these models.
• The main flaw with these models is that the values for the constants used are specific to each individual, and possibly to the particular training regime. The verification of the Banister and
Busso models reverse engineered the constants so that the model accurately predicted the performance changes.
• The values of the constants and the resulting predictions also depend on the performance metric being used.
• Another flaw is that the models assume a single value that represents the adaptation underlying performance. In practice, different types of adaptation occur at different rates.
• The TSB and Banister models ignore Overtraining & Training Monotony and predict that any possible workload produces improved fitness.
• The TSB model assumes that training does not produce an improved performance. Instead, the TSB model only detects a change in training level, with reduced training improving performance and increased training reducing it.
6 Comparison of the Models
• The TSB model is simpler than the other two models as it only has the two decay parameters.
• The Banister model has more experimental verification than the other models.
• The Busso model is the most complex, but is the only one that allows for Training Monotony.
7 Implementing the Models
These three models are implemented in the free SportTracks Dailymile Plugin.
8 References
1. ↑ Thomas W. Calvert, Eric W. Banister, Margaret V. Savage, Tim Bach, A Systems Model of the Effects of Training on Physical Performance, IEEE Transactions on Systems, Man, and Cybernetics, volume
SMC-6, issue 2, 1976, pages 94–102, ISSN 0018-9472, doi 10.1109/TSMC.1976.5409179
2. ↑ RH. Morton, JR. Fitz-Clarke, EW. Banister, Modeling human performance in running., J Appl Physiol, volume 69, issue 3, pages 1171-7, Sep 1990, PMID 2246166
3. ↑ T. Busso, Variable dose-response relationship between exercise training and performance, Med Sci Sports Exerc, volume 35, issue 7, pages 1188-95, Jul 2003, doi 10.1249/01.MSS.0000074465.13621.37, PMID 12840641
4. ↑ TrainingPeaks, http://www.peaksware.com/articles/cycling/the-science-of-the-performance-manager.aspx, Accessed on 11 May 2013
|
{"url":"http://www.fellrnr.com/wiki/Training_Stress_Balance","timestamp":"2014-04-19T02:25:08Z","content_type":null,"content_length":"33066","record_id":"<urn:uuid:9dc80046-6baa-414e-af22-3d27b72af63b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometric symmetries
March 26th 2013, 07:38 AM #1
Feb 2013
This question tests your ability to describe symmetries geometrically and to represent them as permutations in cycle form. It also tests your understanding of conjugacy classes and their
relationship to normal subgroups.
The figure for this question is a prism with three identical rectangular faces and an equilateral triangle at the top and base. The faces of the prism have been numbered (1 and 4 at the top and base, and the sides 2, 3 and 5, with 5 being the back face) so that we may represent the group G of all symmetries of the prism as permutations of the set {1,2,3,4,5}.
a) Describe geometrically the symmetries of the prism represented in cycle form by (14)(23) and (25).
b) Write down all the symmetries of the prism in cycle form as permutations of {1,2,3,4,5}, and describe each symmetry geometrically.
c) Write down the conjugacy classes of G.
d) Determine a subgroup of G of order 2, a subgroup of order 3, and a subgroup of order 4. In each case, state whether or not your choice of subgroup is normal, justifying your answer.
|
{"url":"http://mathhelpforum.com/advanced-math-topics/215642-geometric-symmetries.html","timestamp":"2014-04-18T01:28:47Z","content_type":null,"content_length":"29988","record_id":"<urn:uuid:1cbe2b0b-de10-4a35-87c5-18dc0d94c599>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
|
prove the identity: \[\frac{\cos x + \sin x \tan x}{\sin x \sec x} = \csc x\]
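A possible verification, not part of the original page (a standard simplification, assuming \(\cos x \neq 0\) and \(\sin x \neq 0\)):
\[\frac{\cos x + \sin x \tan x}{\sin x \sec x} = \frac{\cos x + \frac{\sin^2 x}{\cos x}}{\frac{\sin x}{\cos x}} = \frac{\cos^2 x + \sin^2 x}{\cos x} \cdot \frac{\cos x}{\sin x} = \frac{1}{\sin x} = \csc x\]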
|
{"url":"http://openstudy.com/updates/4f0671bfe4b075b566529598","timestamp":"2014-04-18T19:03:15Z","content_type":null,"content_length":"44670","record_id":"<urn:uuid:9c8914b4-2fcf-4ed5-b0c9-469200eeb888>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Decimal Addition & Subtraction Worksheets
Grade Level Estimation: 5th (grade level may vary depending on location and school curriculum; the estimate is for schools in the USA).
Add and subtract decimals with tenths, hundredths, and thousandths place values.
Add Decimals: Tenths Free
Solve twelve decimal addition problems. All problems contain whole numbers or decimals with a tenths place only. Students write each problem vertically and find the sums.
Subtract Decimals: Tenths Member
This worksheet contains a dozen subtraction problems. Students re-write each problem vertically and solve. All problems contain whole numbers or decimals with a tenths place.
Add & Subtract: Tenths Member
This printable includes a mix of addition and subtraction problems for students to solve. Each problem needs to be re-written vertically and students must line up the columns properly.
Decimal Balance Scales Member
Balance each scale by adding and subtracting basic decimal numbers.
Add: Hundredths Free
On this worksheet students are given two decimal addends for each problem. They re-write them vertically and solve. All numbers are whole numbers, or are decimals which contain a tenths or hundredths place.
Subtract: Hundredths Member
Solve a dozen subtraction problems with decimal quantities. Problems must be re-written vertically to ensure students understand the place value system.
Add & Subtract: Hundredths Member
This worksheet contains a mixture of addition and subtraction problems with decimal numbers. Requires students to re-write each horizontal problem in vertical notation.
Decimal Input-Output Boxes Member
Complete each input-outbox table. All questions require students to count by quarters (example: 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, and so on).
Grid Paper: Decimal Add Subtract Member
Rewrite each decimal addition or subtraction problem vertically on the graph paper, then solve. The grid paper will help students line up columns correctly.
Add: Thousandths Free
This page contains a dozen decimal addition problems with numbers that include a thousandths place. Students re-write each problem vertically and solve.
Addition Bakery Member
Find the total cost of the bakery items shown. Add the money amounts and write the sums.
Addition: Toy Shop Member
Add to find the total prices of the toys shown on the shelf. This page has four word problems that require students to use picture clues to solve.
Subtraction: Toy Shop Member
Calculate the change for each transaction at the toy shop.
Subtraction: Money Member
Find the difference for each subtraction problem.
Subtraction: Money Across Zero Member
Find the answer to each problem. Requires students to subtract money amounts with zeroes in the first quantity.
Making Change (Word Problems) Member
Subtract money amounts across zero to make change. This page includes several word problems.
See also:
Decimal Worksheets
Worksheets for introducing decimal concepts, comparing decimals, and ordering decimals.
Decimal Addition and Subtraction
|
{"url":"http://www.superteacherworksheets.com/decimal-add-subtract.html","timestamp":"2014-04-16T07:16:12Z","content_type":null,"content_length":"56577","record_id":"<urn:uuid:5047b0ca-72a6-49c5-87a7-c37fc794f41a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Institute for Mathematics and its Applications (IMA)
- Societally Relevant Computing
Simulation and computation play a critical role in important societal problems. Examples include the role of anthropogenic emissions on climate and ocean circulation; the prediction of earthquakes
and tsunamis; the prediction of paths and storm surges of hurricanes; designing infrastructure that is capable of withstanding disasters, such as floods and terrorist attacks; the design and long
term durability of major infrastructure, such as bridges, tunnels, etc; the spread and containment of disease and epidemics, etc. These systems exhibit extreme complexity: there are a myriad of
different issues that are critical and must be accurately addressed. Models contain and interface large numbers of physical effects. All of these problems represent grand challenge computer problems
that require pushing the limits of technology, both with regards to algorithms and machines as well as the development of the physical models themselves. Critical issues include the development and
coupling of algorithms for multiphysics, multiscale applications, verification and validation of these computer models, and quantifying their predictive reliability, given potentially large
uncertainty in numerical accuracy, code reliability and ultimately the models themselves. How good is good enough? For any complex phenomenon, as data and computing power increases, more and more
physical effects can always be included, and larger computations can always be designed. However there will always be substantial uncertainty and numerical error to overcome. This workshop will have
two main parts: first we will overview the major computational efforts in a number of different problems of societal importance, including climate, atmospheric pollution, floods and earthquakes,
alternative energy sources and carbon sequestration. Speakers will assess the state of the art, highlighting the major areas of current research in algorithms, data collection and integration, and
error and uncertainty estimation. The second part will include researchers working on mathematical foundations of computer architectures, algorithms, error estimation and uncertainty quantification
methods. This will repeat (in small doses) some of the discussion previously taking place in this annual year but the hope is that we will excite a synergy between the mathematicians and the
practitioners to find areas where real progress can be made.
|
{"url":"http://ima.umn.edu/2010-2011/W4.11-15.11/","timestamp":"2014-04-21T02:21:16Z","content_type":null,"content_length":"47997","record_id":"<urn:uuid:48d3e4b1-4e5a-4976-9d09-b629a7189514>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wittmann Statistics Tutors
...I assisted many students (including my peers) on study skills while I was in nursing school and now assist the students I tutor. I have implemented many strategies that I have utilized myself
that helped me excel in my classes and studies. I am a registered nurse licensed in the state of Arizona.
10 Subjects: including statistics, nursing, public speaking, study skills
...This is done by providing them with thoughtful sequential questions. This road map I take with the students will enable them to continue the thought process without me there. That is, when it
comes to the homework and test on their own, they should ask themselves, "What questions did Dr.
20 Subjects: including statistics, chemistry, physics, calculus
...My goal is to have everyone know math well because math is part of our everyday lives and serves as logical reasoning to solve every problem we have. Throughout all of my education math has
been the easiest classes for me to earn A's in. Math comes naturally to me but on top of that I love to s...
21 Subjects: including statistics, chemistry, calculus, physics
...If you are looking for any tutoring from high school math and science up to Calculus and college level Physics I would be happy to help. I have very little official tutoring history, but I have
had many unpaid tutoring opportunities. I have helped students in Math in High School and assisted students in several classes in college.
11 Subjects: including statistics, calculus, physics, geometry
...Letters of reference available too.I am certified by the Arizona Department of Education as "Highly Qualified" to teach Middle Grades Math. I have tutored students for the AIMs test,
occupational math and college level math. I hold a teaching certificate for K-8 elementary.
24 Subjects: including statistics, chemistry, English, writing
|
{"url":"http://www.algebrahelp.com/Wittmann_statistics_tutors.jsp","timestamp":"2014-04-17T18:30:01Z","content_type":null,"content_length":"24962","record_id":"<urn:uuid:f77ce479-d7d3-4ac1-8d1a-57a9b0d17926>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra 1 Concepts and Skills
Are you confused about what is taught in an Algebra 1 curriculum?
This page will give you an idea of the Algebra 1 concepts and skills that are taught and the order in which they are presented.
This is the order in which I like to teach the units; however, not all curriculum is set up exactly the same. So, please use this as a guide, but remember that there are many different Algebra 1
textbooks and curricula and each is set up a little different.
Algebra 1
Introductory Unit: Review of Essential Pre-Algebra Skills
Integers and Absolute Value
Order of Operations
Distributive Property
Evaluating Algebraic Expressions
Simplifying Algebraic Expressions
Unit 1: Solving Equations
Solving One-Step Equations
Solving Two-Step Equations
Solving Equations with the Distributive Property
Solving Equations with Fractions
Solving Literal Equations
Solving Equations With Variables on Both Sides
Making Connections with Real World Problems (Word Problems)
Unit 2: Graphing Equations
Graphing Using a Table of Values
Calculating Slope
Graphing Slope
Using Slope Intercept Form
Rate of Change (Includes a lot of word problems)
Standard Form Equations and X and Y Intercepts (Includes a lot of word problems)
Unit 3: Writing Equations
Writing Equations in Slope Intercept Form
Writing Equations in Standard Form
Writing Equations Given Slope and a Point (Includes word problems)
Writing Equations Given Two Points (Includes word problems)
Unit 4: Systems of Equations
Graphing Systems of Equations
Substitution Method
Linear Combinations Method (Addition Method)
Making Connections with Real World Problems (Word Problems)
Unit 5: Inequalities
Inequalities in One Variable (Including Word Problems)
Graphing Linear Inequalities
Systems of Inequalities
Making Connections with Real World Problems (Word Problems)
Unit 6:Relations and Functions
Introduction to Relations and Functions
Identifying Functions (Vertical Line Test)
Evaluating Functions
Linear Functions
Quadratic Functions
Step and Discontinuous Functions
Unit 7:Exponents and Monomials
Review of Exponents
Laws of Exponents
Multiplying and Dividing Monomials
Working with Negative and Zero Exponents
Scientific Notation
Unit 8: Quadratic Equations
Square Roots
Solving Simple Quadratic Equations
Using the Pythagorean Theorem
Making Connections with Real World Problems (Word Problems)
Unit 9: Introduction to Polynomials
Adding and Subtracting Polynomials
Multiplying Polynomials
Solving Quadratic Equations by Factoring
Making Connections with Real World Problems (Word Problems)
I teach two other supplemental units as well during the year. One is on Statistics and the other is Probability. I stick these units in when either the students need a "break" or when I don't have
enough time to complete a full Unit (i.e. before Winter or Spring Break). These units are necessary because they are heavily tested on state and standardized tests!
Supplemental Unit: Statistics
Measures of Central Tendency (Mean, Median, and Mode)
Stem and Leaf Plots
Box and Whisker Plots
Time Lines, Circle Graphs
Supplemental Unit: Probability
Expected Value
Independent and Dependent Events
Geometric Probability
Hopefully this gives you an indication of the different Algebra 1 concepts and skills that are typically taught! Just remember, the sequence of units can vary among different textbooks!
|
{"url":"http://www.algebra-class.com/algebra-1-concepts-and-skills.html","timestamp":"2014-04-20T21:46:12Z","content_type":null,"content_length":"28342","record_id":"<urn:uuid:53609a5a-3127-4e3e-b660-d7469e84ad14>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bolzano-Weierstrass property
February 13th 2011, 07:58 AM
Bolzano-Weierstrass property
Why would the interval [0, infinity) not have the Bolzano-Weierstrass property, which states that a set of real numbers E is closed and bounded if and only if every sequence of points chosen from the set has a subsequence that converges to a point that belongs to E?
February 13th 2011, 08:11 AM
Is the set $[0,\infty)$ bounded?
Let $\{x_n=n:n\in\mathbb{Z}^+\}$.
Does that sequence have a convergent subsequence?
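A brief completion of the hint (not part of the original thread): $[0,\infty)$ is closed but not bounded, and the sequence $x_n=n$ lies in the set while every one of its subsequences diverges to $+\infty$. So no subsequence converges to a point of the set, the sequential-compactness condition fails, and the property does not apply to $[0,\infty)$ precisely because boundedness is missing.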
|
{"url":"http://mathhelpforum.com/differential-geometry/171112-bolzano-weierstrass-property-print.html","timestamp":"2014-04-19T07:50:13Z","content_type":null,"content_length":"4721","record_id":"<urn:uuid:d42b24d4-fe4e-44d6-9dad-6d11771bf8e0>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Supplementary Video for the Instructional Improvement Program Grant Application: "Multi-Media Courseware in Robotics"
Prof. Katie Byl, ECE Dept. (katiebyl@ece.ucsb.edu)
Rovio: a 3-wheeled robot
Below is a short video of the Rovio when commanded to move in a straight line to the left.
The robot has intentionally been aligned with the floorboards at the start, to make it easier to tell if it is traveling in a straight line. Note that it actually turns significantly. In one anticipated Rovio-based project, students will design feedback control to correct for these errors and track desired motion plans much more accurately.
Underactuated dynamic systems
One exciting - and often unavoidable - challenge in robotics is the problem of underactuation: sometimes there are more degrees of freedom in a system than we can control directly with the actuators we have. Below are several examples of underactuated systems.
Hovercraft - 3 degrees of freedom, but only 2 actuators.
Below is an animation of a "hovercraft" which has 3 degrees of freedom but only 2 thrusters (the actuators), making it underactuated. The degrees of freedom are: (1) x position, (2) y position, and
(3) the rotation angle, theta.
Here, we can control any 2 out of 3 degrees of freedom, but not all 3. Thus, if we want to follow an (x,y) path, the angle theta is left uncontrolled, and the hovercraft spins. The Rovio, by
comparison, has the same 3 degrees of freedom (x, y, and theta), but it has 3 actuators (the 3 wheels), making it fully actuated. Unlike the hovercraft below, the Rovio can therefore follow a desired
motion path in x, y, and theta.
Acrobot - 2 degrees of freedom, but only 1 actuator (at "elbow").
The acrobot is a two-link pendulum with a single actuator at the elbow; the goal is to (1) bring it into an upright position and (2) stabilize it there. Below are two methods for achieving the first goal of pumping energy into the system to raise both links; an additional, linearized controller can then be added for stabilization.
The first video shows colocated partial feedback linearization (PFL), while the second demonstrates non-colocated PFL.
Note that while the second method is much faster at getting upright, it also requires a HUGE amount of power initially to do so! It may work in simulation, but would not work on practical hardware.
Also note the long-term behavior of each approach: this method does NOT stabilize the pendulum at the top; it only raises it into a position where a second control algorithm could take over to do so.
Marc Raibert's one-legged hopper.
The final simulation example shows the Raibert hopper. It is impressive that Raibert's simple balance algorithms can stabilize this underactuated system. However, the resulting speed control is not
very accurate. Note that Raibert's control ideas, which were first developed in the 1980s, are the basis of the control for the BigDog robot that you may have seen on YouTube in recent years.
|
{"url":"http://robotics.ece.ucsb.edu/IIPgrant/IIPgrant_videos.html","timestamp":"2014-04-18T00:12:34Z","content_type":null,"content_length":"9918","record_id":"<urn:uuid:9fb354dd-274d-4653-a0a7-95dc676c6d20>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Focal length of concave lens using convex lens
# A convex lens of focal length 10 cm forms a real image of an object placed at a distance of 20 cm from it. Midway between the convex lens and the position of the image a thin concave lens is
introduced. The image formed now is at a distance of 5 cm away from the earlier position. What is the focal length of the concave lens?
I solved it in the following way:
Let u, v and f be the object distance, image distance and the focal length of the convex lens.
Here u = 20 cm, f = + 10 cm
1/v = 1/f – 1/u
1/v= 1/10 – 1/20
v = 20 cm
Case- 2
Let u, v and f be the object distance, image distance and the focal length of the concave lens.
Here u = - 10(virtual object), v = + 15 cm
1/f = -1/10 + 1/15
f = - 30 cm
But the answer given in my book is – 5 cm.
|
{"url":"http://www.physicsforums.com/showthread.php?t=123335","timestamp":"2014-04-21T07:23:52Z","content_type":null,"content_length":"29303","record_id":"<urn:uuid:51f8eb99-966e-40d9-a519-c2bcf18c89de>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jackson Heights Trigonometry Tutor
...II and Trigonometry is a fairly advanced course that requires expertise and experience. I have had years of experience teaching it to students at all levels. Even some certified math teachers
are not fluent in this subject.
9 Subjects: including trigonometry, geometry, algebra 2, algebra 1
...I scored in the 99th percentile on the GRE in Quantitative Reasoning (perfect 170,) and the 96th percentile in Verbal (166). I am a successful tutor because I have a strong proficiency in the
subject material I teach and a patient and creative approach that makes any subject simple to understand...
21 Subjects: including trigonometry, calculus, statistics, geometry
...I think my approach to teaching music differs from the norm in that it is more conceptual and ear-based. I can teach how to read music, analyze chords, etc., but I'd rather teach students how
to listen to it and think about it. I have been playing guitar for almost 14 years.
22 Subjects: including trigonometry, calculus, elementary math, precalculus
...This started as a twice a week commitment with a couple of students and quickly grew into a full fledged undertaking with five to eight tutoring sessions a week for 4 or 5 students. I always
enjoyed helping students achieve their goals and found myself caring deeply about their success. I have...
19 Subjects: including trigonometry, chemistry, calculus, geometry
...I continue to actively use Russian in my everyday life. Besides, my daughter attends the School at Russian Mission in the United Nations (she is in 7th grade), and I help her with her homework,
including Russian language and literature. I took a Linear Algebra course in Moscow Institute of Physics and Technology and used it for my Master's thesis in Optimization.
24 Subjects: including trigonometry, calculus, physics, Russian
|
{"url":"http://www.purplemath.com/Jackson_Heights_Trigonometry_tutors.php","timestamp":"2014-04-21T00:18:43Z","content_type":null,"content_length":"24456","record_id":"<urn:uuid:e422b1b0-7225-4d86-9056-3b82db31df28>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
User Fermion
bio website
visits member for 2 years, 1 month
seen Mar 23 at 2:39
stats profile views 307
I am a math PhD student and interested in many things: (non-commutative) algebraic geometry, arithmetic geometry and mathematical physics
Mar Non-separatedness of moduli space of sheaves
1 comment It seems to me that graded quotients (with obvious filtration) of the two bundles are different. One consists of two $\mathcal{O}$ and the other consists of $\mathcal{O}(-1)$ and $\
1 accepted Non-separatedness of moduli space of sheaves
1 asked Non-separatedness of moduli space of sheaves
24 accepted How to obtain $Z(\Sigma_f)=\text{Trace}\ \Sigma(f)$ in TQFT?
Feb How to obtain $Z(\Sigma_f)=\text{Trace}\ \Sigma(f)$ in TQFT?
24 comment Thank you very much, abx!
Feb How to obtain $Z(\Sigma_f)=\text{Trace}\ \Sigma(f)$ in TQFT?
24 comment Can I ask why your paring is the natural one?
Feb How to obtain $Z(\Sigma_f)=\text{Trace}\ \Sigma(f)$ in TQFT?
24 comment Thanks for the answer, but I don't understand the last sentence. Where did you use the associativity axiom and how did you get the trace?
24 asked How to obtain $Z(\Sigma_f)=\text{Trace}\ \Sigma(f)$ in TQFT?
22 awarded Yearling
Dec Explicit examples presheaves associated to higher direct images which fail to be sheaves
17 comment As to the first case, why don't the local sections glue to a global one? What's wrong with $H^1(X,\mathbb{Z})$?
5 awarded Nice Question
Nov How to determine $O(L)$ is finite or not?
3 comment You are right. $2$ should be $k$ in the comment above. Thank you for the clarification.
Nov How to determine $O(L)$ is finite or not?
3 comment I thought any automorphism of $U(k)$ is induced by that of $U$. Am I wrong?
Nov How to determine $O(L)$ is finite or not?
3 comment Sorry for the confusion. I denote by $U(k)$ the hyperbolic lattice mulptiplied by $k$. So there are basis $e,f$ with $e^2=f^2=(e,f)-2=0$.
Nov How to determine $O(L)$ is finite or not?
2 comment Isn't $O(U(K)))$ finite?
Oct How to determine $O(L)$ is finite or not?
31 comment Do you mean $U(k)$ by the hyperbolic plane?
Sep Classification of involutions of the lattice $H\oplus H(k)^{\oplus2}$ for $k=5,6$?
4 revised added 24 characters in body; edited tags
1 asked Moduli space of K3 surfaces and Bogomolov-Tian-Todorov theorem
Aug Classification of involutions of the lattice $H\oplus H(k)^{\oplus2}$ for $k=5,6$?
31 revised added 70 characters in body
Aug asked Classification of involutions of the lattice $H\oplus H(k)^{\oplus2}$ for $k=5,6$?
|
{"url":"http://mathoverflow.net/users/21640/fermion?tab=activity","timestamp":"2014-04-20T01:26:50Z","content_type":null,"content_length":"45401","record_id":"<urn:uuid:03fe7f65-d6d6-4416-8490-9142c516cd99>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Recently Active 'riemann-zeta-function' Questions
The Riemann zeta function is the function of one complex variable $s$ defined by the series $\zeta(s) = \sum_{n \geq 1} \frac{1}{n^s}$ when $\operatorname{Re}(s)>1$. It admits a meromorphic
continuation to $\mathbb{C}$ with only a simple pole at $1$. This function satisfies a functional equation ...
|
{"url":"http://mathoverflow.net/questions/tagged/riemann-zeta-function?sort=active&pagesize=15","timestamp":"2014-04-16T07:41:10Z","content_type":null,"content_length":"181237","record_id":"<urn:uuid:a295464c-aebe-45f9-b933-c4f6f2346a72>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summon the Heroes-1
November 28th 2009, 04:49 AM #1
Junior Member
Sep 2007
From now on, I will post some problems to play with. Once a problem has been worked out, the next one will be posted.
Let's enjoy the great game...
Have fun!
If $a_{n}>0$ for all $n \in \mathbb{N}^+$,
show that:
$\lim_{n \rightarrow \infty}\frac{a_{n}}{(1+a_{1})(1+a_{2})\cdots(1+a_{n})}=0$
Nice one!
Elegant and powerful !!!
Last edited by Xingyuan; November 28th 2009 at 08:37 AM.
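A sketch of one standard argument (not part of the original thread): write $P_n=(1+a_1)(1+a_2)\cdots(1+a_n)$ with $P_0=1$. Since every $a_i>0$,
$\frac{a_n}{P_n}=\frac{(1+a_n)-1}{P_n}=\frac{1}{P_{n-1}}-\frac{1}{P_n}$.
The sequence $1/P_n$ is positive and decreasing, hence convergent, so the consecutive differences tend to $0$, which is exactly the claimed limit.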
|
{"url":"http://mathhelpforum.com/calculus/117153-summon-heroes-1-a.html","timestamp":"2014-04-16T15:00:20Z","content_type":null,"content_length":"36909","record_id":"<urn:uuid:b9d97964-d8ec-41a2-8bab-5532f3de2eff>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hypothesis - Power
October 1st 2012, 11:38 AM #1
Junior Member
Jan 2011
Hypothesis - Power
If x1,x2,...,xn ~ Normal(theta,1) and B(theta) is the probability of rejecting the null hypothesis H0 : theta <= 0 in favor of the alternative hypothesis H1: theta > 0. Consider a statistical
test that rejects H0 if x/(1/sqrt(n)) > c, where c is some positive number. Show that the power is B(theta) = P(Z > c - theta/(1/sqrt(n))) where Z is a standard normal variable.
Then if theta = 1, find a sample size n and a value c that will allow the probability of a type 1 error to be at most .05 and the probability of rejecting the null hypothesis to be at least 0.8
if theta = 1.
October 2nd 2012, 01:03 AM #2
MHF Contributor
Sep 2012
Re: Hypothesis - Power
Hey liedora.
Can you show us what you have tried? Can you state the definition of your Type I errors with respect to the significance region (in terms of your c)?
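For reference, a sketch of the standard argument (not part of the original thread), writing $\bar{x}$ for the sample mean: under the model $\bar{x}\sim N(\theta,1/n)$, so $Z=\sqrt{n}(\bar{x}-\theta)$ is standard normal and
$B(\theta)=P_\theta\left(\frac{\bar{x}}{1/\sqrt{n}}>c\right)=P\left(Z>c-\frac{\theta}{1/\sqrt{n}}\right)$.
For the second part, the requirements become $B(0)=P(Z>c)\le 0.05$ and $B(1)=P(Z>c-\sqrt{n})\ge 0.8$; taking $c\approx 1.645$ then needs $\sqrt{n}\ge c+0.842\approx 2.49$, so $n=7$ (or any larger $n$) suffices.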
|
{"url":"http://mathhelpforum.com/advanced-statistics/204425-hypothesis-power.html","timestamp":"2014-04-19T02:03:39Z","content_type":null,"content_length":"31831","record_id":"<urn:uuid:3328a5b1-aadd-4462-9479-6fbe7a47775c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bremerton Geometry Tutor
...Please feel free to contact me for more information. As I would not need to cover transportation costs, my hourly rate will be lower and more consistent.*** I have a been tutoring for most of
my life, from kindergarten-aged children up to college and university-level classes. I mainly tutored accounting classes for five years at Green River Community College and Washington State
12 Subjects: including geometry, reading, accounting, ASVAB
...I have been using Access since 2001 and took a class in three levels of Access the same year. Since that time, I have revised and created three databases in Access. I also took courses in
Access in my grad program for my MBA in 2009.
39 Subjects: including geometry, reading, English, algebra 1
...I have worked as a laboratory chemist and as an instructor at Tacoma Community College for several years. I have also taught high school level sciences and mathematics. I have tutored or taught
for the past thirty years.
12 Subjects: including geometry, chemistry, algebra 1, algebra 2
...During this time I also volunteered with the YMCA Special Olympics program and am very comfortable working with special needs children.I am qualified to tutor Study Skills due to my time spent
in earning my A.A. and B.S. in Biology degree. In total I have earned 173 semester hours, and have been...
25 Subjects: including geometry, chemistry, physics, statistics
...Therefore, I have both teaching and tutoring experience, as well as hands-on expertise in the science field. My two biggest passions are science and education. I take pride in being a
full-blown nerd: I read biology books for fun, listen to science podcasts in my free time, and laugh at corny s...
9 Subjects: including geometry, chemistry, biology, algebra 1
|
{"url":"http://www.purplemath.com/Bremerton_Geometry_tutors.php","timestamp":"2014-04-16T04:23:04Z","content_type":null,"content_length":"23894","record_id":"<urn:uuid:79d2c54d-771b-45d3-ab29-35539a9167f0>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Anonymous on Sunday, October 3, 2010 at 7:07pm.
Media Disk, Inc. duplicates over a million 3.5” floppy disks each year by copying masters to stacks of blank disks. The company buys its blank stock from two different suppliers, A and B. The manager
has decided to check each supplier’s stock by counting the rejected disks in the next run from 5,000 that just arrived from supplier A and 4,500 that just arrived from supplier B. During the run, the
disk duplicators rejected 73 of A’s disks and 56 of B’s. State the appropriate hypotheses to test whether the proportions of defective disks from the two suppliers are the same. At a = 0.05, what can
Media Disk conclude? This question is due on day 6.
• statistics - MathGuru, Monday, October 4, 2010 at 6:52pm
Try a binomial proportion 2-sample z-test using proportions.
Ho: pA = pB
Ha: pA does not equal pB -->this is a two-tailed test (the alternate hypothesis does not show a specific direction).
The formula is:
z = (pA - pB)/√[pq(1/n1 + 1/n2)]
...where n represents the sample sizes, p is (x1 + x2)/(n1 + n2), and q is 1-p.
I'll get you started:
p = (73 + 56)/(5000 + 4500) = ? -->once you have the fraction, convert to a decimal (decimals are easier to use in the formula).
q = 1 - p
pA = 73/5000
pB = 56/4500
Convert all fractions to decimals. Plug those decimal values into the formula and find z. Compare z to the cutoff 0.05 for a two-tailed test (cutoff value is z = + or - 1.96). If the test
statistic you calculated exceeds either the positive or negative cutoff z-value, reject the null and conclude a difference. If the test statistic does not exceed either the positive or negative
cutoff z-value, do not reject the null (you cannot conclude a difference).
I hope this will help get you started.
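A quick numeric check of the procedure above (not part of the original reply; a minimal Python sketch using only the standard library, with the counts from the problem):

from math import sqrt

xA, nA = 73, 5000          # rejects and run size from supplier A
xB, nB = 56, 4500          # rejects and run size from supplier B

pA, pB = xA / nA, xB / nB
p = (xA + xB) / (nA + nB)  # pooled proportion
q = 1 - p
z = (pA - pB) / sqrt(p * q * (1 / nA + 1 / nB))
print(pA, pB, p, round(z, 3))

# z comes out to roughly 0.91, which is inside the +/-1.96 cutoff, so at a = 0.05
# the null hypothesis is not rejected: no evidence the defect proportions differ.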
Related Questions
physics - There are two spinning disks A and B arranged so that they initially ...
math - Numbered disks are placed in a box and one disk is selected at random. ...
school - Two disks are spinning freely about axes that run through their ...
geometry - Samantha has a dog that loves to chase flying disks. To keep her disk...
geometery - Samantha has a dog that loves to chase flying disks. To keep her ...
Physics - Two disks are rotating about the same axis. Disk A has a moment of ...
physics - Two disks are rotating about the same axis. Disk A has a moment of ...
finite math - a box contains 40 computer disks, 5 are defective. Three disks are...
Physics - Two disks are rotating about the same axis. Disk A has a moment of ...
College Physics - Two shuffleboard disks of equal mass, one orange and the other...
|
{"url":"http://www.jiskha.com/display.cgi?id=1286147225","timestamp":"2014-04-16T18:26:08Z","content_type":null,"content_length":"9857","record_id":"<urn:uuid:8dd675d5-3600-4938-9941-674c1ee1414c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thomas Fink
79,900 visitors last year Online since Apr 1996 Last modified 1 Oct 2009
I am a physicist at the Curie Institute/CNRS and the London Institute for Mathematical Sciences; I also periodically visit Cambridge.
I use statistical mechanics to study complex systems in physics and interdisciplinary fields. My research interests are discrete dynamics, complex networks and fundamental laws of biology.
I have also written two popular books: The Man's Book, an almanac for men; and The 85 Ways to Tie a Tie, a book about ties and tie knots.
Dynamics of single input networks
A book to train manly men
The Man's Book
was published by Little, Brown in the United States on 6 May, 2009. Expanded, revised and retypeset, this is the first time that the definitive almanac for men has been available in America. New
features include: 16 more sections, 35 more figures and numerically indexed sections and subsections.
ex nihilo
, and is associated with regular and aperiodic tiling. We are interested in enumerating regular-polygonal blocks and their associated structures, as well as those of multiple species of blocks.
Directed Networks
Clustering signatures classify directed networks
We use a clustering signature, based on a recently introduced generalization of the clustering coefficient to directed networks, to analyze 16 directed real-world networks of five different types:
social networks, genetic transcription networks, word adjacency networks, food webs and electric circuits. We show that these five classes of networks are cleanly separated in the space of clustering
signatures due to the statistical properties of their local neighbourhoods, demonstrating the usefulness of clustering signatures as a classifier of directed networks.
Non-coding DNA
How much non-coding DNA do eukaryotes require?
In most eukaryotes, a large proportion of the genome does not code for proteins. The non-coding part is observed to vary greatly in size even between closely related species. We report evidence that
eukaryotes require a certain minimum amount of non-coding DNA (ncDNA), and that this minimum increases quadratically with the amount of coding DNA. Based on a simple model of the growth of regulatory
networks, we derive a theoretical prediction of the required quantity of ncDNA and find it to be in excellent agreement with observation.
Mathematics of Conkers
In Single elimination competition, we study a simple model of competition in which each player has a fixed strength. Randomly selected pairs of players compete and the stronger one wins. We show that
the best indicator of future success is not the number of wins but a player's wealth: the accumulated wealth of all defeated players. We calculate statistics of a conker with a given score and offer
advice on strategy.
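A rough simulation sketch of the model as described (not from the paper; it assumes each player starts with a base wealth of 1, that "score" means number of wins, and that defeated players leave the pool):

from random import random, sample

def play(num_players=100):
    strength = [random() for _ in range(num_players)]
    wealth = [1.0] * num_players        # assumed starting wealth per player
    wins = [0] * num_players
    alive = list(range(num_players))
    while len(alive) > 1:
        a, b = sample(alive, 2)                        # randomly selected pair competes
        winner, loser = (a, b) if strength[a] > strength[b] else (b, a)
        wins[winner] += 1
        wealth[winner] += wealth[loser]                # winner absorbs the loser's wealth
        alive.remove(loser)
    return strength, wins, wealth

# One could then check whether wealth or win count better predicts strength,
# e.g. by rank-correlating each with strength over many runs.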
Fundamental Laws of Biology
In the summer of 2007 I joined the DARPA Fundamental Laws of Biology (FunBio) program. Organised by the Defense Sciences Office, it is a team of biologists, mathematicians, and physicists working
together to
• 'Bring new mathematical perspectives to biology.
• Use the stimulus of those challenges to create new mathematics that will reveal unanticipated structures in large complex systems.
• Explain biological organization at multiple scales.
• Discover the fundamental laws of biology that span all biological scales.'
The Man's Book 2008
The Man's Book
is the authoritative handbook of men's customs, habits and pursuits - a vade mecum for modern-day manliness. At a time when the sexes are muddled and masculinity is marginalized,
The Man's Book
unabashedly celebrates being male. Chaps, cads, blokes and bounders, rejoice:
The Man's Book
will bring you back to where you belong.
The 2008 edition of The Man's Book, published on 19 September, 2007, has been expanded, revised and updated. A free sample chapter can be downloaded.
Dynamics of Network Motifs
Complex dynamical networks are found in biology, technology and sociology. Recently it was observed that some local network structures, or network motifs, are much more frequently observed in complex
networks than would be expected by chance. Although this biased distribution of motifs appears to apply to a broad range of networks, it remains unclear why some motifs are ubiquitous and others are rare.
In a recent paper, we study the dynamics of network motifs by treating them as small Boolean networks and comprehensively evaluating all possible binary (Boolean) update rules. We introduce a
formalism for classifying dynamical behaviour and show that some network motifs are fundamentally more versatile—capable of executing a variety of tasks—than others. Because it is more likely to be
capable of an arbitrary task, a versatile motif would ostensibly occur more frequently in real networks, all else being equal. While versatility roughly increases with motif complexity, we find
evidence of a critical complexity, after which increasing the number of bonds confers no advantage.
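A brute-force sketch of the "evaluate all Boolean update rules" idea (illustrative only: the wiring, the synchronous update scheme, and summarizing behaviour by attractor cycle lengths are assumptions here, not the classification formalism used in the paper):

from itertools import product

# Hypothetical 3-node motif: node i is updated from the states of inputs[i].
inputs = {0: (1, 2), 1: (0, 2), 2: (0, 1)}

def attractor_lengths(rules):
    # rules[i] maps the (a, b) states of node i's two inputs to its next state.
    lengths = set()
    for start in product((0, 1), repeat=3):
        seen = {}
        state = start
        while state not in seen:
            seen[state] = len(seen)
            state = tuple(rules[i][(state[inputs[i][0]], state[inputs[i][1]])]
                          for i in range(3))
        lengths.add(len(seen) - seen[state])      # length of the cycle this start falls into
    return frozenset(lengths)

# All 16 Boolean functions of two inputs, for each of the 3 nodes: 16**3 = 4096 rule sets.
two_input_rules = [dict(zip(list(product((0, 1), repeat=2)), bits))
                   for bits in product((0, 1), repeat=4)]
behaviours = {}
for assignment in product(two_input_rules, repeat=3):
    key = attractor_lengths(assignment)
    behaviours[key] = behaviours.get(key, 0) + 1
print(behaviours)    # number of rule assignments producing each set of attractor lengths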
Can Biology Lead to New Theorems?
R. A. Fisher's 7x7 Latin square, from Caius College Hall, Cambridge
More often than not, collaboration between biologists and mathematical scientists is a one-sided affair. Mathematicians and physicists are under pressure to make real biological contributions,
irrespective of whether the mathematics is of interest or importance. From their point of view, such interdisciplinary collaborations are of limited real interest.
This picture is starting to change. In '...Biology Is Mathematics' Next Physics, Only Better', Joel Cohen argues that in the 21st century, biology, like physics in the century before it, will drive
mathematics in new directions. Bernd Sturmfels offers more explicit evidence in his paper 'Can Biology Lead to New Theorems?'. He outlines four theorems which are direct outgrowths of biological problems.
These prognostications suggest an exciting future for quantitative biology. While much of systems biology is applied (it has immediate practical applications), it is now clear that it also comprises
basic science. New mathematical understanding, and new mathematics, are needed to make sense of phenomena ranging from the complex dynamics in regulatory networks to the faint deviation from
randomness in genetic expression curves.
Here is an example from my work where studying microarray expression led to a theorem in number theory (Microarray expression series, Properties of up-down numbers). Let C[N](s) be the number of
permutations of length N+1 with a given up-down signature s. Then, for p prime,
|
{"url":"http://www.tcm.phy.cam.ac.uk/~tmf20/","timestamp":"2014-04-19T19:34:15Z","content_type":null,"content_length":"35845","record_id":"<urn:uuid:623381f9-9fe9-4e25-8f60-43987145e814>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The power of pluralism for automatic program synthesis
Results 1 - 10 of 35
, 1992
"... We prove that the set of all recursive functions cannot be inferred using first-order queries in the query language containing extra symbols [+; !]. The proof of this theorem involves a new
decidability result about Presburger arithmetic which is of independent interest. Using our machinery, we ..."
Cited by 35 (11 self)
Add to MetaCart
We prove that the set of all recursive functions cannot be inferred using first-order queries in the query language containing extra symbols [+; !]. The proof of this theorem involves a new
decidability result about Presburger arithmetic which is of independent interest. Using our machinery, we show that the set of all primitive recursive functions cannot be inferred with a bounded
number of mind changes, again using queries in [+; !]. Additionally, we resolve an open question in [7] about passive versus active learning. 1) Introduction This paper presents new results in the
area of query inductive inference (introduced in [7]); in addition, there are results of interest in mathematical logic. Inductive inference is the study of inductive machine learning in a
theoretical framework. In query inductive inference, we study the ability of a Query Inference Machine 1 Supported, in part, by NSF grants CCR 88-03641 and 90-20079. 2 Also with IBM Corporation,
Application Solutions...
- Journal of Computer and System Sciences , 1993
"... Degrees of inferability have been introduced to measure the learning power of inductive inference machines which have access to an oracle. The classical concept of degrees of unsolvability
measures the computing power of oracles. In this paper we determine the relationship between both notions. ..."
Cited by 32 (19 self)
Add to MetaCart
Degrees of inferability have been introduced to measure the learning power of inductive inference machines which have access to an oracle. The classical concept of degrees of unsolvability measures
the computing power of oracles. In this paper we determine the relationship between both notions. 1 Introduction We consider learning of classes of recursive functions within the framework of
inductive inference [21]. A recent theme is the study of inductive inference machines with oracles ([8, 10, 11, 17, 24] and tangentially [12]; cf. [10] for a comprehensive introduction and a
collection of all previous results.) The basic question is how the information content of the oracle (technically: its Turing degree) relates with its learning power (technically: its inference
degree---depending on the underlying inference criterion). In this paper a definitive answer is obtained for the case of recursively enumerable oracles and the case when only finitely many queries to
the oracle are allo...
, 1994
"... Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with
any externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents how ..."
Cited by 18 (6 self)
Add to MetaCart
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any
externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents how e(p) uses its self knowledge (and its knowledge of the external world). Infinite regress is not
required since e(p) creates its self copy outside of itself. One mechanism to achieve this creation is a self replication trick isomorphic to that employed by single-celled organisms. Another is for
e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means
for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low level models of themselves and the
other prog...
, 1995
"... Investigated is algorithmic learning, in the limit, of correct programs for recursive functions f from both input/output examples of f and several interesting varieties of approximate additional
(algorithmic) information about f . Specifically considered, as such approximate additional informatio ..."
Cited by 17 (7 self)
Add to MetaCart
Investigated is algorithmic learning, in the limit, of correct programs for recursive functions f from both input/output examples of f and several interesting varieties of approximate additional
(algorithmic) information about f . Specifically considered, as such approximate additional information about f , are Rose's frequency computations for f and several natural generalizations from the
literature, each generalization involving programs for restricted trees of recursive functions which have f as a branch. Considered as the types of trees are those with bounded variation, bounded
width, and bounded rank. For the case of learning final correct programs for recursive functions, EX- learning, where the additional information involves frequency computations, an insightful and
interestingly complex combinatorial characterization of learning power is presented as a function of the frequency parameters. For EX- learning (as well as for BC-learning, where a final sequence of
, 1993
"... A team of learning machines is essentially a multiset of learning machines. ..."
, 1992
"... this paper, # 0 , # 1 , # 2 , . . . denotes an acceptable programming system [17], also known as a Godel numbering of the partial recursive functions [15]. The function # e is said to be
computed by the program e. ..."
Cited by 10 (4 self)
Add to MetaCart
this paper, # 0 , # 1 , # 2 , . . . denotes an acceptable programming system [17], also known as a Godel numbering of the partial recursive functions [15]. The function # e is said to be computed by
the program e.
"... this paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also
represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence ..."
Cited by 8 (1 self)
Add to MetaCart
this paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by
functions) have been previously learned. In other words, the Soar project presents empirical evidence that learning how to learn is viable for computers and this paper proves that doing so is the
only way possible for computers to make certain inferences.
- Theoretical Computer Science A , 1994
"... The present paper studies the problem of when a team of learning machines can be aggregated into a single learning machine without any loss in learning power. The main results concern
aggregation ratios for vacillatory identification of languages from texts. For a positiveinteger n,amachine is said ..."
Cited by 7 (4 self)
Add to MetaCart
The present paper studies the problem of when a team of learning machines can be aggregated into a single learning machine without any loss in learning power. The main results concern aggregation
ratios for vacillatory identification of languages from texts. For a positiveinteger n,amachine is said to TxtFex n -identify a language L just in case the machine converges to up to n grammars for L
on any text for L.For such identification criteria, the aggregation ratio is derived for the n = 2 case. It is shown that the collection of languages that can be TxtFex 2 identified by teams with
success ratio greater than 5=6 are the same as those collections of languages that can be TxtFex 2 - identified by a single machine. It is also established that 5=6 is indeed the cut-off point by
showing that there are collections of languages that can be TxtFex 2 -identified bya team employing 6 machines, at least 5 of which are required to be successful, but cannot be TxtFex 2 -identified
byany single machine. Additionally, aggregation ratios are also derived for finite identification of languages from positive data and for numerous criteria involving language learning from both
positive and negative data.
, 2008
"... U-shaped learning behaviour in cognitive development involves learning, unlearning and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive science literature is
occupied with how humans do it, for example, general rules versus tables of exceptions. This paper is most ..."
Cited by 6 (2 self)
Add to MetaCart
U-shaped learning behaviour in cognitive development involves learning, unlearning and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive science literature is
occupied with how humans do it, for example, general rules versus tables of exceptions. This paper is mostly concerned with whether Ushaped learning behaviour may be necessary in the abstract
mathematical setting of inductive inference, that is, in the computational learning theory following the framework of Gold. All notions considered are learning from text, that is, from positive data.
Previous work showed that U-shaped learning behaviour is necessary for behaviourally correct learning but not for syntactically convergent, learning in the limit ( = explanatory learning). The
present paper establishes the necessity for the hierarchy of classes of vacillatory learning where a behaviourally correct learner has to satisfy the additional constraint that it vacillates in the
limit between at most b grammars, where b ∈ {2, 3,...,∗}. Non U-shaped vacillatory learning is shown to be restrictive: every non U-shaped vacillatorily learnable class is already learnable in the
limit. Furthermore, if vacillatory learning with the parameter b = 2 is possible then non U-shaped behaviourally correct learning is also possible. But for b = 3, surprisingly, there is a class
witnessing that this implication fails.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=664174","timestamp":"2014-04-20T21:41:30Z","content_type":null,"content_length":"35975","record_id":"<urn:uuid:fbd5e8a9-311a-4be7-9274-4b50606d5e3b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ACADEMIC YEAR 2011/2012
Abstract: The objectives of this research are to investigate: (1) which learning model, Think Pair Share (TPS), Team Assisted Individualization (TAI), or the conventional learning model, results in better learning achievement in Mathematics; (2) which level of learning interest (high, moderate, or low) results in better learning achievement in Mathematics; (3) for students with high, moderate, and low learning interest, which learning model, TPS, TAI, or the conventional learning model, results in better learning achievement in Mathematics. This research used the quasi-experimental method with a 3x3 factorial design. The population of the research was the students of Vocational High Schools (SMK) in Ponorogo regency in Academic Year 2011/2012. The samples were taken using the stratified cluster random sampling technique. The research was conducted at SMK 1 of Ponorogo, SMK Bakti of Ponorogo, and SMK Sore 1 of Ponorogo, with two experimental classes and one control class at each of the schools. The sample size was 275 students: 84 students in the first experimental class, 105 students in the second experimental class, and 86 students in the control class. The data were gathered through a mathematics achievement test and a learning interest questionnaire, and were analyzed using two-way analysis of variance. The results of the research are as follows: (1) both TPS and TAI result in the same good learning achievement in Mathematics, and both result in better achievement than the conventional model does; (2) students with high learning interest have better learning achievement in Mathematics than those with moderate or low learning interest, but students with moderate learning interest have the same learning achievement as those with low learning interest; (3) among students with high, moderate, and low learning interest, both TPS and TAI result in the same good learning achievement in Mathematics, and both result in better achievement than the conventional model does.
Key words: Think Pair Share, Team Assisted Individualization, Conventional, Learning Interest
Full Text:
• There are currently no refbacks.
|
{"url":"http://jurnal.pasca.uns.ac.id/index.php/mat/article/view/483","timestamp":"2014-04-20T23:32:00Z","content_type":null,"content_length":"17630","record_id":"<urn:uuid:c5b1e013-f3d8-4a42-92cb-b9abe67bd807>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tustin, CA Math Tutor
Find a Tustin, CA Math Tutor
...I am an experienced and credentialed elementary teacher and I work with children that are below grade level in small groups with Anaheim City School District. Thank you for your time, :) I
teach ELD (English Language Development) to second through sixth grade children and I have also taught ESL ...
9 Subjects: including prealgebra, reading, Spanish, ESL/ESOL
...I also worked for a CPA firm for 3 years where I performed mostly governmental audits, financial compilations and reviews. However, to make a long story short, I found out that public
accounting was not for me (boooorrriinnnnnng).... I have worked as a tutor for two years now. I helped many st...
3 Subjects: including statistics, accounting, finance
...Together, we can help your student become more successful and confident! All materials such as paper, pencils, calculator, erasers, etc. needed for studying and completing assignments and
projects must be provided by the parents. I typically only work with students when school is in session.
15 Subjects: including algebra 1, English, geometry, prealgebra
...You will find me friendly, patient, enthusiastic, and reliable. Contact me today, and let’s get to work! The SAT math section tests your understanding of arithmetic, algebra, and geometry. If
you are already a math whiz, I will help you learn how to avoid trap answers on the difficult questions.
3 Subjects: including SAT math, SAT reading, SAT writing
...The key to success in life isn't just hard work--it's having a genuine interest or passion for learning! I've been a tutor for three years at the high school level, working with English
language learners domestic and abroad. I have extensive tutoring and TA experience in 10th and 11th grade Eng...
16 Subjects: including algebra 2, precalculus, reading, trigonometry
Related Tustin, CA Tutors
Tustin, CA Accounting Tutors
Tustin, CA ACT Tutors
Tustin, CA Algebra Tutors
Tustin, CA Algebra 2 Tutors
Tustin, CA Calculus Tutors
Tustin, CA Geometry Tutors
Tustin, CA Math Tutors
Tustin, CA Prealgebra Tutors
Tustin, CA Precalculus Tutors
Tustin, CA SAT Tutors
Tustin, CA SAT Math Tutors
Tustin, CA Science Tutors
Tustin, CA Statistics Tutors
Tustin, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Tustin_CA_Math_tutors.php","timestamp":"2014-04-16T22:11:20Z","content_type":null,"content_length":"23673","record_id":"<urn:uuid:c6052959-9aa5-4754-953b-8237d78c3a90>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
San Ysidro, CA Trigonometry Tutor
Find a San Ysidro, CA Trigonometry Tutor
...I passed my IB Biology HL exam while in high school, and received an A in my honors biology course in high school as well. I received 5's in both of the AP calculus tests, and am a UCSD
biology student and so use Calculus on a regular basis in my classes. I have gone through math courses up until the higher levels of calculus.
42 Subjects: including trigonometry, Spanish, reading, writing
...That gives me enough time to rearrange my day but also gives you the flexibility to cancel even if you're just having a tough day. If I have the time in my schedule I am often willing to move
our session to another date/time. I look forward to hearing from you!
37 Subjects: including trigonometry, calculus, geometry, algebra 1
I started tutoring in high school when my cousin, who is like a brother to me, was struggling to pass Algebra II and, thus, was at risk of not graduating. In college I began volunteering with
students in inner city schools in urban Los Angeles. This was a great challenge because the students do no...
26 Subjects: including trigonometry, chemistry, physics, geometry
...I am an effective tutor because of my skill in assessing my student's needs, but also because of my ability to empathize with young learners. I believe there is always an unseen angle that
each learner can use to make the subject material more accessible and interesting. When tutoring, I look f...
14 Subjects: including trigonometry, French, geometry, ESL/ESOL
...I have acted as a teaching assistant for calculus courses at UCSD, and much of the relevant material involves trigonometry. I have taken both undergraduate and graduate statistics courses
and received a PhD pass on the UCSD math department's statistics qualifying exam. I achieved a perfect score on the SAT math section and have considerable knowledge of strategies to perform well
21 Subjects: including trigonometry, physics, calculus, geometry
Related San Ysidro, CA Tutors
San Ysidro, CA Accounting Tutors
San Ysidro, CA ACT Tutors
San Ysidro, CA Algebra Tutors
San Ysidro, CA Algebra 2 Tutors
San Ysidro, CA Calculus Tutors
San Ysidro, CA Geometry Tutors
San Ysidro, CA Math Tutors
San Ysidro, CA Prealgebra Tutors
San Ysidro, CA Precalculus Tutors
San Ysidro, CA SAT Tutors
San Ysidro, CA SAT Math Tutors
San Ysidro, CA Science Tutors
San Ysidro, CA Statistics Tutors
San Ysidro, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/san_ysidro_ca_trigonometry_tutors.php","timestamp":"2014-04-21T14:57:03Z","content_type":null,"content_length":"24503","record_id":"<urn:uuid:726175b6-1782-4b95-b5ff-6a7720e104dc>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Identify Plane Shapes and Solid Shapes
Lesson: Identify Plane Shapes and Solid Shapes
Introducing the Concept
Your children were formally introduced to plane shapes in Kindergarten. Now they will sort and classify plane shapes by the number of sides, number of corners, and size.
Materials: red, yellow, and blue construction paper circles; a set of four basic plane shapes (circle, square, rectangle, triangle) for each child (use small attribute blocks or the smaller shapes
from Learning Tool 27 in the Learning Tools Folder), a larger version of the four basic plane shapes for every child or pair of children (use large attribute blocks or the larger shapes from Learning
Tool 27); chart paper
Preparation: To review position words, cut out construction paper circles so that each child will have one and so that the colors will be as evenly divided as possible. If using Learning Tool 27,
copy and cut out shapes. You may want to put each set of shapes in an envelope so you can easily give them to the children.
Prerequisite Skills and Concepts: none
Begin by reviewing position words with the class. Have children work in groups of three. For each group, give one child a yellow circle, one a red circle, and one a blue circle. Then, using position
words such as between, below, left, and right, give instructions on how to arrange the circles in relation to one another. Tell children that they will next work with their circles and other shapes.
When the children are done, have volunteers share one way in which they sorted their shapes. While one partner explains the attribute by which they sorted, have the other tape the shapes in their
appropriate groups onto one of the pieces of chart paper. At the top of each paper, label how the shapes are sorted. Possible attributes by which to sort are size, shape, number of sides, and number
of corners. You may wish to display the completed charts.
|
{"url":"http://www.eduplace.com/math/mw/background/1/08a/te_1_08a_shapes_ideas.html","timestamp":"2014-04-21T07:33:44Z","content_type":null,"content_length":"8232","record_id":"<urn:uuid:efdaa10a-faa8-4311-ab45-07720bb51f23>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Type functions and ambiguity
Simon Peyton-Jones simonpj at microsoft.com
Mon Mar 9 11:56:14 EDT 2009
Dan's example fails thus:
| Map.hs:25:19:
| Couldn't match expected type `Nest n1 f b'
| against inferred type `Nest n1 f1 b'
| In the expression: fmap (deepFMap n f)
| In the definition of `deepFMap':
| deepFMap (S n) f = fmap (deepFMap n f)
| for reasons I don't really understand. So I tried the following:
For what it's worth, here's why. Suppose we have
type family N a :: *
f :: forall a. N a -> Int
f = <blah>
g :: forall b. N b -> Int
g x = 1 + f x
The defn of 'g' fails with a very similar error to the one above. Here's why. We instantiate the occurrence of 'f' with an as-yet-unknown type 'alpha', so at the call site 'f' has type
N alpha -> Int
Now, we know from g's type sig that x :: N b. Since f is applies to x, we must have
N alpha ~ N b
Does that imply that (alpha ~ b)? Alas no! If t1=t2 then (N t1 = N t2), of course, but *not* vice versa. For example, suppose that
type instance N [c] = N c
Now we could solve the above with (alpha ~ [b]), or (alpha ~ [[b]]).
You may say
a) There is no such instance. Well, but you can see it pushes the search for a (unique) solution into new territory.
b) It doesn't matter which solution the compiler chooses: any will do. True in this case, but false if f :: forall a. (Num a) => N a -> Int. Now it matters which instance is chosen.
This kind of example encapsulates the biggest single difficulty with using type families in practice. What is worse is that THERE IS NO WORKAROUND. You *ought* to be able to add an annotation to guide the type checker. Currently you can't. The most obvious solution would be to allow the programmer to specify the types at which a polymorphic function is called, like this:
g :: forall b. N b -> Int
g x = 1 + f {b} x
The {b} says that f takes a type argument 'b', which should be used to instantiate its polymorphic type variable 'a'. Being able to do this would be useful in other circumstances (eg impredicative polymorphism). The issue is really the syntax, and the order of type variables in an implicit forall.
Anyway I hope this helps clarify the issue.
| -----Original Message-----
| From: glasgow-haskell-users-bounces at haskell.org [mailto:glasgow-haskell-users-
| bounces at haskell.org] On Behalf Of Dan Doel
| Sent: 06 March 2009 03:08
| To: glasgow-haskell-users at haskell.org
| Subject: Deep fmap with GADTs and type families.
| Greetings,
| Someone on comp.lang.functional was asking how to map through arbitrary
| nestings of lists, so I thought I'd demonstrate how his non-working ML
| function could actually be typed in GHC, like so:
| --- snip ---
| {-# LANGUAGE TypeFamilies, GADTs, EmptyDataDecls,
| Rank2Types, ScopedTypeVariables #-}
| data Z
| data S n
| data Nat n where
| Z :: Nat Z
| S :: Nat n -> Nat (S n)
| type family Nest n (f :: * -> *) a :: *
| type instance Nest Z f a = f a
| type instance Nest (S n) f a = f (Nest n f a)
| deepMap :: Nat n -> (a -> b) -> Nest n [] a -> Nest n [] b
| deepMap Z f = map f
| deepMap (S n) f = map (deepMap n f)
| --- snip ---
| This works. However, the straight forward generalisation doesn't:
| --- snip ---
| deepFMap :: Functor f => Nat n -> (a -> b) -> Nest n f a -> Nest n f b
| deepFMap Z f = fmap f
| deepFMap (S n) f = fmap (deepFMap n f)
| --- snip ---
| This fails with a couple errors like:
| Map.hs:25:19:
| Couldn't match expected type `Nest n1 f b'
| against inferred type `Nest n1 f1 b'
| In the expression: fmap (deepFMap n f)
| In the definition of `deepFMap':
| deepFMap (S n) f = fmap (deepFMap n f)
| for reasons I don't really understand. So I tried the following:
...rest snipped...
|
{"url":"http://www.haskell.org/pipermail/glasgow-haskell-users/2009-March/016770.html","timestamp":"2014-04-23T10:37:23Z","content_type":null,"content_length":"7029","record_id":"<urn:uuid:9151ff82-90be-4200-b297-e843a2302768>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Having difficulty with Matrix.py
Blair Hall b.hall at irl.cri.nz
Thu Jun 5 16:23:07 CDT 2003
I have been caught out writing a simple function
that manipulates the elements of a matrix.
I have distilled the problem into the code below.
I find that 'sum' works with a Numeric
array but fails with a Matrix. Am I doing something
wrong? If not, what should I do to patch it?
(I am working under Windows with Numpy 23.0)
Here is the code:
# 'a' is a two-dimensional array. The
# function simply places the sum of
# the first column in a[0,0]
def sum(a):
    N = a.shape[0]
    sum = 0
    for i in range(N):
        sum += a[i,0]
    # This is the critical line
    a[0,0] = sum
    return a[0,0]

if(__name__ == '__main__'):
    import Numeric
    import Matrix
    a = Numeric.ones( (9,1) )
    print sum( a )                   # Ok
    print sum( Matrix.Matrix( a ) )  # Fails
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2003-June/002143.html","timestamp":"2014-04-17T21:47:58Z","content_type":null,"content_length":"3067","record_id":"<urn:uuid:9dacce36-8af5-4c03-bd6a-e0b0874d6c84>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: LOCAL MULTIPLIER ALGEBRAS, INJECTIVE ENVELOPES, AND TYPE I W*-ALGEBRAS
Abstract. Characterizations of those separable C*-algebras that have W*-algebra injective envelopes or W*-algebra local multiplier algebras are presented. The C*-envelope and the injective envelope of a class of operator systems that generate certain type I von Neumann algebras are also determined.
The local multiplier algebra Mloc(A) of a C*-algebra A is the C*-algebraic direct limit of multiplier algebras M(K) along the downward-directed system E(A) of all (closed) essential ideals K of A. Such algebras first arose in the study of derivations and were formally introduced by Pedersen in [17], where he proves that every derivation on a separable C*-algebra A extends to an inner derivation of Mloc(A). The
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/485/2500574.html","timestamp":"2014-04-20T14:21:39Z","content_type":null,"content_length":"7900","record_id":"<urn:uuid:1f3b8051-2f4b-4fdf-abd4-85231b251c54>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Canny module implements the Canny Edge Detector first specified by John Canny in his paper "A Computational Approach to Edge Detection". The Canny edge detector is regarded as one of the best
edge detectors currently in use. The basic algorithm can be outlined as follows:
1. Blur the image slightly. This removes the effects of random black or white pixels (i.e. noise pixels) that can adversely affect the following stages. Blurring is typically accomplished using a
Gaussian Blur. However, you should be able to use a box averaging filter like the Mean filter to accomplish the same effect. The mean filter can be executed faster than a Gaussian blur so for
real-time applications the computational advantage can be worthwhile.
2. Compute the x and y derivatives of the image. This is basically an edge detection step. Similar to the Sobel edge detector the x and y derivatives are simply calculated by subtracting the values
of the two surrounding neighboring pixels depending on which axis is being computed (left and right for x derivative and top and bottom for y derivative).
3. From the x and y derivatives you can compute the edge magnitude which basically signifies the strength of the edge detected at the current point.
4. If you were to look at the edge image at this point it would look very similar to the Sobel edge filter. Namely, the edges would be thick in many areas of the resulting edge image. To 'thin' these
edges a nonmaximal filter needs to be applied that only preserves edges that are the top or ridge of a detected edge. The nonmaximal filter is applied by first analyzing what the slope of the edge is
(you can determine this by the sign and magnitude of the x and y derivatives calculated in the previous steps) and use that slope to determine what direction the current edge is heading. You can then
determine the two edge magnitudes perpendicular to the current heading to determine if the current pixel is on an edge ridge or not. If the current pixel is on a ridge its magnitude will be greater
than either of the two magnitudes of the perpendicular pixels (think of walking on a mountain ridge with the ground decreasing on both sides of your heading). Using this ridge detection and
subsequent elimination will result in very thin-lined edges.
5. Once the edges have been thinned the last step is to threshold the detected edges. Edge magnitudes above the upper threshold are preserved. Edges below the upper threshold but above the lower
threshold are preserved ONLY if they connect (i.e. 8 neighborhood touching) to edges that are above the upper threshold. This process is known as hysteresis and allows edges to grow larger than they
would by using a single threshold without introducing more noise into the resulting edge image.
1. Theta - Select the Gaussian blur standard deviation amount. Also known as sigma. Typical values range from 0.60 to 2.40. The higher the value the greater the blurring of the image. Note that the
window size of the filter is determined automatically.
2. Low Threshold - Select the low threshold above which possible edges can exist. Edges below the low threshold are removed from the image.
3. High Threshold - Select the high threshold above which edges are preserved in the image. Edges between the high and low threshold are ONLY preserved if they are connected to an edge that is above
the high threshold. See hysteresis above.
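A minimal sketch of the pipeline above in Python, assuming OpenCV (cv2) is available; the function and parameter names here are illustrative and do not correspond to the module's actual interface. Theta plays the role of the Gaussian sigma, and the two thresholds map onto the Low/High Threshold parameters:

import cv2

def canny_edges(path, theta=1.4, low_threshold=50, high_threshold=150):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)        # single-channel input
    # Step 1: Gaussian blur; ksize=(0, 0) lets OpenCV derive the window size from sigma.
    blurred = cv2.GaussianBlur(gray, (0, 0), theta)
    # Steps 2-5: cv2.Canny computes the x/y derivatives, the edge magnitude,
    # non-maximal suppression, and hysteresis thresholding internally.
    return cv2.Canny(blurred, low_threshold, high_threshold)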
Source Canny Edge Detection
See Also
Difference of Gaussian
For more information
HIPR - Canny Edge Detector
Bill Green - Canny Edge Detection Tutorial
Noah Kuntz - Canny Tutorial
|
{"url":"http://www.roborealm.com/help/Canny.php","timestamp":"2014-04-20T13:41:59Z","content_type":null,"content_length":"12745","record_id":"<urn:uuid:402905ae-f3ad-4146-829a-11ecec3c764b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
|
East Providence Precalculus Tutor
...I am located in Cranston, RI and am willing to work with students in grades 7-12 within a 30 minute drive. I have a flexible schedule and am available most evenings as well as weekends. In
addition, I am a classically trained chef!
32 Subjects: including precalculus, physics, calculus, C
...Once you understand it, science and math is fun and easy. My style is not one who instructs. Instead, I lead and guide.
12 Subjects: including precalculus, chemistry, physics, calculus
...I am also a chemistry instructor at Wheaton College. I majored in biochemistry and psychology as an undergraduate student at Wheaton College. I have years of experience as a tutor.
17 Subjects: including precalculus, chemistry, calculus, geometry
I recently graduated from Rhode Island College with a Master's in Elementary Education, after completing a Bachelor of Science of Electrical Engineering at Illinois Institute of Technology. I have
experience teaching at elementary and middle school levels, as well as tutoring from elementary to undergraduate level.
15 Subjects: including precalculus, calculus, physics, SAT math
...Learning should be relevant and fun. That is how I try to organize the tutoring sessions. I am also interested and willing to help with test taking skills and organization.
31 Subjects: including precalculus, reading, biology, geometry
|
{"url":"http://www.purplemath.com/East_Providence_Precalculus_tutors.php","timestamp":"2014-04-18T13:28:03Z","content_type":null,"content_length":"23915","record_id":"<urn:uuid:e0378fa1-7f94-435b-bcdd-bff0ce2bc753>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Collisions between rooks taking random flights on an $N$ by $M$ chessboard
I randomly place $k$ rooks on an (arbitrarily sized) $N$ by $M$ chessboard. Until only one rook remains, for each of $P$ time intervals we move the pieces as follows:
(1) We choose one of the $k$ rooks on the board with uniform probability.
(2) We choose a direction for the rook, $(N, W, E, S)$, with uniform probability.
(3) We choose a number of squares in which to move the rook along the direction chosen in [2] with uniform probability over the interval consisting of the rook's current position to the edge of the board.
(4) If the rook being moved collides with another piece while being translated in [3], just as in regular chess it will annihilate that piece and remain at the piece's former position.
NOTE - An alternative way of stating [2], [3], and [4] would be to say that the chosen rook samples all possible sets of moves, with uniform probability, and is unable to bypass other rooks without
annihilating them and stopping at their former positions.
NOTE 2 - Gerhard Paseman is correct in suggesting that the original formulation for [2] and [3] will bias the rook towards shorter path lengths. This is in part due to the choice of direction in [2]
not being weighted by the resulting possible number of choices in [3], and also the over-counting of positions in [3] due to the lack of consideration that there may be a collision. There are also
problems with [2] near the board's boundaries where a direction can be chosen in which no move can take place. Instead of [2] and [3], I'll suggest that a better method would be to number all
possible positions that the chosen rook from [1] can occupy (keeping the collision constraint from [4] in mind), and then use a PRNG to select the next position.
What does the distribution look like for the number of time intervals, $P$, necessary for only a single rook to remain on the board?
pr.probability stochastic-processes geometry chess
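(Not part of the thread.) A minimal Python sketch of the revised rules from NOTE 2: the chosen rook moves to a destination drawn uniformly from the squares it can reach without passing another rook, capturing any rook it lands on; only moves that actually move a piece are counted:

import random

def reachable(pos, others, N, M):
    # Squares a rook at pos can reach, stopping on (and including) the first rook it meets.
    r, c = pos
    dests = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        while 0 <= rr < N and 0 <= cc < M:
            dests.append((rr, cc))
            if (rr, cc) in others:          # capture square: cannot be passed
                break
            rr, cc = rr + dr, cc + dc
    return dests

def moves_until_one(N, M, k):
    squares = [(r, c) for r in range(N) for c in range(M)]
    rooks = set(random.sample(squares, k))
    steps = 0
    while len(rooks) > 1:
        rook = random.choice(list(rooks))
        dest = random.choice(reachable(rook, rooks - {rook}, N, M))
        rooks.discard(dest)                 # annihilate any rook on the destination
        rooks.remove(rook)
        rooks.add(dest)
        steps += 1
    return steps

# e.g. estimate the mean of P for 4 rooks on a 4x4 board:
# print(sum(moves_until_one(4, 4, 4) for _ in range(10000)) / 10000)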
This really calls for an actual implementation and numerical experimentation! – Per Alexandersson Sep 23 '12 at 18:44
1 @Per Alexandersson, Absolutely you're right. But I posted the question here because I would be really interested if there are any techniques for tackling this problem in lieu of simulations. –
T.R. Sep 23 '12 at 19:19
1 Actually, there are different ways of making things uniform while getting different distributions. Your choices 2 and 3 do not uniformly sample all moves open to a rook. Not only is there bias
towards shorter directions but also towards impact. In spite of the responders efforts, I bet a graph or set of statistics showing what directions and lengths were taken (or what positions were
occupied) would be markedly different with each implementation. Gerhard "Ask Me About System Design" Paseman, 2012.09.23 – Gerhard Paseman Sep 23 '12 at 23:27
Also, I might suggest the literature on random walks in graphs. I am too ignorant to give a good pointer, but perhaps Brendan McKay (or maybe Douglas Zare or Igor Rivin?) knows the literature well
enough to give you a start. A 14-regular graph on 64 vertices might give you the structure you desire, with tweaking to take care of stopping a rook upon collision. Gerhard "Ten Moves For Two
Rooks?" Paseman, 2012.09.23 – Gerhard Paseman Sep 23 '12 at 23:33
5 Answers
To supplement Per Alexandersson's and Aaron Golden's data, here are two distributions for $4$ rooks on a $4 \times 4$ board (mean: 21.5 moves), and $8$ rooks on an $8 \times 8$ board
(mean: 73.6 moves):
10,000 trials each. Now updated to count only moves that actually move a rook! Here is a $4 \times 4$ example that took $19$ moves.
(I miscounted the moves in the first version I posted.) – Joseph O'Rourke Sep 23 '12 at 20:26
In your implementation are rooks allowed to choose non-moves, e.g. choosing to move 0 spaces or choosing to move past an edge of the board (then the move would be clamped, I guess).
After some more debugging I'm seeing numbers similar to yours, but my average move counts are consistently slightly higher than yours. – Aaron Golden Sep 23 '12 at 21:22
@Aaron: I did count an attempt to move off the board as a move. Another possible difference is that, if a rook moved into the board, and had $k$ possible steps to the edge of the board,
I chose uniformly among $\lbrace 1,2,\ldots,k \rbrace$, i.e., not including $0$. – Joseph O'Rourke Sep 23 '12 at 23:41
2 @Joseph O'Rourke, as a word of explanation for my original choice of [2], I was imagining that whatever was moving the rook wouldn't have global information about the rook's position on
the board. To make an extremely tenuous connection, I thought of this question after reading a paper on how (GPS tracked) albatrosses' execute Lévy flights while engaging in foraging
behavior: (pnas.org/content/early/2012/04/18/1121201109.abstract) – T.R. Sep 24 '12 at 12:07
2 I bet the bulk of the movements come from resolving the position of two rooks on the board. Once statistics for two rooks are developed, I predict that k rooks will be easy to derive
from the two rook situation. Gerhard "Also Selling Some Famous Bridges" Paseman, 2012.09.24 – Gerhard Paseman Sep 25 '12 at 0:38
Joseph O'Rourke beat me to it, but here is yet another set of data. This is starting from an 8x8 board completely filled with rooks. The x-axis is the number of moves needed to eliminate all
but one rook. The y-axis is the number of times this number of moves was sufficient, divided by the number of trials (1,000,000). The random number generator in use is just C's standard
library random mod whatever maximum is allowed for a given number, so the random numbers are not quite uniformly distributed. I should also note that in my implementation I am allowing a
rook to choose to move in any direction, even if not all directions are available, so sometimes a step is taken in which no rook moves. This will increase the number of moves necessary by a
little bit. The mean number of moves in my trials is 199.
For the 4x4 with 4 rooks case I get a mean of 31 moves. For the 8x8 with 8 rooks case I get a mean of 94 moves, and for a 2x2 board with 2 rooks I get a mean of 7 moves. My implementation
now produces results that agree with Joseph O'Rourke's version. Previously my implementation had a bug that caused moves to be allowed one extra square to the East and one extra square to
the South, causing my average move counts to be too high.
I got very similar graphs as well for the distribution of the number of moves needed. I used the same algorithm; sometimes, a move is not an actual move. – Per Alexandersson Sep 23 '12 at
@Aaron: Now I altered my data to count only moves that actually move, so again our data is out of synch.:-) But I do think we are in accord in spirit now, even if not in the counting
details. – Joseph O'Rourke Sep 24 '12 at 20:51
@Joseph: Yes if I modify my version to avoid counting null moves I see the same mean move counts as you for the 4x4 (4 rooks) and 8x8 (8 rooks) case. – Aaron Golden Sep 25 '12 at 4:18
@Aaron: Reassuring that the simulations all agree! – Joseph O'Rourke Sep 25 '12 at 11:59
So, I did some numerical experiments on 4 rooks on a k times k board. Each data point is the mean of 500 runs.
The x axis is the width/height of the board, the y axis is the number of iterations needed for it to be only one rook left.
EDITED: So, I did some changes and now my data conforms with the others:
1) Choose rook. 2) Choose direction. 3) If direction does not allow moving in that direction, goto 2. 4) Move 1,2,.. or k steps in direction chosen, where k gives a boundary
square.
I.e. this does not count non-moves (which the image ABOVE does).
The image below shows the mean of 500 runs, k rooks on a k*k board, starting at k=1.
3 @Per: What does the data point mean for a $1 \times 1$ board? – Joseph O'Rourke Sep 24 '12 at 0:36
Oh, right, I was too quick creating the graph; the first data point is 4x4, the next is 5x5, etc. – Per Alexandersson Sep 24 '12 at 5:52
@jc Fixed, there was a difference in implementation. – Per Alexandersson Sep 25 '12 at 6:12
@Per: Does that look to you like exponential growth? – Joseph O'Rourke Sep 25 '12 at 12:00
@Joseph O'Rourke: I have no idea, really, I'll try a few more experiments. – Per Alexandersson Sep 25 '12 at 19:17
Depending on the precise model (see NOTE 2), each rook has probability of order $1/n$ of capturing another in a given step (they need to be on the same row, and then the probability is $O(1)$). Note also that the positions of rooks are mixed very quickly (this random walk mixes in $O(1)$ steps).
This brings this process into the range of Kingman's coalescent. As $n\to\infty$, the model can be approximated as follows: When there are $k$ rooks left, each pair of rooks merge at rate
$a/(kn)$, where $a$ is some constant that can be computed. (The $1/k$ factor is from the probability that one of them is moved; if the rooks' moves were timed independently it would not be there.) There are $\binom{k}{2}$ pairs.
The time until a single rook is left is roughly a sum of independent exponentials, which will be asymptotically concentrated near $2n\log(M)/a$, where $M$ is the initial number of rooks.
It occurred to me it might be of interest to see what happens if you start with a board completely clogged with rooks*, so I decided to pluck the lowest-hanging nontrivial fruit and examine
the $2\times2$ case, which features 5 distinct states: the starting state $S$ with 4 rooks, a trio state $T$, a pair of doublet states $R$ and $D$ with the rooks lined up in a row or along a
diagonal, respectively, and the quitting state $Q$ with just one rook.
In this set-up, state $S$ transitions in one step to $T$ (I'm assuming here that when you pick a rook at random, you actually have to move it). State $T$ transitions back to itself with
probability 1/3, to $R$ with probability 1/3, and to $D$ with probability 1/3. State $D$ transitions to $R$ with probability 1, while $R$ transitions to $D$ with probability 1/2 and to $Q$
with probability 1/2. For the expected number of steps to get to $Q$, we thus have
$$E(S) = 1+E(T)$$ $$E(T) = 1 +{1\over3}(E(T)+E(R)+E(D)) $$ $$E(D) = 1 + E(R)$$ $$E(R) = 1 + {1\over2}E(D)$$ from which one finds $E(R) = 3$, $E(D) = 4$, $E(T) = 5$, and $E(S) = 6$.
It seems doubtful that the expected values will be integers in general, but it might be worth checking the $2\times3$ and $3\times3$ cases, which ought to be doable. (The $2\times3$ case,
which has 23 essentially different states, might be a good place to experiment with different conventions for the transition probabilities.) One thing worth noting: The states $R$ and $D$
are equiprobable when starting from $S$, but not if you create a 2-rook state from scratch by placing rooks at random. This makes me wonder what Joseph O'Rourke's histogram would look like
if you started, say with 16 rooks on a $4\times 4$ board but didn't start counting moves until you were down to the last 4 rooks.
*I wrote all this up before I read Aaron Golden's answer carefully. His graph shows simulated results for the $8\times8$ case starting with 64 rooks, but he's allowing rooks not to move if
they're on an edge of the board.
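(Not part of the answer.) A quick check of the four-equation system above, solving for [E(S), E(T), E(D), E(R)] with numpy (assumed available):

import numpy as np

# Unknowns ordered as [E(S), E(T), E(D), E(R)].
A = np.array([[1, -1,    0,    0  ],     # E(S) - E(T)                 = 1
              [0,  2/3, -1/3, -1/3],     # E(T) - (E(T)+E(R)+E(D))/3   = 1
              [0,  0,    1,   -1  ],     # E(D) - E(R)                 = 1
              [0,  0,   -1/2,  1  ]])    # E(R) - E(D)/2               = 1
b = np.ones(4)
print(np.linalg.solve(A, b))             # [6. 5. 4. 3.]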
|
{"url":"http://mathoverflow.net/questions/107915/collisions-between-rooks-taking-random-flights-on-an-n-by-m-chessboard/107924","timestamp":"2014-04-18T11:11:33Z","content_type":null,"content_length":"96515","record_id":"<urn:uuid:05f8fda8-90c0-43ac-bc5c-b17ba254bca6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ThankYOu @badreferences
There's some time series analysis in this I think... can anyone recommend any good texts on that stuff?
unsolved problems?
Well, P=NP is naturally an unsolved problem. This paper is making the assumption that it is the case, though.
Was about to say, P=NP is an unsolved problem..
Abstract: "I prove that if markets are weak-form efficient, meaning current prices fully reflect all information available in past prices, then P = NP, meaning every computational problem whose
solution can be verified in polynomial time can also be solved in polynomial time. I also prove the converse by showing how we can "program" the market to solve NP-complete problems. Since P
probably does not equal NP, markets are probably not efficient. Specifically, markets become increasingly inefficient as the time series lengthens or becomes more frequent. An illustration by way
of partitioning the excess returns to momentum strategies based on data availability confirms this prediction."
this is too heavy for me; discrete (abstract) math
Looking through it I can't find anything about time series analysis, despite what the abstract might suggest. Kind of handwaved as irrelevant.
|
{"url":"http://openstudy.com/updates/5088a20ae4b004fc96eb8337","timestamp":"2014-04-16T04:24:01Z","content_type":null,"content_length":"45283","record_id":"<urn:uuid:77add3ff-9485-48e9-a45e-c2531ce89ba7>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Series RLC Circuits
An RLC circuit consists of a resistor with resistance , an inductor with inductance , and a capacitor with capacitance . The current in an RLC series circuit is determined by the differential
where and is the AC emf driving the circuit. The angular frequency ω is related to the frequency in hertz (Hz) by . In this Demonstration, the amplitude is set to 10 volts (V). You can vary the
frequency in Hz, the resistance in ohms (), the inductance in millihenries (mH), and the capacitance in microfarads (). The voltage V in volts and current in milliamperes (mA) are shown in the plot
over a 50-millisecond (msec) window.
The sinusoidal curves for voltage and current are out of phase by an angle φ, where
$$\tan\varphi = \frac{\omega L - 1/(\omega C)}{R}.$$
When the effect of inductance is dominant, then φ > 0, and the voltage leads the current. When the capacitance contribution is dominant (for small values of ω), then φ < 0, and the current leads the voltage. The mnemonic "ELI the ICEman" summarizes these relationships. When the circuit has a pure resistance or when the resonance condition ωL = 1/(ωC) is satisfied, then φ = 0, meaning that the voltage and current are in phase.
Snapshot 1: effect of inductance dominates; φ > 0 and voltage leads current (ELI)
Snapshot 2: effect of capacitance dominates; φ < 0 and current leads voltage (ICE)
Snapshot 3: circuit fulfills resonance condition; φ = 0 and current and voltage are in phase
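As a rough illustration of these phase relationships (a sketch, not part of the Demonstration itself; the component values in the example call are arbitrary assumptions), the phase angle and current amplitude of a series RLC circuit can be computed as follows:

```python
import math

def series_rlc_phase(f_hz, R_ohm, L_mH, C_uF, emf_volts=10.0):
    """Return (phase angle in degrees, current amplitude in mA) for a series RLC circuit."""
    omega = 2 * math.pi * f_hz             # angular frequency (rad/s)
    X_L = omega * L_mH * 1e-3              # inductive reactance (ohms)
    X_C = 1.0 / (omega * C_uF * 1e-6)      # capacitive reactance (ohms)
    Z = math.hypot(R_ohm, X_L - X_C)       # impedance magnitude (ohms)
    phi = math.atan2(X_L - X_C, R_ohm)     # angle by which the voltage leads the current
    I0 = emf_volts / Z                     # current amplitude (A)
    return math.degrees(phi), I0 * 1e3

# Example where inductance dominates: phi > 0, so voltage leads current ("ELI").
phi_deg, I0_mA = series_rlc_phase(f_hz=60.0, R_ohm=50.0, L_mH=500.0, C_uF=100.0)
print(f"phase = {phi_deg:.1f} deg, current amplitude = {I0_mA:.1f} mA")
```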
|
{"url":"http://www.demonstrations.wolfram.com/SeriesRLCCircuits/","timestamp":"2014-04-20T11:46:33Z","content_type":null,"content_length":"46028","record_id":"<urn:uuid:99696c22-a139-42fe-9c8e-fd60ca7070ca>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent application title: METHODS AND APPARATUS FOR ADAPTIVE COUPLED PRE-PROCESSING AND POST-PROCESSING FILTERS FOR VIDEO ENCODING AND DECODING
Inventors: Kiran Misra (Vancouver, WA, US) Joel Sole (La Jolla, CA, US) Peng Yin (Ithaca, NY, US) Xiaoan Lu (Princeton, NJ, US) Yunfei Zheng (San Diego, CA, US) Qian Xu (Folsom, CA, US)
Assignees: Thomson Licensing
IPC8 Class: AH04N726FI
USPC Class: 37524003
Class name: Television or motion video signal adaptive quantization
Publication date: 2012-11-08
Patent application number: 20120281753
Methods and apparatus are provided for adaptive coupled pre-processing and post-processing filters for video encoding and decoding. The apparatus and method encode input data for a picture into a
resultant bitstream, wherein said video encoder comprises a pre-filter and a post-filter coupled to the pre-filter, wherein said pre-filter filters the input data for the picture and the post-filter
filters in-loop reconstructed data for the picture.
An apparatus, comprising: a video encoder for encoding input data for a picture into a resultant bitstream, wherein said video encoder comprises a pre-filter and a post-filter coupled to the
pre-filter, wherein said pre-filter filters the input data for the picture and the post-filter filters in-loop reconstructed data for the picture.
In a video encoder, a method, comprising: encoding input data for a picture into a resultant bitstream, wherein said video encoder comprises a pre-filter and a post-filter coupled to the pre-filter,
wherein said pre-filter filters the input data for the picture and the post-filter filters in-loop reconstructed data for the picture.
The method of claim 2, wherein filter coefficients and filter parameters of the post-filter are selected such that the post-filter is an exact inverse of the pre-filter.
The method of claim 2, wherein at least one of the pre-filter and the post-filter comprise different filters, and chroma components of the picture are filtered using different ones of the different
filters than the luma components of the picture.
The method of claim 2, wherein at least one of filter coefficients and filter parameters for at least one of the pre-filter and the post-filter are selected responsive to at least one of a
resolution, a quantization level, a local gradient, a prediction mode, and a gradient direction.
The method of claim 2, wherein a post-filter transform matrix used by the post-filter to filter the reconstructed data is decomposed into two summands, one of the two summands being an identity
matrix and another one of the two summands being a matrix representing a multiplication of a first, a second, and a third matrix, the first and the third matrices being fixed, and the second matrix
being variable in order to reverse an adaptation of the pre-filter.
The method of claim 2, wherein filter coefficients and filter parameters of the post-filter are selected such that the post-filter is substantially an inverse of the pre-filter.
The method of claim 7, wherein the post-filter is configured to substantially provide a same output data there from as input data provided to the pre-filter by minimizing a difference between an
observation and a pre-filtered estimate, the observation relating to input data provided to the post-filter, and the pre-filtered estimate relating to an estimate of the input data for the picture
prior to filtering by the pre-filter.
The method of claim 2, wherein the pre-filter and the post-filter are integer implementations determined so as to minimize a distance relating to an exact invertibility between the pre-filter and the post-filter.
The method of claim 2, wherein a filter size of the pre-filter and the post-filter is a same size as a transform applied to residue data, the residue data representing a difference between the input
data for the picture and reference data for at least one reference picture.
The method of claim 10, wherein the pre-filter and the post-filter comprise multiple filters, and at least one of the multiple filters is applied to all transform sizes of transforms applied to the
residue data.
The method of claim 2, wherein a filter size of at least one of the pre-filter and the post-filter is different from a size of a transform applied to the input data.
The method of claim 2, wherein the pre-filter and the post-filter are applied to only a portion of the input data.
The method of claim 13, wherein the portion of the input data is selected from at least one of a block boundary, and within a block.
An apparatus, comprising: a video decoder for decoding residual image data for a picture, wherein said video decoder comprises a pre-filter and a post-filter coupled to the pre-filter, wherein said
pre-filter filters a reference picture for use in decoding the residual image data and the post-filter filters in-loop reconstructed data for the picture.
In a video decoder, a method comprising: decoding residual image data for a picture, wherein said video decoder comprises a pre-filter and a post-filter coupled to the pre-filter, wherein said
pre-filter filters a reference picture for use in decoding the residual image data and the post-filter filters in-loop reconstructed data for the picture.
The method of claim 16, wherein filter coefficients and filter parameters of the post-filter are selected such that the post-filter is an exact inverse of the pre-filter.
The method of claim 16, wherein at least one of the pre-filter and the post-filter comprise different filters, and chroma components of the picture are filtered using different ones of the different
filters than the luma components of the picture.
The method of claim 16, wherein at least one of filter coefficients and filter parameters for at least one of the pre-filter and the post-filter are selected responsive to at least one of a
resolution, a quantization level, a local gradient, a prediction mode, and a gradient direction.
The method of claim 16, wherein a post-filter transform matrix used by the post-filter to filter the reconstructed data is decomposed into two summands, one of the two summands being an identity
matrix and another one of the two summands being a matrix representing a multiplication of a first, a second, and a third matrix, the first and the third matrices being fixed, and the second matrix
being variable in order to reverse an adaptation of the pre-filter.
The method of claim 16, wherein filter coefficients and filter parameters of the post-filter are selected such that the post-filter is substantially an inverse of the pre-filter.
The method of claim 21, wherein the post-filter is configured to substantially provide a same output data there from as input data provided to the pre-filter by minimizing a difference between an
observation and a pre-filtered estimate, the observation relating to input data provided to the post-filter, and the pre-filtered estimate relating to an estimate of the input data for the picture
prior to filtering by the pre-filter.
The method of claim 16, wherein the pre-filter and the post-filter are integer implementations determined so as to minimize a distance relating to an exact invertibility between the pre-filter and
the post-filter.
The method of claim 16, wherein a filter size of the pre-filter and the post-filter is a same size as a transform applied to residue data, the residue data representing a difference between the input
data for the picture and reference data for at least one reference picture.
The method of claim 24, wherein the pre-filter and the post-filter comprise multiple filters, and at least one of the multiple filters is applied to all transform sizes of transforms applied to the
residue data.
The method of claim 16, wherein a filter size of at least one of the pre-filter and the post-filter is different from a size of a transform applied to the input data.
The method of claim 16, wherein the pre-filter and the post-filter are applied to only a portion of the input data.
The method of claim 27, wherein the portion of the input data is selected from at least one of a block boundary, and within a block.
A non-transitory computer readable storage media having video signal data encoded thereupon, comprising: input data for a picture encoded, wherein the input data for the picture, when encoded, was
pre-filtered using a pre-filter, and in-loop reconstructed data for the picture, when encoded, was post-filtered using a post-filter directly coupled to the pre-filter.
This application claims the benefit of U.S. Provisional Application Ser. No. 61/291,596, filed Dec. 31, 2009, which is incorporated by reference herein in its entirety.
TECHNICAL FIELD [0002]
The present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for adaptive coupled pre-processing and post-processing filters for video
encoding and decoding.
BACKGROUND [0003]
A block-based transform approach has been the primary choice for transforms in current video compression schemes and standards, such as the International Organization for Standardization/
International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) Standard/International Telecommunication Union, Telecommunication
Sector (ITU-T) H.264 Recommendation (hereinafter the "MPEG-4 AVC Standard"), when compared with more advanced transform approaches (such as sub-band coding, for example) due to its inherently lower
complexity and achievement of comparable performance. Lapped transforms perform significantly better than non-overlapping transforms such as discrete cosine transforms (DCTs) while incurring just a
small increase in complexity. Lapped transforms can be designed to maximize coding gain, maximize energy compaction, provide good frequency response, maximize regularity in the basis, or maximize a
combination of the above objectives. The coding gain is especially of interest since it translates directly to an improvement in the rate-distortion performance. The coding gain of a transform is
computed as the ratio of the "reconstruction distortion without transform" to that of the "reconstruction distortion with transform". Under the high-bitrate assumption, this quantity for a lapped
bi-orthogonal transform (LBT) is described in a first prior art approach as follows:
$$G_{TC} = \left\{ \prod_{i=1}^{M} \left[ \left( \frac{\sigma_{y_i}^{2}}{\sigma_{x}^{2}} \right) \left\lVert P_{i}^{-1} \right\rVert^{2} \right] \right\}^{-1/M} \quad (1)$$

where $\sigma_{x}^{2}$ is the variance of source $x$, $y$ is the output of the lapped transform, $\sigma_{y_i}^{2}$ is the variance of the $i$-th transform output, and $P_{i}^{-1}$ is the $i$-th synthesis basis (post-filter column) of the lapped transform. The design of a high bitrate lapped transform requires that the coding gain defined in Equation (1) is maximized.
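As a rough numerical illustration of Equation (1) (a sketch under assumptions, not taken from the application: the source is modeled as an AR(1) process with correlation 0.95 and unit variance, and the transform is an orthogonal 8-point DCT, for which the synthesis columns have unit norm), the coding gain can be estimated as follows:

```python
import numpy as np
from scipy.fft import dct

M = 8
rho = 0.95                                   # assumed AR(1) correlation of the source
Rxx = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))  # source covariance, sigma_x^2 = 1

T = dct(np.eye(M), norm='ortho', axis=0)     # orthogonal DCT-II analysis matrix
Ryy = T @ Rxx @ T.T                          # covariance of the transform outputs
var_y = np.diag(Ryy)                         # sigma_{y_i}^2

P_synth = np.linalg.inv(T)                   # synthesis (post-filter) matrix
col_norm_sq = np.sum(P_synth ** 2, axis=0)   # ||P_i^{-1}||^2 (all 1 for an orthogonal transform)

G_TC = np.prod(var_y * col_norm_sq) ** (-1.0 / M)   # Equation (1) with sigma_x^2 = 1
print(f"coding gain = {G_TC:.3f}  ({10 * np.log10(G_TC):.2f} dB)")
```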
In a second prior art approach, an alternate equivalent is disclosed for decomposing the quasi-optimal lapped transform into a pre-filter operation followed by a shifted DCT operation. The advantage
is that the pre-filter approach can be applied outside of existing encoder and decoder loops, therefore the second prior art approach requires little change within existing encoders and decoders.
For the pre-filtering based approach to lapped transform, the output y can be represented as follows:

$$y = \mathrm{DCT}[\mathrm{Shift}(Px)] \quad (2)$$

where Shift is a time-shift between the pre-filter and the block transform, and P is the pre-filter applied on current data x.
Turning to FIG. 1, an implementation of a 4×8 lapped transform as a 4×4 pre-filter followed by a shifted 4×4 DCT operation is indicated generally by the reference numeral 100. That is, FIG. 1 depicts
two equivalent implementations of the lapped transform. In the top portion of FIG. 1, 8 input samples are directly transformed into 4 output samples by the lapped bi-orthogonal transform. Note that
in order to have the same number of total input and output samples, the next 8 input samples are taken with an overlap with the first 8 input samples, as can be observed in FIG. 1. Regarding the
bottom portion of FIG. 1, where the implementation uses a pre-filter, the following is involved: first a pre-filter is applied to 4 input samples; and then a DCT is applied. Note that the shift
between the pre-filter and the discrete cosine transform allows for the processing of 8 input samples for each 4 output samples in exactly the same way as the top portion of FIG. 1.
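To make the structure of Equation (2) concrete, the following sketch (assumptions for illustration only: a 1-D signal, an arbitrary invertible 4×4 matrix standing in for the optimized pre-filter P, and SciPy's DCT as the block transform) applies a 4×4 pre-filter across block boundaries and then a 4×4 DCT on the block grid shifted by half a block relative to the pre-filter windows:

```python
import numpy as np
from scipy.fft import dct

def lapped_forward(x, P, block=4):
    """Pre-filter followed by a shifted block DCT, mirroring y = DCT[Shift(P x)]."""
    x = np.asarray(x, dtype=float).copy()
    half = block // 2
    # Apply the pre-filter on windows straddling each block boundary
    # (half a block on each side of the boundary).
    for start in range(half, len(x) - block + 1, block):
        x[start:start + block] = P @ x[start:start + block]
    # Block DCT on the original block grid, which is shifted by half a block
    # relative to the pre-filter windows above.
    return np.concatenate([dct(x[i:i + block], norm='ortho')
                           for i in range(0, len(x) - block + 1, block)])

# Hypothetical pre-filter: identity plus a small coupling across the boundary.
P = np.eye(4) + 0.1 * np.eye(4)[::-1]
x = np.sin(0.3 * np.arange(16))
print(lapped_forward(x, P))
```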
Previous efforts to augment the block-based coding approach such as that performed in the MPEG-4 AVC Standard include using pre-processing filters and post-processing filters and increasing the
coding gain while ignoring the impact on predictive-coding efficiency. However, such prior art pre-filters are designed to work with only a single transform. For example, a third prior art approach
involves a scheme in which the 4×4 pre-filter was designed to work with the 4×4 DCT for intra-coding only. Additionally, the third prior art approach does not give any consideration to modifying
dependent encoder and decoder blocks such as the rate-distortion optimizer and the most probable mode predictor to work in unison with pre-filtering and post-filtering and achieve a higher
compression efficiency.
SUMMARY [0008]
These and other drawbacks and disadvantages of the prior art are addressed by the present principles, which are directed to methods and apparatus for adaptive coupled pre-processing and
post-processing filters for video encoding and decoding.
According to an aspect of the present principles, there is provided an apparatus. The apparatus includes a video encoder for encoding input data for a picture into a resultant bitstream. The video
encoder includes a pre-filter and a post-filter coupled to the pre-filter. The pre-filter filters the input data for the picture and the post-filter filters in-loop reconstructed data for the picture.
According to another aspect of the present principles, there is provided a method in a video encoder. The method includes encoding input data for a picture into a resultant bitstream. The video
encoder includes a pre-filter and a post-filter coupled to the pre-filter. The pre-filter filters the input data for the picture and the post-filter filters in-loop reconstructed data for the picture.
According to still another aspect of the present principles, there is provided an apparatus. The apparatus includes a video decoder for decoding residual image data for a picture. The video decoder
includes a pre-filter and a post-filter coupled to the pre-filter. The pre-filter filters a reference picture for use in decoding the residual image data and the post-filter filters in-loop
reconstructed data for the picture.
According to yet another aspect of the present principles, there is provided a method in a video decoder. The method includes decoding residual image data for a picture. The video decoder includes a
pre-filter and a post-filter coupled to the pre-filter. The pre-filter filters a reference picture for use in decoding the residual image data and the post-filter filters in-loop reconstructed data
for the picture.
These and other aspects, features and advantages of the present principles will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection
with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS [0014]
The present principles may be better understood in accordance with the following exemplary figures, in which:
FIG. 1 is a diagram showing a direct implementation of a 4×8 lapped transform and the equivalent implementation as a 4×4 pre-filter followed by a shifted 4×4 DCT operation;
FIG. 2 is a block diagram showing an exemplary video encoder with pre-processing filters and post-processing filters, in accordance with an embodiment of the present principles;
FIG. 3 is a block diagram showing an exemplary video decoder with pre-processing filters and post-processing filters, in accordance with an embodiment of the present principles;
FIG. 4 is a flow diagram showing an exemplary method for encoding image data involving separate luma and chroma pre-filtering, in accordance with an embodiment of the present principles;
FIG. 5 is a flow diagram showing an exemplary method for decoding image data involving separate luma and chroma post-filtering, in accordance with an embodiment of the present principles;
FIGS. 6A-6D are diagrams showing four possible choices for the function I (•,•) used to model predictive-coding (intra/inter), in accordance with an embodiment of the present principles;
FIG. 7 is a flow diagram showing an exemplary method for offline training of filter, adaptation, and enhancement parameters, in accordance with an embodiment of the present principles;
FIG. 8 is a flow diagram showing an exemplary method for performing post-filtering by minimizing the distance of a pre-filtered estimate to observed data, in accordance with an embodiment of the
present principles;
FIG. 9 is a flow diagram showing an exemplary method for encoding video data with pre-processing filtering and post-processing filtering, in accordance with an embodiment of the present principles;
FIG. 10 is a flow diagram showing an exemplary method for decoding video data with pre-processing filtering and post-processing filtering, in accordance with an embodiment of the present principles;
FIG. 11 is a flow diagram showing an exemplary method for encoding image data involving separate luma and chroma pre-filtering, in accordance with an embodiment of the present principles;
FIG. 12 is a flow diagram showing an exemplary method for decoding image data involving separate luma and chroma post-filtering, in accordance with an embodiment of the present principles;
FIG. 13 is a flow diagram showing an exemplary method for performing adaptive pre-filtering and post-filtering using singular value decomposition (SVD) in a lifting implementation, in accordance with
an embodiment of the present principles;
FIG. 14 is a flow diagram showing an exemplary method for deriving a lifting implementation of filters using matrix decomposition or Gaussian elimination, in accordance with an embodiment of the
present principles;
FIG. 15 is a flow diagram showing an exemplary method for training a rate-distortion optimizer, in accordance with an embodiment of the present principles;
FIG. 16 is a flow diagram showing an exemplary method for training a most probable mode predictor, in accordance with an embodiment of the present principles;
FIG. 17 is a flow diagram showing an exemplary method for efficiently implementing a post-filter in hardware, in accordance with an embodiment of the present principles; and
FIG. 18 is a flow diagram showing an exemplary method for determining integer implementations of original floating point adaptive filters, in accordance with an embodiment of the present principles.
DETAILED DESCRIPTION [0033]
The present principles are directed to methods and apparatus for adaptive coupled pre-processing and post-processing filters for video encoding and decoding.
The present description illustrates the present principles. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly
described or shown herein, embody the present principles and are included within its spirit and scope.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the present principles and the concepts contributed by the inventor(s) to
furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the present principles, as well as specific examples thereof, are intended to encompass both structural and functional
equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that
perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present principles.
Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in
computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate
software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be
shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without
limitation, digital signal processor ("DSP") hardware, read-only memory ("ROM") for storing software, random access memory ("RAM"), and non-volatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program
logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically
understood from the context.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of
circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to
perform the function. The present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the
manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
Reference in the specification to "one embodiment" or "an embodiment" of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and
so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment",
as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
It is to be appreciated that the use of any of the following "/", "and/or", and "at least one of", for example, in the cases of "A/B", "A and/or B" and "at least one of A and B", is intended to
encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of
"A, B, and/or C" and "at least one of A, B, and C", such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or
the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only,
or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in
this and related arts, for as many items listed.
Also, as used herein, the words "picture" and "image" are used interchangeably and refer to a still image or a picture from a video sequence. As is known, a picture may be a frame or a field.
Additionally, as used herein, the terms "pre-filter" and "pre-processing filter" are used interchangeably. Similarly, the terms "post-filter" and "post-processing filter" are used interchangeably
herein. It is to be appreciated that the present principles are applicable at the encoder and decoder.
Moreover, as used herein, the phrase "exact inverse", when used to describe a relationship between the pre-filter and the post-filter, refers to filter coefficients and filter parameters for the pre-filter and the post-filter being selected such that a filtering result obtained from the post-filter is an inverse of the filtering result obtained from the pre-filter. In other words, "exact inverse" describes the relationship between the pre-filter and post-filter such that, in the absence of any other processing (such as, e.g., quantization), the input of the pre-filter, processed by the pre-filter and then by the post-filter, is the same as the output of the post-filter.
Further, as used herein, the phrase "substantial inverse", when used to describe a relationship between the pre-filter and the post-filter, refers to filter coefficients and filter parameters for the pre-filter and the post-filter being selected such that a filtering result obtained from the post-filter is substantially an inverse of the filtering result obtained from the pre-filter. Similarly, when the post-filter is adaptive, "substantial inverse" refers to an adaptation parameter for the post-filter being selected so that small perturbations do not significantly impact the adaptation. This definition is related to the mathematical concept of stability: given a small variation of the input, the output variation is also small. For example, it can be expressed mathematically by stating that the norm of the difference of the outputs is smaller than the norm of the difference of the inputs times some constant. A typical example is a linear system, since variations of the input (for example, quantizing the data) imply variations of the output of the same order. The same idea is meant by "substantial inverse", which does not guarantee the "exact inverse" but is substantially close to it, where "substantially" can be expressed, for example, as the constant that relates the norms of the input and output differences, for any input acceptable in the system.
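As a small, self-contained illustration of these two notions (a sketch: the 4×4 pre-filter below is an arbitrary assumption, not one of the filters described in this application), one can verify numerically that a post-filter built as the matrix inverse of the pre-filter returns the original block exactly, and that quantization in between only perturbs the output by a bounded amount:

```python
import numpy as np

rng = np.random.default_rng(0)

P_pre = np.eye(4) + 0.2 * rng.standard_normal((4, 4))   # hypothetical pre-filter
P_post = np.linalg.inv(P_pre)                            # coupled post-filter (exact inverse)

x = rng.standard_normal(4)

# Exact inverse: without any processing in between, post(pre(x)) equals x.
assert np.allclose(P_post @ (P_pre @ x), x)

# Substantial inverse: with quantization in between, the output error stays of
# the same order as the quantization error (stability of the linear system).
step = 0.5
y_q = step * np.round((P_pre @ x) / step)
err_in = np.linalg.norm(y_q - P_pre @ x)
err_out = np.linalg.norm(P_post @ y_q - x)
print(f"input perturbation {err_in:.3f} -> output perturbation {err_out:.3f} "
      f"(bounded by ||P_post||_2 = {np.linalg.norm(P_post, 2):.3f} times the input perturbation)")
```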
For purposes of illustration and description, examples are described herein in the context of improvements over the MPEG-4 AVC Standard, using the MPEG-4 AVC Standard as the baseline for our
description and explaining the improvements and extensions beyond the MPEG-4 AVC Standard. However, it is to be appreciated that the present principles are not limited solely to the MPEG-4 AVC
Standard and/or extensions thereof. Given the teachings of the present principles provided herein, one of ordinary skill in this and related arts would readily understand that the present principles
are equally applicable and would provide at least similar benefits when applied to extensions of other standards, or when applied and/or incorporated within standards not yet developed. It is to be
further appreciated that the present principles also apply to video encoders and video decoders that do not conform to standards, but rather conform to proprietary definitions.
As noted above, the block-based coding approach used in the MPEG-4 AVC Standard does not effectively exploit the correlation existing at inter-transform block boundaries. To be clear, as used herein,
"inter-transform block" refers to a block of data processed by different transforms. Spatial correlation within a block is removed by the transform, that is, intra-transform data is de-correlated by
the transform. However, since transforms in MPEG-4 AVC do not overlap, the correlation between data belonging to different transform blocks is not removed properly. In accordance with the present
principles, we disclose methods and apparatus that define a coupled set of adaptive pre-processing filters and post-processing filters which exploit inter-transform block correlation and reduce
blocking artifacts observed in video coding. For example, the coupled set of filters may include one or more pre-processing filters coupled to one or more post-processing filters.
In an embodiment, the filter adaptation may be based on, for example, the "gradient" calculated for the input data and/or the "quantizer step size" and/or local data statistics such as "variance". At
least one embodiment is disclosed relating to designing and optimizing the pre-processing filters and post-processing filters to "work in concert with the prediction (either spatial or temporal)
mechanism". This joint design and optimization helps achieve better overall rate-distortion performance. Further, the pre-processing filters and post-processing filters help in the preservation of
edges, leading to better perceptual quality. The present principles also outline how to modify the existing video architecture, such as the rate-distortion optimizer and the most probable mode
predictor, in order to achieve better performance gains.
The goal of the pre-processing filters and post-processing filters is to exploit this inter-transform block correlation and therefore achieve higher coding efficiency. Additionally, since the
pre-processing filters and post-processing filters are applied across transform block boundaries, they help reduce blocking artifacts often seen in block-based video coding.
The pre-filter is typically applied to the original frame before the redundancy is removed using prediction. If a strong pre-filter is applied to the original frame, it would severely perturb the
very pixels used for prediction. The poor quality of the prediction would consequently reduce the compression efficiency of predictive-coding. Consequently, the overall encoder and decoder
compression efficiency would also be reduced. To consider and compensate for the impact of pre-filtering on predictive-coding, we disclose a new pre-filter design methodology. In an embodiment, the
pre-filter is adapted to the "gradient" calculated for the input data and/or the "quantizer step size" and/or local data statistics such as "variance" for better compression efficiency. A coupled
"adaptive" post-filter is designed to reverse the operations carried out by the pre-filter at the encoder. The performance of the system is further improved by modifying the rate-distortion optimizer
and the most probable mode predictor, to work in conjunction with the pre-processing filters and post-processing filters.
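The application does not fix a particular adaptation rule. Purely as a hypothetical sketch of the kind of adaptation described above (the thresholds, the gradient measure, and the blending rule are invented for illustration only, and the base filter matrix is an arbitrary assumption), a pre-filter strength could be derived from the local gradient, the local variance, and the quantizer step size as follows:

```python
import numpy as np

def adaptive_prefilter_strength(block, q_step, base_strength=0.2):
    """Hypothetical adaptation: weaken the pre-filter near strong edges
    (large local gradient) and strengthen it at coarse quantization."""
    block = block.astype(float)
    grad = np.mean(np.abs(np.diff(block, axis=-1)))          # local gradient estimate
    activity = np.sqrt(np.var(block)) + 1e-6                  # local activity (std dev)
    strength = base_strength * (q_step / 16.0)                 # more filtering at coarse QP
    strength *= 1.0 / (1.0 + grad / activity)                  # back off on edges
    return float(np.clip(strength, 0.0, 1.0))

def apply_prefilter(block, strength):
    """Blend between the identity (no filtering) and a fixed base pre-filter."""
    n = block.shape[-1]
    base = np.eye(n) + 0.25 * np.eye(n)[::-1]                  # fixed, invertible base filter (assumption)
    P = (1.0 - strength) * np.eye(n) + strength * base
    return P @ block, P                                        # P is derivable so the post-filter can invert it

block = np.array([10.0, 12.0, 11.0, 40.0])                     # sample row containing an edge
s = adaptive_prefilter_strength(block, q_step=32)
filtered, P = apply_prefilter(block, s)
print(s, filtered)
```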
Turning to FIG. 2, an exemplary video encoder with pre-processing filters and post-processing filters is indicated generally by the reference numeral 200. The video encoder 200 includes a pre-filter
205 having a first output connected in signal communication with a first input of a combiner 210. An output of the combiner is connected in signal communication with an input of a transformer (T)
215. An output of the transformer (T) 215 is connected in signal communication with an input of a quantizer (Q) 220. An output of the quantizer (Q) 220 is connected in signal communication with an
input of an entropy coder 225 and an input of an inverse quantizer (IQ) 230. An output of the inverse quantizer (IQ) 230 is connected in signal communication with an input of an inverse transformer
(IT) 235. An output of the inverse transformer (IT) 235 is connected in signal communication with a first non-inverting input of a combiner 240. An output of the combiner 240 is connected in signal
communication with a first input of a post-filter 245 and an input of an intra predictor 280. An output of the post-filter 245 is connected in signal communication with an input of a deblocking
filter 250 and an input of a loop filter 255. An output of the loop filter 255 is connected in signal communication with an input of a reference memory 260. A first output of the reference memory 260
is connected in signal communication with a first input of a motion/intra compensator 265. A second output of the reference memory 260 is connected in signal communication with a second input of a
motion estimator 270. An output of the motion estimator 270 is connected in signal communication with a second input of the motion/intra compensator 265. An output of the motion/intra compensator 265
is connected in signal communication with a first input of a pre-filter 275. An output of the pre-filter 275 is connected in signal communication with a first input of an intra/inter selector 285. An
output of the intra/inter selector 285 is connected in signal communication with a second non-inverting input of the combiner 210 and a second non-inverting input of the combiner 240. An output of
the intra predictor 280 is connected in signal communication with a second input of the intra/inter selector 285. An output of the entropy coder 225 is available as an output of the video encoder
200, for outputting a bitstream. An output of the deblocking filter 250 is available as an output of the video encoder 200, for outputting a reconstruction. An input of the pre-filter 205 is
available as an input of the video encoder, for receiving a video source and/or content related thereto. The pre-filter 205, the combiner 210, and the transformer (T) 215 form an equivalent forward
lapped transform 266 with respect to an equivalent inverse lapped transform 277 formed from the inverse transformer (IT) 235, the combiner 240, and the post-filter 245.
Turning to FIG. 3, an exemplary video decoder with pre-processing filters and post-processing filters is indicated generally by the reference numeral 300. The video decoder 300 includes an input
buffer 305 having an output connected in signal communication with an input of an entropy decoder 310. An output of the entropy decoder 310 is connected in signal communication with an input of an
inverse quantizer 315. An output of the inverse quantizer 315 is connected in signal communication with an input of an inverse transformer (IT) 320. An output of the inverse transformer (IT) 320 is connected in signal communication with a first non-inverting input of a combiner 325. An output of the combiner 325 is connected in signal communication with an input of a post-filter 330 and an input of an intra predictor 365. An output of the post-filter 330 is connected in signal communication with an input of a deblocking filter 335 and an input of a loop filter 340. An output of the loop filter 340 is connected in signal communication with an input of a reference memory 345. An output of the reference memory 345 is connected in signal communication with an
input of a motion/intra compensator 350. An output of the motion/intra compensator 350 is connected in signal communication with an input of a pre-filter 355. An output of the pre-filter 355 is
connected in signal communication with a first input of an intra/inter selector 360. An output of the intra/inter selector 360 is connected in signal communication with a second non-inverting input
of the combiner 325. An output of the intra predictor 365 is connected in signal communication with a second input of the intra/inter selector 360. An input of the input buffer 305 is available as an
input of the video decoder 300, for receiving an input bitstream. An output of the deblocking filter 335 is available as an output of the video decoder 300, for outputting one or more pictures
corresponding to the bitstream. The inverse transformer (IT) 320, the combiner 325, and the post-filter 330 form an equivalent inverse lapped transform 377.
Thus, FIG. 2 shows an embodiment of a video encoder that pre-filters the original video source. The coupled post-processing filter 245 is placed outside the coding loop for intra-coding, but inside
the coding loop for inter-coding. FIG. 3 shows the corresponding decoder, where the pre-processing filter 355 and post-processing filter 330 and their adaptation parameters are derived offline for
the luma and chroma components separately. Separate parameters are also obtained for different sequence resolutions.
We note that while the pre-filters (e.g., 205 and 275) are not physically coupled to the post-filter 245, the pre-filters 205, 275 and the post-filter 245 are "coupled" in that they have a
relationship such that the post-filter 245 operates in order to provide an output that is the same or as close as possible to the (pre-filtered) input to the pre-filter 205. That is, filter
coefficients and parameters for the post-filter 245 are selected such that the filtering operation performed by the post-filter 245 is substantially inverse to the filtering operation performed by
the pre-filters 205, 275.
Turning to FIG. 4, an exemplary method for encoding image data involving separate luma and chroma pre-filtering is indicated generally by the reference numeral 400. The method 400 includes a start
block 405 that passes control to a function block 407. The function block 407 inputs data (source/reconstruction data), and passes control to a decision block 410. The decision block 410 determines
whether or not the current component to be filtered is the luma component. If so, then control is passed to a function block 415 and a function block 435. Otherwise, control is passed to a function
block 440 and a function block 460. The function block 415 sets the luma adaptation parameter, and passes control to the function block 435. The function block 435 performs luma pre-filtering (using
the luma adaptation parameter set by the function block 415), and passes control to a function block 465. The function block 465 outputs pre-filtered data, and passes control to an end block 499. The
function block 440 sets the chroma adaptation parameter, and passes control to the function block 460. The function block 460 performs chroma pre-filtering (using the chroma adaptation parameter set
by the function block 440), and passes control to the function block 465. Regarding function block 415, the same sets the luma adaptation parameter based on encoder settings 420, a predictor mode
425, and a transform size and type 430. Regarding function block 440, the same sets the chroma adaptation parameter based on encoder settings 445, a predictor mode 450, and a transform size and type.
Turning to FIG. 5, an exemplary method for decoding image data involving separate luma and chroma post-filtering is indicated generally by the reference numeral 500. The method 500 includes a start
block 505 that passes control to a function block 507. The function block 507 inputs reconstruction data and passes control to a decision block 510. The decision block 510 determines whether or not
the current component to be filtered is the luma component. If so, then control is passed to a function block 515 and a function block 535. Otherwise, control is passed to a function block 540 and a
function block 560. The function block 515 sets the luma adaptation parameter, and passes control to the function block 535. The function block 535 performs luma post-filtering (using the luma
adaptation parameter set by the function block 515), and passes control to a function block 565. The function block 565 outputs post-filtered data, and passes control to an end block 599. The
function block 540 sets the chroma adaptation parameter, and passes control to the function block 560. The function block 560 performs chroma post-filtering (using the chroma adaptation parameter set
by the function block 540), and passes control to the function block 565. Regarding function block 515, the same sets the luma adaptation parameter based on encoder settings 520, a predictor mode
525, and a transform size and type 530. Regarding function block 540, the same sets the chroma adaptation parameter based on encoder settings 545, a predictor mode 550, and a transform size and type.
The parameter derivation begins with the design of a pre-filter. The time-domain representation of the pre-filter output is rewritten as follows:
$$y = D[\mathrm{Shift}(I(Px, \mathrm{Prev}(x)))] \quad (3)$$
where D is the block transform
(e.g., Discrete cosine transform (DCT), Karhunen Loeve transform (KLT), and/or Mode dependent directional transform (MDDT)), and Prev(x) is the data used in predicting x. The function I (•,•) is
modeled to capture the behavior of the predictor used for the coding scheme under consideration. Different models can be chosen to represent different predictors. Turning to FIGS. 6A-6D, four
possible choices for the function I (•,•) used to model predictive-coding (intra/inter) are indicated generally by the reference numerals 610, 620, 630, and 640, respectively. In particular, FIG. 6A
shows a choice 610 for the function where a line which minimizes the mean square error is fitted through the previous subset of pixels and extrapolated to obtain the prediction for the current
pixels. FIG. 6B shows a choice 620 for the function where the previous pixel is copied as a prediction for the current pixel. FIG. 6C shows a choice 630 for the function where the previous block is
copied as a prediction for the current block. FIG. 6D shows a choice 640 for the function where a line which minimizes the mean square error is fitted through the previous subset of pixels and
extrapolated to obtain the prediction for the current blocks. The choices 610 and 620 can be recursively applied to obtain the prediction for the entire x under consideration.
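For concreteness, the following sketch (an assumed, simplified 1-D realization of choices 620 and 610; the actual predictors in the codec are more elaborate) shows the two modeled behaviors: copying the previous reconstructed sample, and fitting a least-squares line through a previous subset of samples and extrapolating it:

```python
import numpy as np

def predict_copy_previous(prev, n_out):
    """Choice 620/630: copy the last previous sample (or block) as the prediction."""
    return np.full(n_out, float(prev[-1]))

def predict_ls_extrapolate(prev, n_out):
    """Choice 610/640: fit a least-squares line through the previous samples
    and extrapolate it to obtain the prediction for the current samples."""
    t = np.arange(len(prev), dtype=float)
    slope, intercept = np.polyfit(t, np.asarray(prev, dtype=float), 1)
    t_out = np.arange(len(prev), len(prev) + n_out, dtype=float)
    return slope * t_out + intercept

prev = np.array([100.0, 102.0, 103.0, 105.0])
print(predict_copy_previous(prev, 4))       # flat prediction
print(predict_ls_extrapolate(prev, 4))      # linear extrapolation
```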
For the pre-filtering approach in the MPEG-4 AVC Standard, the variance $\sigma_{y_i}^{2}$ is calculated as follows:

$$\sigma_{y_i}^{2} = \mathrm{Var}\!\left[ D[\mathrm{Shift}(I(Px, \mathrm{Prev}(x)))] \right]_{i} \quad (4)$$
The overall MPEG-4 AVC Standard pre-filter design problem can now be stated as follows:

$$P^{*} = \arg\max_{P} \left\{ \prod_{i=1}^{M} \left[ \left( \frac{\mathrm{Var}\!\left[ D[\mathrm{Shift}(I(Px, \mathrm{Prev}(x)))] \right]_{i}}{\sigma_{x}^{2}} \right) \left\lVert \big( D[\mathrm{Shift}(I(P\,\cdot, \mathrm{Prev}(\cdot)))] \big)_{i}^{-1} \right\rVert^{2} \right] \right\}^{-1/M} \quad (5)$$
We approximate the above problem by the following (which corresponds to the assumption that the distortion increase in reconstruction due to the "inverse DCT" and "predictive-reconstruction" is a
fixed multiple):
$$P^{*} \approx \arg\max_{P} \left\{ \prod_{i=1}^{M} \left[ \left( \frac{\mathrm{Var}\!\left[ D[\mathrm{Shift}(I(Px, \mathrm{Prev}(x)))] \right]_{i}}{\sigma_{x}^{2}} \right) \left\lVert P_{i}^{-1} \right\rVert^{2} \right] \right\}^{-1/M} \quad (6)$$
In this way, in at least one implementation, a method of designing filters in accordance with the present principles incorporates the impact of pre-filtering on predictive-coding efficiency.
Solutions obtained by solving Equation (6) can be further improved by using them as seeds for an evolutionary optimization algorithm.
Next, system blocks such as the rate-distortion optimizer (which performs coding mode selection) and the most probable mode predictor are modified and trained to work in concert with the designed
pre-processing filters and post-processing filters. The modification to the rate-distortion optimizer is an encoder only modification.
The pre-processing filter parameters and post-processing filter parameters, adaptation parameters and system parameters are obtained by maximizing an objective function, e.g., coding gain, energy
compaction, frequency response, regularity in the basis, or a linear combination of some or all the above objectives. All the training and determination of parameters is performed offline on a
representative sequence subset. Once the training is complete, these parameters can be used for all video sequences.
Turning to FIG. 7, an exemplary method for offline training of filter, adaptation, and enhancement parameters is indicated generally by the reference numeral 700. The method 700 includes a start
block 705 that passes control to a function block 710. The function block 710 inputs training data, and passes control to a function block 715. The function block 715 determines the filter,
adaptation, and enhancement parameters based on the training data, and passes control to a function block 720. The function block 720 stores the parameters, and passes control to an end block 799.
The post-filter is coupled with the pre-filter to invert the processing carried out by the pre-filter. For non-adaptive fixed filters, the post-filter is the exact inverse of the pre-filter. Fixed
filtering gives performance improvements over no filtering, but this performance improvement can be further enhanced by filter adaptation.
We now discuss the possible variants and embodiments for pre-filter adaptations and post-filter adaptations.
If the adaptation in the pre-filter is based on the original data, then the post-filter cannot be a perfect inverse, due to the non-availability of the original data (e.g., at the decoder side). In
such a case, the post-filter estimates the adaptation based on the data that is input to the post-filter. Two novel post-filtering embodiments are possible as follows.
In the first embodiment, we estimate the original data vector which, when adaptively pre-filtered, provides an output vector closest to the current observation. This estimation can be carried out
using either convex or non-convex optimization.
Turning to FIG. 8, an exemplary method for performing post-filtering by minimizing the distance of a pre-filtered estimate to observed data is indicated generally by the reference numeral 800. The
method 800 includes a start block 805 that passes control to a function block 810. The function block 810 receives an input observation, and passes control to a function block 815. The function block
815 makes a guess x, and passes control to a function block 820. The function block 820 performs adaptive filtering, and passes control to a function block 825. The function block 825 measures a
distance from the guess to the observation, and passes control to a decision block 845. The decision block 845 determines whether or not there is an improvement from the previous guess (as determined
from the distance). If so, then control is returned to the function block 815. Otherwise, control is passed to a function block 850. The function block 850 outputs the closest guess to the
observation, and passes control to an end block 899. Regarding function block 820, the same performs the adaptive filtering based on encoder settings 835, a predictor mode 830, and a transform size
and type 840.
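A minimal sketch of this first embodiment follows (assumptions for illustration: a hypothetical adaptive pre-filter whose strength depends on the data being filtered, and a generic numerical optimizer in place of whatever solver an actual decoder would use). The post-filter estimates the original vector whose adaptively pre-filtered version is closest to the observation:

```python
import numpy as np
from scipy.optimize import minimize

def adaptive_prefilter(x, strength_fn):
    """Hypothetical adaptive pre-filter: blend the identity with a fixed coupling
    matrix, with a strength that depends on the data itself."""
    n = len(x)
    base = np.eye(n) + 0.3 * np.eye(n)[::-1]
    s = strength_fn(x)
    return ((1 - s) * np.eye(n) + s * base) @ x

strength_fn = lambda x: 1.0 / (1.0 + np.var(x))          # assumed adaptation on the original data

rng = np.random.default_rng(1)
x_true = rng.standard_normal(4)
observation = adaptive_prefilter(x_true, strength_fn)     # what the post-filter receives

# Post-filtering: find the guess whose pre-filtered version is closest to the observation.
objective = lambda x: np.sum((adaptive_prefilter(x, strength_fn) - observation) ** 2)
result = minimize(objective, x0=observation, method='Nelder-Mead')
print("recovered:", result.x)
print("true:     ", x_true)
```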
In the second embodiment, we estimate the "adaptive pre-filter which was used" based on the observed data vector and calculate the inverse matrix corresponding to the post-filter transformation. This
inverse matrix is then used to perform post-filtering. The selection of the pre-filter depends adaptively on the input data (and possibly other data such as, for example, the predictor mode or the
encoding settings). Therefore, the selected pre-filter has to be deduced in order to apply the right post-filter to invert the pre-filter process; that is, an estimation of the pre-filter that was used has to be performed. To deduce or estimate that pre-filter and apply the proper post-filter, the decoder has the filtered quantized data available; that is, the decoder can observe and analyze the aforementioned data (e.g., the input data, other data such as the predictor mode and encoding settings, and the filtered quantized data) to estimate the employed pre-filter and then calculate the inverse matrix corresponding to the post-filter.
The above mentioned approaches to post-filtering make the post-filter an approximate inverse of the pre-filter.
Turning to FIG. 9, an exemplary method for encoding video data with pre-processing filtering and post-processing filtering is indicated generally by the reference numeral 900. The method 900 includes
a start block 905 that passes control to a function block 910. The function block 910 inputs video source data, and passes control to a function block 915, a function block 965, and a function block
960. The function block 915 pre-filters the video source data, and passes control to a function block 920. The function block 920 calculates a residue (e.g., the difference between an original
picture from the video source data and a reference picture, the reference picture also known as a prediction) for the video source data, and passes control to a function block 925. The function block
925 applies a transform to the residue to obtain coefficients there for, and passes control to a function block 930 and the function block 965. The function block 930 quantizes the coefficients to
obtain quantized coefficients, and passes control to a function block 935 and the function block 965. The function block 935 inverse quantizes the quantized coefficients to obtain inverse quantized
coefficients, and passes control to a function block 940, a function block 975, and a function block 980. The function block 940 applies an inverse transform to the inverse quantized coefficients to
obtain a reconstructed residue, and passes control to a function block 945, the function block 975, and the function block 980. The function block 945 adds the reconstructed residue to the prediction
to obtain a reconstructed picture, and passes control to a function block 950, the function block 975, and a function block 970. The function block 950 performs post-filtering with respect to the
reconstructed picture to obtain a reconstructed and post-filtered picture, and passes control to a function block 955. The function block 955 performs pre-filtering of the reconstructed and
post-filtered picture, and passes control to the function block 960. The function block 960 performs inter-prediction, and returns control to the function block 920, the function block 945, the
function block 975, the function block 965, and the function block 980. The function block 965 performs an encoder adaptation involving the pre-filter selection, and returns control to the function
block 915. The function block 970 performs an intra prediction, and returns control to the function block 980, the function block 945, the function block 975, the function block 920, and the function
block 965. The function block 985 performs deblocking filtering, and passes control to a function block 990. The function block 990 outputs a reconstruction, and passes control to an end block 999.
The function block 975 mimics a decoder adaptation, and passes control to the function block 950. The function block 980 mimics a decoder adaptation, and passes control to a function block 955.
Turning to FIG. 10, an exemplary method for decoding video data with pre-processing filtering and post-processing filtering is indicated generally by the reference numeral 1000. The method 1000
includes a start block 1005 that passes control to a function block 1010. The function block 1010 inputs a bitstream, and passes control to a function block 1015. The function block 1015 inverse
quantizes the quantized residue in the bitstream to obtain coefficients there for, and passes control to a function block 1020, a function block 1055, and a function block 1060. The function block
1020 inverse transforms the coefficients to obtain a residue (e.g., for a picture in the bitstream), and passes control to a function block 1025, the function block 1055, and the function block 1060.
The function block 1025 adds the residue to a prediction to obtain a reconstructed picture, and passes control to a function block 1030, the function block 1055, and a function block 1045. The
function block 1030 performs post-filtering with respect to the reconstructed picture to obtain a reconstructed and post-filtered picture, and passes control to a function block 1035, the function
block 1060, and a function block 1050. The function block 1035 performs pre-filtering of the reconstructed and post-filtered picture, and passes control to a function block 1040. The function block
1040 performs inter-prediction, and returns control to the function block 1060, the function block 1055, and the function block 1025. The function block 1045 performs intra-prediction, and passes
control to the function block 1025, the function block 1060, and the function block 1055. The function block 1050 performs deblocking filtering, and passes control to a function block 1065. The
function block 1065 provides a decoded output, and passes control to an end block 1099.
In another embodiment, if the pre-filters and post-filters are implemented using a lifting scheme, then the adaptation can be embedded within the lifting mechanism. In such a case the adaptation at
the pre-filter can be perfectly reversed at the post-filter.
Turning to FIG. 11, an exemplary method for encoding image data involving separate luma and chroma pre-filtering with a lifting scheme is indicated generally by the reference numeral 1100. The method
1100 includes a start block 1105 that passes control to a function block 1107. The function block 1107 inputs data (source/reconstruction data), and passes control to a decision block 1110. The
decision block 1110 determines whether or not the current component to be filtered is the luma component. If so, then control is passed to a function block 1115. Otherwise, control is passed to a
function block 1140. The function block 1115 performs luma pre-filtering with a lifting implementation, and passes control to a function block 1165. The function block 1165 outputs pre-filtered data,
and passes control to an end block 1199. The function block 1140 performs chroma pre-filtering with a lifting implementation, and passes control to the function block 1165. Regarding function block
1115, the same performs the luma pre-filtering with the lifting implementation and with an adaptation based on encoder settings 1120, a predictor mode 1125, and a transform size and type 1130.
Regarding function block 1140, the same performs the chroma pre-filtering with the lifting implementation and with an adaptation based on encoder settings 1145, a predictor mode 1150, and a transform
size and type 1155.
Turning to FIG. 12, an exemplary method for decoding image data involving separate luma and chroma post-filtering is indicated generally by the reference numeral 1200. The method 1200 includes a
start block 1205 that passes control to a function block 1207. The function block 1207 inputs reconstruction data, and passes control to a decision block 1210. The decision block 1210 determines
whether or not the current component to be filtered is the luma component. If so, then control is passed to a function block 1215. Otherwise, control is passed to a function block 1260. The function
block 1215 performs luma post-filtering with a lifting implementation, and passes control to a function block 1265. The function block 1265 outputs post-filtered data, and passes control to an end
block 1299. The function block 1260 performs chroma post-filtering with a lifting implementation, and passes control to the function block 1265. Regarding function block 1215, the same performs the
luma post-filtering with the lifting implementation and with an adaptation based on encoder settings 1220, a predictor mode 1225, and a transform size and type 1230. Regarding function block 1260,
the same performs the chroma post-filtering with the lifting implementation and with an adaptation based on encoder settings 1245, a predictor mode 1250, and a transform size and type 1255.
The lifting implementation can be carried out by first decomposing a fixed pre-filter transform matrix using singular value decomposition. The singular value decomposition will yield a product of
three matrices. The first and the third matrix are orthogonal matrices. As it is known, every orthogonal matrix can be decomposed into a series of Givens plane rotations (the number of such rotations
is related to the dimension of the matrix, where we preferably employ the mathematical operation "N choose 2", that is, if n is the dimension of the matrix, then the number of rotations is n*(n-1)/
2). Each plane rotation, in turn, can be implemented using three lifting steps as follows: Predict-Update-Predict. The second matrix obtained from the singular value decomposition is a diagonal
matrix and corresponds to scaling. A novel adaptation technique which can be used in the lifting implementation is to change the values of the diagonal matrix. The values are changed in a way that
such values can be exactly or approximately reversed at the decoder. The post-filter lifting implementation is obtained by reversing the individual lifting steps carried out by the pre-filter.
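By way of illustration only, the following Python sketch (using NumPy and an arbitrary made-up 4x4 base pre-filter matrix, not any particular embodiment) walks through the chain just described: singular value decomposition, reduction of an orthogonal factor to Givens plane rotations, and the predict-update-predict lifting coefficients of each rotation, with the adaptation confined to the diagonal factor.

import numpy as np

def givens_rotations(Q):
    # Reduce orthogonal Q to (nearly) the identity with n*(n-1)/2 plane
    # rotations; the returned (j, i, theta) triples, inverted and applied in
    # reverse order, rebuild Q up to signs on the diagonal.
    Q = Q.copy()
    n = Q.shape[0]
    rotations = []
    for j in range(n - 1):
        for i in range(j + 1, n):
            theta = np.arctan2(Q[i, j], Q[j, j])
            c, s = np.cos(theta), np.sin(theta)
            G = np.eye(n)
            G[j, j] = G[i, i] = c
            G[j, i], G[i, j] = s, -s
            Q = G @ Q                      # zeroes out Q[i, j]
            rotations.append((j, i, theta))
    return rotations

def lifting_steps(theta):
    # One plane rotation expressed as three lifting steps: predict, update, predict.
    c, s = np.cos(theta), np.sin(theta)
    if abs(s) < 1e-12:
        return 0.0, 0.0, 0.0               # (near-)identity rotation, nothing to do
    p = (c - 1.0) / s
    return p, s, p

T = np.array([[1.0, 0.2, 0.0, 0.0],        # hypothetical base pre-filter matrix
              [0.2, 1.0, 0.2, 0.0],
              [0.0, 0.2, 1.0, 0.2],
              [0.0, 0.0, 0.2, 1.0]])
U, sing, Vt = np.linalg.svd(T)             # T = U @ diag(sing) @ Vt
adapted = U @ np.diag(0.9 * sing) @ Vt     # adaptation: rescale the diagonal factor only
for j, i, theta in givens_rotations(U):    # a 4x4 orthogonal factor yields 6 rotations
    print((j, i), lifting_steps(theta))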
Turning to FIG. 13, an exemplary method for performing adaptive pre-filtering and post-filtering using singular value decomposition (SVD) in a lifting implementation is indicated generally by the
reference numeral 1300. The method 1300 includes a start block 1305 that passes control to a function block 1310. The function block 1310 inputs a base pre-filter transformation matrix, and passes
control to a function block 1315. The function block 1315 performs singular value decomposition (SVD), and passes control to a function block 1320, a function block 1325, and a function block 1330.
The function block 1320 receives a first orthogonal matrix as a first output of the SVD (performed by the function block 1315), and passes control to a function block 1335. The function block 1325
receives a second diagonal matrix as a second output of the SVD (performed by the function block 1315), and passes control to a function block 1345. The function block 1330 receives a third
orthogonal matrix as a third output of the SVD (performed by the function block 1315), and passes control to a function block 1365. The function block 1335 decomposes the first orthogonal matrix into
Givens plane rotations, and passes control to a function block 1340. The function block 1340 implements each rotation as a predict-update-predict lifting step, and passes control to the function block
1345. The function block 1345 determines an adaptation for the elements of the diagonal matrix, and passes control to a function block 1350. The function block 1350 concatenates the lifting steps of
1340, the scaling matrix with the adaptation elements of 1345 and the lifting steps of 1370 to get a complete lifting implementation of a pre-filter, and passes control to a function block 1355. The
function block 1355 inverts the individual lifting step and adaptation to get a post-filter lifting implementation, and passes control to a function block 1360. The function block 1360 stores the
lifting implementations for the filters, and passes control to an end block 1399. The function block 1365 decomposes the third orthogonal matrix into Givens plane rotations, and passes control to a
function block 1370. The function block 1370 implements each rotation as a predict-update-predict lifting step, and passes control to the function block 1345.
Other novel ways of determining lifting implementations of a fixed pre-filter and post-filter involve matrix decompositions such as QR decomposition and PLUS decomposition. Using Gaussian elimination
(with row transformations only), one can reduce the pre-filter transform matrix to the identity matrix. Each row transformation is a lifting step. These lifting steps, when concatenated, lead to the
post-filter lifting implementation. Reversing the steps in the post-filter will give the pre-filter implementation.
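As a rough illustration (a 2x2 toy case with an assumed rotation-like pre-filter block, not any specific embodiment), the Python sketch below reduces a determinant-1 matrix to the identity using only row operations of the form "row i += m * row j". Each such operation is one lifting step; concatenating them yields the inverse (post-filter) exactly, and undoing them in reverse order recovers the pre-filter, even in finite precision.

import numpy as np

def lifting_steps_2x2(T, tol=1e-12):
    # Row-operation (shear) factorization of a 2x2 matrix with det(T) == 1.
    A = T.astype(float).copy()
    steps = []
    def shear(i, j, m):
        A[i, :] += m * A[j, :]
        steps.append((i, j, m))
    if abs(A[1, 0]) < tol:
        shear(1, 0, 1.0)                    # make A[1,0] usable as a helper pivot
    shear(0, 1, (1.0 - A[0, 0]) / A[1, 0])  # force A[0,0] == 1
    shear(1, 0, -A[1, 0])                   # clear A[1,0]; det == 1 then gives A[1,1] == 1
    shear(0, 1, -A[0, 1])                   # clear A[0,1]  ->  identity
    return steps

theta = 0.3                                 # hypothetical pre-filter block
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
steps = lifting_steps_2x2(T)

post = np.eye(2)
for i, j, m in steps:                       # concatenated shears: the post-filter
    post[i, :] += m * post[j, :]
pre = np.eye(2)
for i, j, m in reversed(steps):             # reversed shears: the pre-filter again
    pre[i, :] -= m * pre[j, :]
print(np.allclose(post @ T, np.eye(2)), np.allclose(pre, T))   # True True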
Turning to FIG. 14, an exemplary method for deriving a lifting implementation of filters using matrix decomposition or Gaussian elimination is indicated generally by the reference numeral 1400. The
method 1400 includes a start block 1405 that passes control to a function block 1410. The function block 1410 inputs a base pre-filter transformation matrix, and passes control to a function block
1415 and a function block 1425. The function block 1415 obtains a lifting implementation of a pre-filter from matrix decompositions (of the base pre-filter transformation matrix input by function
block 1410), and passes control to a function block 1420. The function block 1425 obtains a lifting implementation of a post-filter using Gaussian elimination (row transformations, with respect to
the base pre-filter transformation matrix input by function block 1410), and passes control to a function block 1430. The function block 1420 inverts an individual lifting step and adaptation to
obtain a post-filter lifting implementation, and passes control to a function block 1435. The function block 1430 inverts an individual lifting step and adaptation to obtain a pre-filter lifting
implementation, and passes control to the function block 1435. The function block 1435 stores the lifting implementation for the filters, and passes control to an end block 1499. Thus, regarding
method 1400, adaptation can then be carried out in one or more lifting steps (in a perfectly reversible or approximately reversible fashion).
The filter adaptation which is derived from input data will be based on characteristics such as data gradient strength and/or data gradient direction and/or data variance. This novel adaptation can
be made discrete by partitioning the gradient strength and/or gradient direction and/or gradient variance into disjoint ranges and applying fixed filters for different ranges. This can be thought of
as adaptation based on a lookup table. In another embodiment, this adaptation is a continuous function of gradient strength and/or gradient direction and/or data variance. A choice of gradient
adaptation function which provides good performance is a function which exponentially decreases as the gradient increases. Gradient can be calculated as the difference between pixels being
pre-filtered. For a more accurate measure of the gradient, the pixel difference may include pixels beyond the pre-filter boundary, weighted appropriately. Since we are interested in detecting edges
using the gradient, considering only the magnitude of the gradient (rather than its sign) improves the ability to detect edges.
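Purely as an illustration of the adaptation functions just described (the decay constant, thresholds and strengths below are made-up values, not parameters taken from any embodiment), the continuous and the lookup-table variants can be sketched in Python as:

import numpy as np

def adaptation_continuous(left_pixels, right_pixels, k=0.05):
    # Continuous adaptation: the weight decreases exponentially with the
    # magnitude of the local gradient across the boundary being filtered.
    grad = np.mean(np.abs(left_pixels.astype(float) - right_pixels.astype(float)))
    return float(np.exp(-k * grad))          # strong edge -> weight near 0

def adaptation_lookup(left_pixels, right_pixels,
                      thresholds=(8.0, 32.0), strengths=(1.0, 0.5, 0.1)):
    # Discrete variant: partition gradient magnitude into disjoint ranges
    # and pick a fixed filter strength per range (a lookup table).
    grad = np.mean(np.abs(left_pixels.astype(float) - right_pixels.astype(float)))
    return strengths[sum(grad > t for t in thresholds)]

left = np.array([100, 101, 99, 100])
right = np.array([140, 138, 141, 139])       # a strong edge between the blocks
print(adaptation_continuous(left, right), adaptation_lookup(left, right))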
The process of adaptation is further enhanced by the following:
Choosing different adaptive pre-filters and post-filters for the horizontal and vertical directions.
Applying the adaptive pre-filters and post-filters selectively:
To the transform block boundary only;
To the transform block boundary and within the transform block; and/or
Turning off the pre-filter and post-filter.
One novel feature of the adaptive pre-filter and post-filter is the ability to work with different transforms (discrete cosine transform (DCT), Karhunen-Loeve transform (KLT), mode-dependent
directional transform (MDDT)) and different transform sizes. Previous work restricted the pre-filter and post-filter combination to work with a single transform and transform size. FIGS. 4 and 5
described above illustrate an embodiment relating to the aforementioned adaptation. For example, referring to FIGS. 4 and 5, it can be seen that the pre-processing filter and post-processing filters
adapt their behavior according to several inputs including encoder settings, prediction mode, transform type/size and the input data.
System Enhancements
The performance of the pre-filter and post-filter can be improved by modifying certain encoder and decoder functions and making such functions work in harmony with the filtering process. The choice
of different pre-filters and post-filters for the luma and chroma components changes the coding efficiency of the two channels differently. This change in coding efficiency implies that the
rate-distortion optimizer needs to be modified in order to reflect the new coding efficiency and make better mode decisions. One simplistic way of reflecting this change in coding efficiency is to
change the Lagrangian parameter for luma and chroma in the encoder and/or decoder. Changing the Lagrangian parameter to a function which accurately reflects the coding efficiency significantly
improves the overall compression efficiency.
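As a purely illustrative encoder-side sketch (the base lambda model is the commonly used H.264/AVC-style form, and the per-channel scale factors are assumptions rather than values from this description), the rate-distortion cost could weight luma and chroma distortion differently:

def rd_cost(dist_luma, dist_chroma, rate_bits, qp,
            luma_weight=1.0, chroma_weight=0.8):
    # The per-channel weights are the knobs that would be re-tuned to reflect
    # the change in coding efficiency introduced by the pre-/post-filters.
    lam = 0.85 * 2.0 ** ((qp - 12) / 3.0)    # assumed H.264-style base lambda
    return luma_weight * dist_luma + chroma_weight * dist_chroma + lam * rate_bits

print(rd_cost(dist_luma=1200.0, dist_chroma=300.0, rate_bits=850, qp=27))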
Turning to FIG. 15, an exemplary method for training a rate-distortion optimizer is indicated generally by the reference numeral 1500. The method 1500 includes a start block 1505 that passes control
to a function block 1510. The function block 1510 inputs training data, and passes control to a function block 1515. The function block 1515 performs pre-/post-filter based encoding and iteratively
tunes the rate-distortion optimizer to obtain the best performance (e.g., based on a rate-distortion cost), and passes control to a function block 1520. The function block 1520 saves the rate
distortion parameters, and passes control to an end block 1599. It is to be appreciated that method 1500 pertains to the encoder side only and, thus, does not impact the decoder.
The coding mode predictor used in the MPEG-4 AVC Standard video encoder or decoder makes a guess on the most probable coding mode for the current block based on blocks coded in the past. Often the
blocks on which the predictions are based are spatially or temporally adjacent to the current block. If the predictor makes an accurate guess for the best coding mode for the current block, a minimum
number of bits is used to code this mode. An accurate predictor can lead to significant bitrate savings. The pre-filtering and post-filtering processes impact the prediction process, making it less
accurate. Using a subset of sequences for training, we can re-design the mode predictor to be more accurate under the filtering.
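For illustration only (the layout of the training data is an assumption), a maximum-likelihood re-design of the most probable mode predictor can be sketched in Python as a table of counts over the modes of neighbouring blocks:

from collections import Counter, defaultdict

def train_most_probable_mode(samples):
    # samples: (left_mode, above_mode, chosen_mode) triples collected while
    # encoding training sequences with the pre-/post-filters enabled.
    counts = defaultdict(Counter)
    for left, above, chosen in samples:
        counts[(left, above)][chosen] += 1
    # Maximum-likelihood predictor: the mode observed most often per context.
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

predictor = train_most_probable_mode(
    [(0, 0, 0), (0, 0, 0), (0, 0, 2), (1, 1, 1), (1, 1, 1)])
print(predictor[(0, 0)], predictor[(1, 1)])   # -> 0 1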
Turning to FIG. 16, an exemplary method for training a most probable mode predictor is indicated generally by the reference numeral 1600. The method 1600 includes a start block 1605 that passes
control to a function block 1610. The function block 1610 performs pre-/post-filter based encoding and collects statistics on prediction modes selected for the current and previous blocks, and passes
control to a function block 1620. The function block 1620 determines a new most probable mode predictor based on the statistics collected, and passes control to a function block 1625. The function
block 1625 stores the most probable mode predictor configuration parameters, and passes control to an end block 1699. It is to be appreciated that the most probable mode predictor can be re-designed
based on maximum likelihood estimator, maximum a posteriori estimator, or any other suitable estimator.
Speeding Up Post-Filter Hardware Implementation:
Often the adaptation process in the pre-filter is as simple as scaling the pre-filter parameters by a floating point value, for example, s. An example for a 4×4 pre-filter is shown below as follows:
$$\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix} = \Bigg[\, I + \underbrace{s}_{\text{scaling}}\;\underbrace{\begin{bmatrix} p_{00} & p_{01} & -p_{01} & -p_{00} \\ p_{10} & p_{11} & -p_{11} & -p_{10} \\ -p_{10} & -p_{11} & p_{11} & p_{10} \\ -p_{00} & -p_{01} & p_{01} & p_{00} \end{bmatrix}}_{\text{pre-filter parameters}} \,\Bigg] \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} = P(s)\,x \qquad (7)$$
A hardware implementation of the post-filter which reverses the adaptive pre-filter has to change its inversion logic based on this adaptation. A direct "matrix inverse" of the above expression leads
to a 4×4 matrix where every term of the matrix needs to be changed based on the scaling parameter s. A better way would be to isolate the scaling parameter to as few matrix elements as possible.
Towards this end, the following novel post-filter is proposed:
$$P(s)^{-1} = I_{4\times 4} + Q(s) \qquad (8)$$
We can decompose Q(s=const), where the constant is typically chosen as 1, using singular value decomposition into three matrices. The first matrix ($U_i$) and the third matrix ($V_i$) are orthogonal matrices. We can now absorb any variation in the scaling parameter s in the second matrix of the singular value decomposition as follows:
$$P(s)^{-1} = I_{4\times 4} + U_i \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & m_{22}s + c_{22} & m_{23}s + c_{23} \\ 0 & 0 & m_{32}s + c_{32} & m_{33}s + c_{33} \end{bmatrix} V_i \qquad (9)$$
The parameters $m_{22}$, $m_{23}$, $m_{32}$, $m_{33}$ and $c_{22}$, $c_{23}$, $c_{32}$, $c_{33}$ are constants determined uniquely for a given set of filter parameters $p_{00}$, $p_{01}$, $p_{10}$, $p_{11}$. The advantage to a hardware implementation of this post-filter is the fact that the only varying part is restricted to four elements; the rest of the elements are constants.
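A small numerical experiment (with made-up filter parameters p00, p01, p10, p11 following the pattern of Equation (7); this is only a way to probe the structure, not an implementation of any embodiment) can show how the s-dependence of the post-filter distributes over the middle matrix when the outer SVD factors are held fixed:

import numpy as np

p00, p01, p10, p11 = 0.15, 0.05, 0.05, 0.20        # hypothetical values
F = np.array([[ p00,  p01, -p01, -p00],
              [ p10,  p11, -p11, -p10],
              [-p10, -p11,  p11,  p10],
              [-p00, -p01,  p01,  p00]])

def Q(s):
    # Q(s) = P(s)^{-1} - I  for the adaptive pre-filter P(s) = I + s*F.
    return np.linalg.inv(np.eye(4) + s * F) - np.eye(4)

U, _, Vt = np.linalg.svd(Q(1.0))                   # outer factors fixed at s = 1
for s in (0.5, 1.0, 1.5):
    middle = U.T @ Q(s) @ Vt.T                     # middle matrix for this s
    print(f"s = {s}:")
    print(np.round(middle, 4))                     # inspect which entries carry the s-dependence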
Turning to FIG. 17, an exemplary method for efficiently implementing a post-filter in hardware is indicated generally by the reference numeral 1700. The method 1700 includes a start block 1705 that
passes control to a function block 1710. The function block 1710 inputs reconstruction data, and passes control to a decision block 1715. The decision block 1715 determines whether or not the current
component to be filtered is the luma component. If so, then control is passed to a function block 1720, a function block 1725, and a function block 1740. Otherwise, control is passed to a function
block 1745, a function block 1750, and a function block 1767. The function block 1720 obtains a fixed transformation corresponding to the third matrix of SVD (luma), and passes control to a function
block 1730. The function block 1745 obtains a fixed transformation corresponding to the third matrix of SVD (chroma), and passes control to a function block 1755. The function block 1730 obtains a
second matrix of SVD (luma), and passes control to a function block 1735. The function block 1735 obtains a fixed transformation corresponding to the first matrix of SVD (luma), and passes control to
a function block 1740. The function block 1740 performs an add operation that adds the reconstructed luma data to the transformed data of block 1735, and passes control to a function block 1797. The
function block 1755 obtains a second matrix of SVD (chroma), and passes control to a function block 1760. The function block 1760 obtains a fixed transformation corresponding to the first matrix of
SVD (chroma), and passes control to a function block 1767. The function block 1767 performs an add operation that adds the reconstructed chroma data to the transformed data of block 1760, and passes
control to a function block 1797. The function block 1797 outputs post-filtered data, and passes control to an end block 1799. The function block 1725 sets the luma adaptation parameter, and passes
control to the function block 1730. The function block 1750 sets the chroma adaptation parameter, and passes control to the function block 1755. Regarding function block 1725, the same sets the luma
adaptation parameter based on encoder settings 1770, a predictor mode, 1775, and a transform size and type 1780. Regarding function block 1750, the same sets the chroma adaptation parameter based on
encoder settings 1785, a predictor mode 1790, and a transform size and type 1795.
A software approach to speeding up filter implementations is to approximate the adaptive filters using integer implementations. In an embodiment, under integer implementation, all the floating point
operations carried out during pre-filtering and post-filtering are converted to integer multiplications and bit shifts. The bit shifts will restrict integer divisions to division by a power of 2. The
adaptive pre-filter of Equation (7) can be implemented by converting the filter parameter matrix multiplication into integer multiplications followed by bit shifts. Next, the scaling parameter s can be
applied to the obtained data prior to adding the obtained data to the original data. For the adaptive post-filter equation, we determine an integer implementation which is close to the inverse
of the integer pre-filter and at the same time close to the original floating point inverse of P(s).
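For illustration only (the bit depths and filter values below are assumptions), an integer version of the adaptive pre-filter of Equation (7) might look as follows in Python: the floating point matrix is pre-scaled, rounded, and applied with integer multiplies and right shifts, with the scaling parameter s kept as a separate fixed-point multiply.

import numpy as np

SHIFT = 6                                     # matrix scaled by 2**6
F = np.array([[ 0.15,  0.05, -0.05, -0.15],   # hypothetical filter parameters (Equation (7) pattern)
              [ 0.05,  0.20, -0.20, -0.05],
              [-0.05, -0.20,  0.20,  0.05],
              [-0.15, -0.05,  0.05,  0.15]])
F_INT = np.round(F * (1 << SHIFT)).astype(np.int64)

def prefilter_int(x, s_q8):
    # x: integer pixels; s_q8: adaptation parameter s in Q8 fixed point.
    # Note: '>>' floors, so a real implementation would add rounding offsets.
    t = (F_INT @ x) >> SHIFT                  # integer multiply + bit shift
    return x + ((s_q8 * t) >> 8)              # scale by s, then add to the input

x = np.array([100, 102, 98, 97], dtype=np.int64)
print(prefilter_int(x, s_q8=round(0.5 * 256)))   # s roughly 0.5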
Turning to FIG. 18, an exemplary method for determining integer implementations of original floating point adaptive filters is indicated generally by the reference numeral 1800. The method 1800
includes a start block 1805 that passes control to a function block 1810. The function block 1810 inputs floating point adaptive filters, and passes control to a function block 1815. The function
block 1815 determines an integer implementation of the matrix multiplications such that the floating point transform matrices and the integer transform matrices are as close as possible, while additionally ensuring that
the integer post-filter is as close as possible to the inverse of the integer pre-filter, and passes control to a function block 1825. The function block 1825 stores the integer implementation for
the adaptive filters, and passes control to an end block 1899. The function block 1820 sets the maximum integer precision allowed, and passes control to the function block 1815.
Various Innovations and Points of Novelty
Pre-filter the original data and coupled post-filter in the coding loop (and decoder side)
"Coupled filters":
Post-filter is exact (perfect) inverse of the pre-filter
Adaptive post-filter is "substantially inverse" from the adaptive pre-filter
"substantially inverse": adaptation parameter is chosen so that small perturbations do not impact adaptation significantly
In the case of no quantization and with adaptation, the exact inverse is not attained because adaptation is based on the original data which is not available at the decoder
The post-filter is not the exact inverse: minimize the difference between the observation and the original data filtered
the direction of 2-D filters depends on the local direction of the gradient
Use different filters for the luma component and the chroma component
Use 1-D filters
Use different filters for the vertical and horizontal directions
Preserving prediction efficiency in the design of the coupled filters
Adaptation of coupled filters to local data (edges, gradient, and/or variance)
Continuous adaptation of the filters
Use of a discrete set of coupled filters each of which is applied to a subset of the local statistics
3 pre-filters and 3 post-filters depending on the local gradient
Most probable mode predictor adapted to the filtering mechanism
rate-distortion (RD) modified for the filtering mechanism (encoder only optimization)
The filters can be adaptively applied as follows: at the transform unit boundary only; at the transform boundary and within the transform unit; and not applied to the transform unit at all. The transform
unit can be, for example, a block or a macroblock.
The method to enable comparison and selection between the above 3 modes is as follows: create the 3 units and perform an RD-cost comparison to determine the best coding option for the current unit (a minimal selection sketch in code follows this list). It
is to be appreciated that the decision made for the current unit will impact units which are to be coded in the future.
Lifting scheme implementation of the coupled filters
Gaussian elimination and Givens rotation implementations
The adaptive pre-filter and post-filter combination works with different transforms (DCT, KLT, MDDT) and different transform dimensions.
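A minimal selection sketch for the three-way choice described in the list above (the encode callback and the lambda value are placeholders, not part of any embodiment):

def choose_filter_application(encode, unit, lam):
    # encode(unit, mode) -> (distortion, rate_bits) for that application mode.
    best_mode, best_cost = None, float("inf")
    for mode in ("boundary_only", "boundary_and_inside", "filters_off"):
        distortion, rate_bits = encode(unit, mode)
        cost = distortion + lam * rate_bits
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode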
A description will now be given of some of the many attendant advantages/features of the present invention, some of which have been mentioned above. For example, one advantage/feature is an apparatus
having a video encoder for encoding input data for a picture into a resultant bitstream. The video encoder includes a pre-filter and a post-filter coupled to the pre-filter. The pre-filter filters
the input data for the picture and the post-filter filters in-loop reconstructed data for the picture.
Another advantage/feature is the apparatus having the video encoder as described above, wherein filter coefficients and filter parameters of the post-filter are selected such that the post-filter is
an exact inverse of the pre-filter.
Yet another advantage/feature is the apparatus having the video encoder as described above, wherein at least one of the pre-filter and the post-filter include different filters, and chroma components
of the picture are filtered using different ones of the different filters than the luma components of the picture.
Still another advantage/feature is the apparatus having the video encoder as described above, wherein at least one of filter coefficients and filter parameters for at least one of the pre-filter and
the post-filter are selected responsive to at least one of a resolution, a quantization level, a local gradient, a prediction mode, and a gradient direction.
Still yet another advantage/feature is the apparatus having the video encoder as described above, wherein a post-filter transform matrix used by the post-filter to filter the reconstructed data is
decomposed into two summands, one of the two summands being an identity matrix and another one of the two summands being a matrix representing a multiplication of a first, a second, and a third
matrix, the first and the third matrices being fixed, and the second matrix being variable in order to reverse an adaptation of the pre-filter.
Moreover, another advantage/feature is the apparatus having the video encoder as described above, wherein filter coefficients and filter parameters of the post-filter are selected such that the
post-filter is substantially an inverse of the pre-filter.
Further, another advantage/feature is the apparatus having the video encoder wherein filter coefficients and filter parameters of the post-filter are selected such that the post-filter is
substantially an inverse of the pre-filter as described above, wherein the post-filter is configured to substantially provide a same output data there from as input data provided to the pre-filter by
minimizing a difference between an observation and a pre-filtered estimate, the observation relating to input data provided to the post-filter, and the pre-filtered estimate relating to an estimate
of the input data for the picture prior to filtering by the pre-filter.
Also, another advantage/feature is the apparatus having the video encoder as described above, wherein the pre-filter and the post-filter are integer implementations determined so as to minimize a
distance relating to an exact invertibility between the pre-filter and the post-filter.
Additionally, another advantage/feature is the apparatus having the video encoder as described above, wherein a filter size of the pre-filter and the post-filter is a same size as a transform applied
to residue data, the residue data representing a difference between the input data for the picture and reference data for at least one reference picture.
Moreover, another advantage/feature is the apparatus having the video encoder wherein a filter size of the pre-filter and the post-filter is a same size as a transform applied to residue data, the
residue data representing a difference between the input data for the picture and reference data for at least one reference picture as described above, wherein the pre-filter and the post-filter
include multiple filters, and at least one of the multiple filters is applied to all transform sizes of transforms applied to the residue data.
Further, another advantage/feature is the apparatus having the video encoder as described above, wherein a filter size of at least one of the pre-filter and the post-filter is different from a size
of a transform applied to the input data.
Also, another advantage/feature is the apparatus having the video encoder as described above, wherein the pre-filter and the post-filter are applied to only a portion of the input data.
Additionally, another advantage/feature is the apparatus having the video encoder wherein the pre-filter and the post-filter are applied to only a portion of the input data as described above,
wherein the portion of the input data is selected from at least one of a block boundary, and within a block.
These and other features and advantages of the present principles may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that
the teachings of the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
Most preferably, the teachings of the present principles are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly
embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer
platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating
system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof,
which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections
between the system components or the process function blocks may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in
the pertinent art will be able to contemplate these and similar implementations or configurations of the present principles.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present principles are not limited to those precise
embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present principles. All
such changes and modifications are intended to be included within the scope of the present principles as set forth in the appended claims.
Higher order derivatives
September 23rd 2009, 08:38 AM
Higher order derivatives
please show me how to solve the following.
i have tried differentiation twice , but i cant get the same results..
September 23rd 2009, 03:06 PM
please show me how to solve the following.
i have tried differentiation twice , but i cant get the same results..
First, I believe there's a typo. I think it should be $y = \frac{\sin x}{1-x^2}$ and not $y = \frac{\sin x}{(1-x)^2}$.
If you consider
$(1-x^2) y = \sin x$
then differentiating both sides wrt $x$ twice and adding the same expression gives
$\left( (1-x^2)y\right)'' + (1-x^2)y = 0.$ Expanding gives $(1-x^2)y'' - 4x y' - (1+x^2)y = 0$ - the first result.
If we let $u = (1-x^2)y$ then we have the hierarchy of differential equations
$u^{(n+2)} + u^{(n)} = 0\;\;(1)$.
By induction we can show that
$u^{(n)} = (1-x^2) y^{(n)} - 2n x y^{(n-1)} - n(n-1)y^{(n-2)}$
and at $x = 0$
$u^{(n)} = y^{(n)} - n(n-1)y^{(n-2)}.$
Substitution into (1) leads to the second result
$y^{(n+2)} - (n^2+3n+1) y^{(n)} - n(n-1)y^{(n-2)} = 0$.
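A quick SymPy check of that recurrence (assuming the corrected function $y = \frac{\sin x}{1-x^2}$), for anyone who wants to verify it numerically:

import sympy as sp

x = sp.symbols('x')
y = sp.sin(x) / (1 - x**2)
d = [sp.diff(y, x, n).subs(x, 0) for n in range(10)]   # y^(n)(0) for n = 0..9

for n in range(2, 8):
    lhs = d[n + 2] - (n**2 + 3*n + 1)*d[n] - n*(n - 1)*d[n - 2]
    assert sp.simplify(lhs) == 0
print("recurrence holds for n = 2..7")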
September 23rd 2009, 07:48 PM
Incenter of a right triangle
March 29th 2010, 06:43 AM #1
Oct 2009
Incenter of a right triangle
The incenter of a right triangle is equidistant from the midpoint of the hypotenuse and the vertex of the right angle. Show that the triangle contains a 30 degree angle.
in center of right triangle
The incenter of a triangle is determined by the intersection of two angle bisectors. For a 30-60-90 triangle, the bisector of the 60 degree angle is also the perpendicular bisector of the median drawn to the
hypotenuse. Therefore the distance from the incenter to the midpoint of the hypotenuse and to the vertex of the 90 degree angle are equal. Is this enough for you to write a formal proof?
April 2nd 2010, 05:56 AM #2
Super Member
Nov 2007
Trumbull Ct
Analysis of Flow Structures in Wake Flows for Train Aerodynamics
Analysis of Flow Structures in Wake Flows for Train Aerodynamics
Abstract (Summary)
Train transportation is a vital part of the transportation system of today and, due to its safe and environmentally friendly concept, it will be even more important in the future. The speeds of trains
have increased continuously and with higher speeds the aerodynamic effects become even more important. One aerodynamic effect that is of vital importance for passengers' and track workers' safety
is slipstream, i.e. the flow that is dragged by the train. Earlier experimental studies have found that for high-speed passenger trains the largest slipstream velocities occur in the wake. Therefore
the work in this thesis is devoted to wake flows. First a test case, a surface-mounted cube, is simulated to test the analysis methodology that is later applied to a train geometry, the Aerodynamic
Train Model (ATM). Results on both geometries are compared with other studies, which are either numerical or experimental. The comparison for the cube between simulated results and other studies is
satisfactory, while due to a trip wire in the experiment the results for the ATM do not match. The computed flow fields are used to compute the POD and Koopman modes. For the cube this is done in two
regions of the flow, one to compare with a prior published study, Manhart & Wengle (1993), and another covering more of the flow and especially the wake of the cube. For the ATM, a region containing
the important flow structures is identified in the wake by looking at instantaneous and fluctuating velocities. To ensure converged POD modes, two methods to investigate the convergence are proposed,
tested and applied. Analysis of the modes enables the identification of the important flow structures. The flow topologies of the two geometries are very different and the flow structures are also
different, but the same methodology can be applied in both cases. For the surface-mounted cube, three groups of flow structures are found. The first group is the mean flow, and the other two are
perturbations around the mean flow. The first perturbation is at the edge of the wake, relating to the shear layer between the free stream and the disturbed flow. The second perturbation is inside
the wake and is the convection of vortices. These groups would then be typical of the separation bubble that exists in the wake of the cube. For the ATM the main flow topology consists of two counter
rotating vortices. This can be seen in the decomposed modes, which, except for the mean flow, almost only contain flow structures relating to these vortices.
Bibliographical Information:
School:Kungliga Tekniska högskolan
School Location:Sweden
Source Type:Master's Thesis
Keywords: TECHNOLOGY; Engineering mechanics; Fluid mechanics; TECHNOLOGY; Engineering mechanics; Vehicle engineering; Train Aerodynamics; Slipstream; Wake Flow; Detached-Eddy Simulation; Proper
Orthogonal Decomposition; Koopman Mode Decomposition; Surface-mounted Cube; Aerodynamic Train Model
Date of Publication:01/01/2010
Page:An introduction to linear drawing.djvu/26
This page has been proofread, but needs to be validated.
An image should appear at this position in the text.
Close the space between the sides of an angle with a right line, and you make a triangle, a figure which has three angles and three sides.
The base is the side on which the triangle is supposed to rest.
The apex of a triangle is the point opposite to the base.
The height of a triangle is a perpendicular drawn from the apex to the base. In the figure it is shown by the dotted line.
A triangle is called Isoceles when two sides are equal. If all three of the sides are equal, it is Equilateral, (which word means equal-sided;) and if all the sides are unequal, it is called Scalene.
24. Raise a perpendicular on a horizontal. (fig.9.)
This will produce right angles, as we have before remarked. To ascertain if the angle be exact, take a piece of what is called bonnet paper or thin pasteboard, cut it round and then cut the round
piece into quarters. Each quarter will have two sides at right angles, and by inserting the apex into the opening of the angle drawn by the pupil, any incorrectness will be detected. A small brass or
iron square will serve the same purpose, but does not satisfactorily show that a right angle is equal to a quarter of a circle, which is also called a quadrant.
25. Cross a right line with a perpendicular. (fig.10.)
The right line should be drawn in various directions, to show the pupil that a perpendicular may be raised on any right line, whether horizontal or oblique.
kuta software infinite algebra 1 answer key
Best Results From Yahoo Answers Youtube
From Yahoo Answers
Question:I have seen this problem here but no useful answers, so i figured i put it up too. So this is the problem : X1+X3+X5=0 X2+X4+X6=0 X3+X5+X7=0 .....+.....+.....=.... This sequence would
continue forever and there are two questions. 1. Find all solutions to the this infinite system for all x's. 2. What is the dimension of the solution space.
Answers:Think of the variables as a sequence - recursively defined. This will make the solution easier than if you think of them as just variables. There is clearly an even/odd split, since evens and
odds don't appear in the same equations. So first odds: x5 = -x3 - x1 x7 = -x5 - x3 x9 = -x7 - x5 etc. So you have two you can choose - x3 and x1. So say x1 = a, x3 = c. Likewise for the evens: x6 =
-x2 - x4 x8 = -x4 - x6 etc.. So let x2=b and x4=d. Then your solutions are: x1 = a x2 = b x3 = c x4 = d x5 = -a-c x6 = -b-d x7 = a x8 = b x9 = c x10 = d etc. (it repeats from here) Now, the dimension
of the solution space is clearly 4, since it's defined by (x1,x2,x3,x4)=(a,b,c,d), despite being a subset of an infinite dimensional space.
Question:Find the value of x 1.)7x + 6 = 27 2.) - 6 + 4x + x = - 6 3.)5(6 - 5x) = - 120 4.) - 9x - 3y = - 9 5.) - 3x + 5y = - 39 6.) x2 - 5x - 66 = 0 7.) x + 7 = 5x + 11 8.)9 - [12 + (15 3)] *THIS IS
NOT MY MATH HOMEWORK. THIS IS FOR A BIO PROJECT. I am just verifying test subjects answers for an answer key.
Answers:1) x=3 2) x=0 3) x=6 4) x=y+1 5) x=13+5/3y 6)x=-6 and x=11 7)x=-2/3 8)-8 I hope all those are correct, I just did them quickly in my head. Good luck! :)
Question:i need the "HOLT algebra 1 practice workbook" answer key..... (ISBN 0-03-054288-X)
Answers:Speak to your math teacher. He will help you to obtain the answer key if he feels that would be beneficial to you./
Question:im looking for a prentice hall mathematics algebra 1 online answer key. if you could post th website like that would be great.
Answers:Why do you want it?
From Youtube
Algebra 2: Sum of Infinite Geometric Series :Watch more free lectures and examples of Algebra 2 at www.educator.com Other subjects include Trigonometry, Calculus, Biology, Chemistry, Statistics,
Physics, and Computer Science. -All lectures are broken down by individual topics -No more wasted time -Just search and jump directly to the answer
Found first ladder
A week or so after putting together my $1 type set and posting it, I finally found my first ladder, so I'll be adding/changing it a little. This ladder is the 4 digit kind, but I still consider
it a ladder. Please offer your opinions:
Whatever spins your propeller.
I wouldn't call the serial number "2915 2914" a "ladder" — I'd call it "almost a repeater".
I agree. It's really stretching it to call it a ladder. You see these kinds of things flogged on ebay. In my book, it's a spender. Sorry.
OK, thanks for the feedback. Would you consider a two digit sequence a ladder? such as 23242526?
Thanks in advance
That's better, but one I would still pass on, being a purist.
Now I'm going to agree with you.
This is what I would call a ladder:
SteveInTampa Innocent bystander
plus 1
OK. Thanks. I'll be a purist as well and continue to look only for 1 digit ladders.
Look at what some folks on eBay call ladder notes.
□ "Broken Ladder" — L03785469* because it contains the numbers 3, 4, 5, 6, 7, 8, 9, and 0.
□ "Broken Ladder Ending" — L48244123Q, L48244567Q, and L4824890Q because the last three or four characters of each note are 123 4567 890.
□ "Ladders" — H08858380B, H0885381B, H08855382B through H08858387B because they're sequential serial numbers.
□ "Ladder Brick Set" — L34131234G, L34131567G, and L34131890G because the last three or four characters of each note are 1234 567 890.
□ "Ladder Note" — ID44567892A ("456789" is a partial ladder).
□ "Ladder with six "0's"" — I00080009A
□ "Fancy Poker Ladder" — F31323333H
□ "Fancy Ladder" — L11223300A
□ "Super Cool Ladder — B01000002A
□ "Birth-Year Ladder — I19721932A (Where's the ladder?)
□ "Triple Ladder" — 10 notes E13778000B through E13778999B with E13778111B, E13778222B, etc.
□ "Mixed Ladder" — C40673125A because it contains the numbers 0, 1, 2, 3, 4, 5, 6, and 7.
□ "Fantastic Ladder" — II66777883A
...and on and on and on...
That is kind of crazy....I've seen stuff like that on ebay and wondered what they were smoking. I really thought that a ladder could be like the one I got. Thanks so much for your posts and
helping me get up to speed....they were a lot more helpful than say, "whatever spins your propeller."
Wasn't in a great mood when I posted that. Sorry. I should have tried to be more helpful. I'll try to be better next time.
OldDogEyes Full of a lot of hot air.
I wouldn't personally keep that bill, but I know folks who would... Looks pretty cool to me!
February 29: Why Is There a Leap Year? | RCScience
How long is a year? A calendar year is the amount of time that it takes the earth to move in one complete orbit (nearly a circle, but not quite) around the sun. A day, on the other hand, is one complete rotation of the earth around its own center.
The ancient Greeks were the first to realize that a year is not made up of a whole (1, 2, 3, ... 365 ...) number of days. One great astronomer, mathematician and scientist of the day calculated (with amazing accuracy!) that a year consists of 365.246 days. That is to say that the Earth spins around itself like a top 365.246 times every time that it traces its ellipse through space around the sun.
Noting this fact, Julius Caesar established a system of time (now known as the 'Julian' calendar) which considered the year to be made of 365.25 days. If we wanted to make every year the exact same
length, we would have two options. First, we could ignore the extra .25 days. The problem with this method is that the system of days and months would shift by roughly 6 hours per year. After 100
years, January 1st would come to be at the same time in the Earth's orbit of the sun that January 25th used to be! After 1000 years, New Year's Day would be near the end of the summer, on September
8th! Clearly unacceptable.
Our other option would be to have one roughly six-hour day at the end of every year. This would be even worse however, since dawn, daylight and dusk would all be shifted six hours on the clock! In
the second year of this calendar, sunrise would occur around noon, the third year at 6 PM, the fourth year at midnight, before arriving again at roughly 6 AM. Every four years this cycle would repeat.
The system of adding one extra day every four years thus keeps the same daylight hours by adding a day as the four 0.25 days accumulate to one. This also keeps the total shift of days of the year
to less than one day. A pretty good compromise, and certainly worth the oddness of the extra day every four years.
We no longer use the Julian calendar, but an improved version known as the "Gregorian" calendar. Because of rounding 365.246 to 365.25, the Julian calendar lost 10 days over the nearly 2000 years
between its start date and 1582, when the Gregorian calendar was implemented. The 10 days were skipped entirely by the world, which jumped from October 4th to October 15th. After the skipped days, a
new version of leap year was put into place: every four hundred years, three leap years are omitted. 1600 was a leap year; 1700, 1800 and 1900 were not. 2000 was again a
leap year, 2100, 2200 and 2300 will not include February 29th.
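For readers who like to see the rule as code, here is the Gregorian check written out in Python (any language works the same way):

def is_leap(year):
    # Divisible by 4, except century years, which must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1600, 1700, 1800, 1900, 2000, 2012, 2100) if is_leap(y)])
# prints [1600, 2000, 2012]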
Even the Gregorian calendar is no longer as precise as scientific measurement. Over the course of about 500 days, this calendar currently loses one second because the earth's rotation is gradually
slowing down!* Since the 1970s, leap seconds have been added occasionally to the time and calendar to correct for this change. This confusing practice may end soon, and the time we all use will
become separate from the earth's rotation as best we can measure it. GPS satellites and other systems which require time that matches Earth's rotation would continue to use leap-seconds.
*Note: this video attributes the slowing to the drag of the ocean; in actuality, the slowing is mostly due to the change in angular momentum of the earth due to the moon's gravity deforming it. This is known as 'tidal acceleration'.
Summary: Correction of order three for the expansion of two
dimensional electromagnetic fields perturbed by the
presence of inhomogeneities of small diameter
Habib Ammari, Darko Volkov
April 2, 2003
The derivation of the correction of order 3 for the expansion of 2 dimensional
electromagnetic fields perturbed by the presence of dielectric inhomogeneities of small
diameter was completed in [3]. However previous numerical work such as that in [6] and
in [14] do not corroborate the existence of these correcting terms. The inhomogeneities
used in all those numerical simulations were collections of ellipses. In this paper we
propose to elucidate this discrepancy. We prove that the correction of order 3 is zero for
any inhomogeneity that has a center of symmetry. We present numerical experiments
for asymmetric inhomogeneities. They illustrate the importance of the correction of
order 3. Finally we prove that numerical schemes based on the usual quadrature for
solving mixed linear integral equations on a smooth contour with smooth integration
kernels and kernels involving logarithmic singularities preserve at the discrete level the
fact that correcting terms of order 3 are zero for inhomogeneities that are symmetric
about their center.
Key words: time harmonic TE Maxwell's equations, boundary integral equations,
all 22 comments
[–]gasche[S]
[–]ezyang
[–]gasche[S]
My own simplified reading of the whole construction is more "go deep enough and you'll find that (A => B) is (A* ∨ B) again", which we already heard from process calculi in a somewhat blurred way, or
more clearly from game semantics (Abramsky's article that neelk links is precisely about the connection between the two), but also in the mu-mubar-calculus for example, where lambda is a derived
construct obtained through a such low-level construction.
[–]ezyang
[–]philipjf
if you think of negation as A -> Bottom then
\(x, a) -> case x of { Right b -> b; Left na -> ex_falso_quodlibet (na a) }
is a proof that (~A ∨ B) at least implies A -> B. Similarly
\f -> \g -> g (Left (\a -> g (Right (f a))))
should be a proof that A -> B implies ~~(~A ∨ B).
Now, you really get somewhere if you understand ~A to be a "hole" or "covalue" in a continuation based formulation. Then "filling" a hole is really nothing more than "jumping back" to where you
called the law of the excluded middle and everything just works. In classical logic -> is not special. In fact, in Curien's more recent versions of "System L Syntax" or "Lambda Mu Mu Tilde" you have
a notion of "generalized connective" for defining data types. The function type constructor becomes just another algebraic data type. Actually, you get two function type constructors (one each for
call-by-value and for call-by-name), but that is because classical logic can't really be made constructive and computable without polarizing it.
[–]ezyang
That interpretation makes more sense when ~A is on the left side of the turnstile... when it's on the right hand side I still think of it as trying to derive absurdity. I guess, in the context of the
process calculus, it is the "shut up and do something else" protocol.
I'm not terribly familiar with all of the historical backgrond for Curien's System L; AFAICT, it unifies a lot of existing systems, but I don't know how beyond "well it is focused".
[–]apfelmus
[–]menteth
Applicative functors can be embedded into arrows. Applicative programming with effects (McBride, Patterson) calls the images "static arrows". They are arrows with an extra restriction.
There's a later Lindley, Wadler and Yallop paper - "Idioms are oblivious, arrows are meticulous, monads are promiscuous" - that takes a very formal approach to classifying the relationships between
applicative functors (idioms), arrows and monads. That paper breaks my brain, though.
[–]apfelmus
I actually mean equivalent in the sense that if you have a type a ~> b which
• is an applicative functor in b
• is also a category with respect to composition
• and fulfills a couple of conditions with respect to composition interacting with the applicative interface
then this is automatically an arrow.
The other direction that every arrow is also an applicative functor is more or less obvious.
[–]ryani
I'm implementing the first bit in Haskell and I noticed that there's no equivalent to trace in Control.Arrow; my first reading I mistakenly assumed it was ArrowLoop, but that's a more fix-point-like
I don't think it's possible to implement trace just in terms of Arrow and ArrowChoice; but what is required to implement it? Obviously ArrowApply would do the trick, but that's a pretty big hammer to
use. And I don't think you can implement ArrowApply in terms of trace, either.
Or is it a new construct? If so, should it be in the standard arrow library?
EDIT: I guess it is implementable in terms of ArrowChoice and recursion.
trace :: ArrowChoice (~>) => (Either a b ~> Either c b) -> (a ~> c)
trace f = arr Left >>> f >>> trace' where
trace' = id ||| (arr Right >>> f >>> trace')
[–]sclv
I think it is ArrowLoop. Patterson's "Arrows and Computation" makes this point explicitly (ArrowLoop as a generalization of trace [as defined on (->)] ): http://citeseerx.ist.psu.edu/viewdoc/summary?
See section 2.3 in particular.
Also note that one can define fix in terms of trace and vice versa:
trace f x = fst $ fix (\(_, z) -> f (x, z))
fix f = trace (\(x, y) -> (f y, f y)) undefined
Finally, see more general work on traced premonoidal categories by Benton and Hyland: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.9.9365
That paper also makes the connection to ArrowLoop, with more explanation.
[–]ryani
loop :: ArrowLoop (~>) => ((a, x) ~> (b,x)) -> (a ~> b)
-- for (->)
loop f a = let (b,x) = f (a,x) in b
trace :: ArrowTrace (~>) => (Either a x ~> Either b x) -> (a ~> b)
-- for (->)
trace f a = go (f (Left a)) where
go (Left b) = b
go (Right x) = go (f (Right x))
loop is value recursion, running the side effects of the arrow once. trace repeatedly executes the arrow until it returns a Left value.
[–]sclv
Ok, now I'm confused. I see in the post's code that ⊗ is being interpreted as a sum type, rather than a product type.
But the typical interpretation is as something which has swap! And indeed Neel is explicitly talking about symmetric monoidal categories (categories equipped with swap)!
But now, looking at the linked paper, I see this is actually the "trace" of a "sub-traced monoidal category" where a little recursive gadget lets us recover a typical trace (I think?).
So there's more going on than I realized at first. I'd be interested in seeing the Haskell code you end up cooking up for this.
[–]sjoerd_visscher
[–]ryani
[–]sjoerd_visscher
javascript surprising array comparison
I'm trying to compare two arrays in javascript.
What I'd like is:
a < b ⇔ ∃ i ≥ 0 s.t. a[i] < b[i] and ∀ 0 ≤ j < i, a[j] = b[j]
So arrays of non-negative numbers work as desired:
firebug> [0,1,2,3,4] < [1,0,0]
And comparing negative numbers with zero works as expected:
firebug> [-1, 1] < [0, 0]
But comparing negative numbers with negative numbers is... surprising:
firebug> [-2] < [-1]
firebug> -2 < -1
What's going on here, so I can correct my intuition for what array comparison means in javascript?
javascript arrays comparison
2 Answers
The array is converted to a string, which comes down to .join(), which in turn joins the elements with a comma (,) as delimiter.
"-1,1" < "0,0" === true
because the character code of - (45) is smaller than the character code of 0 (48).
On the other hand,
up vote 12 down vote "-2" < "-1" === false
because the second character codes are compared (the first are both -, so that doesn't give a result yet), and the character code for 2 (50) is bigger than the character code
of 1 (49), so this yields false.
It comes down to a lexicographical sorting (i.e. by character codes) and not a numerical one, even if the elements are numbers (because of the string coercion).
Basically comparing arrays is not recommended. It is implicitly defined as string comparison, but this can yield surprising results.
so, +[-2] - +[-1]; should do the trick. – jAndy Nov 30 '11 at 16:10
1 @jAndy: It would, but then again it only works nicely for one-element arrays. -2 - 1 would be a bit clearer... – pimvdb Nov 30 '11 at 16:15
There's no such thing as JavaScript array comparison in any form similar to what you describe.
What's happening in all cases is that your arrays are being converted first to strings by joining their contents together. Thus, the string "-2" is not less than the string "-1", because
the character "2" comes after "1" in the character set. Similarly, "-1,1" is less than "0,0" because the "-" character comes before the digits.
You can see for yourself that in all cases your comparisons:
array1 < array2
get exactly the same results as:
("" + array1) < ("" + array2)
array1.join(",") < array2.join(",")
Lesson 30 - IPv4 Subnetting - Practice
In the previous post, I showed you three major rules used in calculating subnets. This knowledge can only be verified in practice though. Let me show you a few examples related to subnet
calculations. I hope that looking at this topic from different angles is going to help you understand the concept better and feel confident when planning your IP addressing scheme. The first four
questions are merely appetizers for a bigger dish: VLSM.
I am going to refer to my previous post's rules while answering the questions (rule 1, rule 2 and rule 3).
If you still do not remember the weights of all bits, you may consider using this little aid presented below (pic. 1) while calculating subnets and converting binary network masks into decimal notation.
Pic. 1- Subnet Calculation Aid.
This tool is useful before you remember all the weights from left to right and right to left.
Pic. 2 - Example of Subnet Binary-to-Decimal Conversion.
Question 1
Given the prefix 192.168.1.0/24, what should be the length of subnet mask allowing up to 9 subnets?
Answer 1
The address belongs to the class C and uses its default network mask. That leaves us with 8 bits to play with (the last byte). Before we change anything, our address and network mask converted into
the binary notation look like shown below (pic. 3).
Pic. 3 - 192.168.1.0/24 in Binary.
In order to create 9 subnets we must extend the existing length of the network mask by 4 bits which allows up to 16 subnets (use calculation aid in pic. 1). If I tried to extend it by 3 bits only,
the maximum subnets allowed would be only 8 subnets (rule 2 in
lesson 29
). So, I must use 4 bits and the result is: 192.168.1.0/28 (192.168.1.0 255.255.255.240).
Pic. 4 - The Answer to Question 1
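One way to sanity-check this kind of calculation is a minimal Python sketch of the same rule: round the base-2 logarithm of the required subnet count up to the next whole bit (the function name here is my own, not part of the lesson).

import math

def subnet_bits_needed(wanted_subnets):
    # Smallest number of borrowed bits b with 2**b >= wanted_subnets
    return math.ceil(math.log2(wanted_subnets))

bits = subnet_bits_needed(9)   # 4, because 2**3 = 8 < 9 <= 16 = 2**4
print(24 + bits)               # 28 -> 192.168.1.0/28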
Question 2
Given the host address 192.168.1.177/29, what are the subnet and broadcast addresses?
Answer 2
In order to determine the subnet and broadcast address of the subnet of this host address, we must look at the length of the network mask first. It is 29 bits (24 + 5). This tells us that the last byte of the address has 5 bits masked (subnet bits) and 3 bits unmasked (host bits). It is a good idea to look at the last byte of the address (177) with its network mask using binary notation. Pic. 5 below shows you this clearly.
Pic. 5 - 192.168.1.177/29 in Binary.
Since we must determine the subnet in which the host resides (177 = 10110001), the host portion of the prefix (host bits reside in the last byte) must all be set to '0'. The byte value with the host bits zeroed is the address of the subnet (rule 1 pkt. 1 in
lesson 29
). This is the result: 192.168.1.176/29.
The second part of the question relates to the broadcast address of the subnet. As you remember, in order to obtain the broadcast address, you must put '1' on all host bits of the subnet/network. The
subnet has already been determined (pic. 6), so let's put '1' on all bits of the host portion:
10110000 = 176 <- subnet address
00000111 = 7 <- host bits set to '1'
In decimal it is: 176 + 7 = 183.
The broadcast address is: 192.168.1.183.
The below picture illustrates it using binary numbers.
Pic. 7 - Host Bits Set to '1' = Broadcast Address.
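A quick cross-check is possible with Python's standard ipaddress module, which reports the same subnet and broadcast addresses for the host 192.168.1.177/29:

import ipaddress

net = ipaddress.ip_network("192.168.1.177/29", strict=False)  # strict=False accepts a host address
print(net.network_address)    # 192.168.1.176
print(net.broadcast_address)  # 192.168.1.183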
Question 3
Given the prefix 172.16.0.0/17, how many subnets can you create?
This is a bit tricky isn't it? In order to answer this question, you don't need any calculator, paper or pen. You must trust the rule 2 in
lesson 29
. The address and its network mask (called prefix) converted into binary look like presented below:
Pic. 8 - The Number of Subnets for 172.16.0.0/17
As you see, the number of bits by which we have extended the class B mask is: 1. So, the number of subnets we can create with it is: 2 subnets, since this single subnet bit can be either 1 or 0.
Pic. 9 - Questions 3 Answer
Question 4
What length of network mask would be the most optimal for router's point-to-point connection?
Answer 4
The key to this question is to understand that point-to-point connection needs only 2 host addresses (two points that are connected together). Knowing this, the rest is a piece of cake. We use rule 3
lesson 29
to determine the length of the network mask that allows
2 host addresses
. Check out the picture 10.
Pic. 10 - Calculating Point-to-Point Connection Host Addresses.
If you count the ones above, the optimal network mask for a point-to-point connection is /30. The decimal value is: 255.255.255.252.
Question 5 - Variable Length Subnet Masking (VLSM)
It's time for a big one. Given the topology (pic. 11), calculate IP addresses for each subnet, trying to optimize them according to the host address requirements. The IP address you should use to create
subnets is: 192.168.1.0/24. The number of host addresses in the subnets are as follows:
Subnet 1 = 46 host addresses
Subnet 2 = 16 host addresses
Subnet 3 = 10 host addresses
Subnet 4 = 2 host addresses
Subnet 5 = 2 host addresses
Pic. 11 - VLSM Topology.
Icons designed by: Andrzej Szoblik - http://www.newo.pl
As always, if you know the rules and the method, it is going to be easy thing to do. The rules have been discussed in lesson 29, so let me go about this kind of task now.
If your design looks similar to mine (optimizing addresses to the number of hosts required) you must
start the calculation with the largest number of host
addresses requirement and work your way down to the least number of host addresses.
This is one of the many methods available. It helps quickly calculate all subnet ranges without using calculator (pen and a piece of paper should do).
Step 1
Determine the length of the network mask for each subnet in question. Keep in mind we focus on the last byte of the IP address 192.168.1.x (8 bits).
The first three bytes do not change!
Subnet 1 = 46 Host Addresses
In order to allocate 46 addresses we must use 6 host bits. Why? 5 bits will not be enough as 2 raised to the power of 5 is 32. Also, we must decrement two addresses for subnet and broadcast
addresses. So using 5 bits would give you only 30 host addresses. Here we go with 6 bits then:
Pic. 12 - Subnet 1 in Binary.
Subnet 2 = 16 Host Addresses
We must repeat the same math for the remaining subnets. How many host bits to allocate for 16 hosts (subnet 2)? We must use 5 bits. In case we wanted to use only 4 host bits, the maximum number of
hosts is 14 (16 - 2).
Pic. 13 - Subnet 2 in Binary
Subnet 3 = 10 Host Addresses
We continue using the same logic: 10 hosts need 4 host bits (2^4 - 2 = 14 usable addresses), so the network mask is /28.
Subnet 4 and 5 = 2 Host Addresses Each
On point-to-point links only 2 host addresses are needed. The most optimal network mask is /30 (30 bits).
Pic. 15 - Subnet 4 and 5 in Binary.
Step 2
Now, that we know the length of network mask for each subnet, we can start calculating the IP address ranges.
The subnet 1 address is: 192.168.1.0/26.
The value of the lowest bit in the network mask is going to be our increment used to calculate the next available subnet address. With /26 the increment value is 64 (pic. 16).
So, if we add the increment to the last byte, we get the number of our next available subnet address:
0 + 64 = 192.168.1.64 (next available subnet address)
From there, this next subnet address (value) - 1 is the broadcast of our current subnet:
64 - 1 = 192.168.1.63 (current broadcast address)
Current subnet value + 1 = the first host address:
0 + 1 = 192.168.1.1 (first host address of current subnet)
Current broadcast address - 1 = the last host's address:
63 - 1 = 192.168.1.62 (last host address of current subnet).
Look at the below pictures which illustrate this method.
Pic. 16 - Subnet 1 - IP addresses
Pic. 17 - Subnet 2 - IP addresses
Pic. 18 - Subnet 3 - IP addresses
Pic. 19 - Subnet 4 - IP addresses
Pic. 20 - Subnet 5 - IP addresses
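The same increment-method results for Subnet 1 (192.168.1.0/26) can be recovered with a minimal Python sketch using the standard ipaddress module:

import ipaddress

subnet1 = ipaddress.ip_network("192.168.1.0/26")
increment = 2 ** (32 - subnet1.prefixlen)                   # value of the lowest mask bit: 64
hosts = list(subnet1.hosts())
print(increment)                                            # 64
print(subnet1.network_address, subnet1.broadcast_address)   # 192.168.1.0 192.168.1.63
print(hosts[0], hosts[-1])                                  # 192.168.1.1 192.168.1.62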
Now, we're ready to start talking about routing. In my next post, I will talk about a router, its functions,and basic operation. From there, we'll start exploring routing protocols.
|
{"url":"http://ciscoiseasy.blogspot.com/2010/11/lesson-30-ipv4-subnetting-practice.html","timestamp":"2014-04-21T00:47:24Z","content_type":null,"content_length":"90403","record_id":"<urn:uuid:f3b26e45-b99e-49df-a361-d7ebfd02c75a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Musimathematics 1: Factorisation
Number line sketches
Imagine every number from 1 upwards progressing in a line. Every number greater than 1 is wholly divisible by 1 and by one or more prime numbers {2, 3, 5, 7, 11…}. These component values which when multiplied
together produce a given number are called its factors. Now, take one note at a time, calculate its factors and play a note for each prime number present. You might get something like this:
The score above represents the first eight bars of number line sketch #1, produced by a program written in Max/MSP which factorises numbers using the first 15 primes and applies a series of MIDI
notes which approximate an ascending harmonic series {c1,c2, g2, c3, e3, g3…} to the prime number series. Click on the link below to hear this.
Number line sketch #1
single instrument, first 3000 numbers, 90bpm (180 notes per minute) Play MIDI file
Number line sketch #2
first 800 numbers, 150bpm (300 notes per minute). Notes transposed and reassigned to a small ensemble (congas, cymbals, bass etc.)
Play MIDI file
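A rough Python sketch of the factorisation-to-notes idea looks like this (the original program was written in Max/MSP, so this is only an approximation, and the MIDI pitch table is an assumed stand-in for the ascending harmonic series on C):

PRIMES  = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]        # first 15 primes
PITCHES = [36, 48, 55, 60, 64, 67, 70, 72, 74, 76, 78, 79, 81, 82, 83]    # illustrative MIDI notes

def prime_factors(n):
    # Distinct prime factors of n, limited to the first 15 primes
    return [p for p in PRIMES if n % p == 0]

for n in range(2, 16):
    notes = [PITCHES[PRIMES.index(p)] for p in prime_factors(n)]
    print(n, notes)    # e.g. 6 and 12 both trigger the notes mapped to 2 and 3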
|
{"url":"http://www.sundaydance.co.uk/musimathematics-1-factorisation/","timestamp":"2014-04-17T18:29:44Z","content_type":null,"content_length":"31892","record_id":"<urn:uuid:4da658a1-cf47-4c7f-9470-38caefbc3a96>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Robertsdale, IN
Find a Robertsdale, IN Math Tutor
...As the manager of a business for several years, both marketing and employee management required effective public speaking. Several years of group tutoring and many years of group training have
given me additional experience as a speaker. As the instructor of a personal computing class I gained additional experience as a public speaker.
49 Subjects: including SAT math, English, trigonometry, prealgebra
...I have taught Trigonometry in a high school and at the University of Illinois at Chicago. I think that I can persuade any student that math is not only interesting and fun, but beautiful as
well. I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the GRE.
24 Subjects: including differential equations, discrete math, linear algebra, algebra 1
...I majored in History at the University of Illinois at Urbana-Champaign, and also achieved an Associates in Science from Parkland Community College. For my Associates degree, I studied mostly
chemistry and math, but also physics and biology. For my history major, I mostly studied European History, but also took many classes in U.S. and Ancient History.
28 Subjects: including algebra 1, ACT Math, ACT English, ACT Reading
...I have majored in math, so that is my strongest subject. I have a year of tutoring experience in basic college math and 3 years in elementary math. I am very patient with students, which I
find helps them learn better.
7 Subjects: including trigonometry, prealgebra, English, Microsoft Word
...I am very skilled at one-on-one tutoring, small group tutoring, and online math course help. I tutor the way that I teach--present the material in a way that it can understood, practice with
my clients, and give them opportunities to try themselves with me right there to troubleshoot. Whether you're in Chicago or the suburbs, I will put in the work to give you what you need.
11 Subjects: including algebra 1, algebra 2, geometry, prealgebra
Related Robertsdale, IN Tutors
Robertsdale, IN Accounting Tutors
Robertsdale, IN ACT Tutors
Robertsdale, IN Algebra Tutors
Robertsdale, IN Algebra 2 Tutors
Robertsdale, IN Calculus Tutors
Robertsdale, IN Geometry Tutors
Robertsdale, IN Math Tutors
Robertsdale, IN Prealgebra Tutors
Robertsdale, IN Precalculus Tutors
Robertsdale, IN SAT Tutors
Robertsdale, IN SAT Math Tutors
Robertsdale, IN Science Tutors
Robertsdale, IN Statistics Tutors
Robertsdale, IN Trigonometry Tutors
Nearby Cities With Math Tutor
Central Park, IL Math Tutors
Clyde, IL Math Tutors
Creston, IN Math Tutors
Eagle Lake, IL Math Tutors
Goodenow, IL Math Tutors
Grand Crossing, IL Math Tutors
Hessville, IN Math Tutors
La Grange Highlands, IL Math Tutors
Lake Dalecarlia, IN Math Tutors
Mayfair, IL Math Tutors
New Elliott, IN Math Tutors
South Suburban, IL Math Tutors
Sunnyside, IL Math Tutors
Whiting, IN Math Tutors
Wilton Center, IL Math Tutors
|
{"url":"http://www.purplemath.com/Robertsdale_IN_Math_tutors.php","timestamp":"2014-04-17T21:39:24Z","content_type":null,"content_length":"24325","record_id":"<urn:uuid:4df11461-3887-4d39-aefe-df2f98a456ec>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tutors
Seagoville, TX 75159
Building Student Self-Confidence Through Success!!
...I have more than 4 years experience in the educational setting with Dallas ISD, and have tutored reading (comprehension), writing, and
at the elementary level, reading and science at the high school, and
and science at the middle school levels. I am...
Offering 10+ subjects including prealgebra
|
{"url":"http://www.wyzant.com/Seagoville_Math_tutors.aspx","timestamp":"2014-04-18T01:08:55Z","content_type":null,"content_length":"59303","record_id":"<urn:uuid:6e9a5e7e-3190-4762-8139-9f410475f79f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Bias: Monte Carlo
Re: st: Bias: Monte Carlo
From Maarten Buis <maartenlbuis@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Bias: Monte Carlo
Date Mon, 6 May 2013 10:25:25 +0200
On Mon, May 6, 2013 at 9:49 AM, John Antonakis wrote:
> I am running some Monte Carlos where I am interested in observing the bias
> in parameter estimates across manipulated conditions. By bias I mean the
> absolute percentage difference of the simulated value from the true value.
> I was wondering whether there has been another written about how much bias
> is "acceptable"--I know that this is like asking how long is a piece of
> string and that there is no statistical fiat that can give a definitive
> answer, because it also is a very field specific issue.
It is probably not quite the answer you are looking for (and I think
you are right by wondering whether such an answer can exist), but one
thing you can do is take into account that a Monte Carlo experiment
contains a random component, so if you repeat the experiment (with a
different seed) you will get a slightly different estimate of your
bias. The logic behind this variation between Monte Carlo experiments
is pretty much the same as the logic behind statistical testing: so
you can compute standard errors and confidence intervals. This is the
idea behind: Ian R. White (2010) "simsum: Analyses of simulation
studies including Monte Carlo error" The Stata Journal, 10(3):369--385
and <http://www.maartenbuis.nl/software/simpplot.html>. It is not very
useful as a definition of what amount of bias is "acceptable" as you
can arbitrarily make the bounds around your estimate of the bias
smaller by increasing the number of iterations, but at least this type
of bounds prevents you from over-interpreting the result from your
simulation, as happened here:
Hope this helps,
Maarten L. Buis
Reichpietschufer 50
10785 Berlin
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
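To see the Monte Carlo error in miniature, here is a small sketch in Python rather than Stata, using an estimator with a known bias (the divide-by-n sample variance, whose bias is -sigma^2/n); the Monte Carlo standard error of the estimated bias is sd(estimates)/sqrt(R), and it shrinks as the number of replications R grows.

import numpy as np

rng = np.random.default_rng(12345)
n, R, true_var = 10, 5000, 1.0
estimates = np.array([np.var(rng.normal(0.0, 1.0, size=n)) for _ in range(R)])  # ddof=0 variance
bias = estimates.mean() - true_var                 # theory says about -true_var/n = -0.1
mc_se = estimates.std(ddof=1) / np.sqrt(R)         # Monte Carlo standard error of the bias estimate
print(f"bias = {bias:.4f} +/- {1.96 * mc_se:.4f}")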
|
{"url":"http://www.stata.com/statalist/archive/2013-05/msg00180.html","timestamp":"2014-04-18T15:56:48Z","content_type":null,"content_length":"9509","record_id":"<urn:uuid:e1dc5aec-1671-4fc2-b807-cb691ef9483c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Browse Course Communities
The purpose of this lab is to investigate motion, and the use of the equation \(d=rt\).
2 days. Hands-on simulation activity. Used to introduce solving linear systems of 2 equations in two unknowns, with follow-up involving 3 equations and 3 unknowns.
Studies \(y=k/x\) and the meaning of asymptote. Uses fact that elimination rate for some drugs varies by population.
|
{"url":"http://www.maa.org/programs/faculty-and-departments/course-communities/browse?term_node_tid_depth=40565&page=5","timestamp":"2014-04-17T08:58:05Z","content_type":null,"content_length":"113443","record_id":"<urn:uuid:f8873f1d-29a5-4b9f-b2c5-ca1d56572827>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Some Simple Ratios Can Help Form a Snapshot of Your Store
Simple guidelines can be just the thing to obtain a memorable snapshot of your laundry business. Such guidelines could point out patterns or problems that you might not ordinarily catch. After all,
you’re swamped with work, immersed in the details, running around putting out fires or helping to solve customer problems.
Whether a one-store operation or a multi-store chain, you are tunnel-visioned. That is, you are focused on those matters that concern you. Sure you receive reports, monthly P&Ls and the like, but often you just shuffle through them, knowing in advance what you will find. You need something more accessible, more visible to keep you on track.
RATIOS: THE WAY TO GO
I suggest you create several ratios, which illuminate your operation in a quick-hand fashion. Let me start at the beginning, so we are all on the same page. What is a ratio? A ratio compares the relationship between two factors. Take labor to executive salaries. If weekly labor is $2,000 and executive salaries are $1,000, then the labor/executive ratio is 66.6% labor and 33.3% management. Simply, labor is 67% of total payroll.
Keep these ratios weekly or monthly in a little notebook that’s always handy. Periodically take out the notebook and study the figures. You might even set up graphs or charts.
Why the effort? These ratios will show you where you need to spend more time, alert you to emerging problems, and enable you to see what’s happening on the floor. In the ratio of labor to executive pay, it could alert you to the fact that you’re taking more than you should.
All Laundromat businesses are unique, so one owner’s ratios will not work for another. You must create your own, which depend on the structure of your business. For example, if you have a drycleaning plant to go along with a coin laundry, you might want to monitor relative volume of each operation. This will allow you to assign expenses correctly, which in turn will illuminate profit centers as well as loss sectors.
Let’s look at a simpler example. If your drycleaning drops to less than 50%, you might need to do more marketing (give out coupons) and come up with some other ways to pump up the Laundromat business to redress the balance.
What other ratios might be valuable? Area population to revenue could be a good way to assess market penetration. Say you establish that the population five miles around your store is 30,000, and your volume is $200,000. That’s $6.67 per individual. You find that $10 is a statistical average. That means more work is required to bump up that figure. Maybe you could insert bargain circulars in car windshields at senior centers to encourage them to give you a try.
Another pivotal ratio could be utilities to revenue. Perhaps utilities are $3,500 and monthly volume is $20,000. Then the utilities ratio is 17.5%. This states that utilities are 17.5% of volume. You can use this ratio on a month-to-month basis to see that figures are in line. If the ratio goes up to 20%, you should investigate why. What skewed the results?
One important ratio is always type-of-sales breakdown. Keep track of your wash/dry revenue, drop-off service revenue and other revenue (vending, commercial accounts, etc.). Maybe the ratio will be 80/10/10. Every month, you can assess what is changing in your operation by putting the 80/10/10 template against actual sales. It could be that the sales are lagging in one area, and you might not know it until you check the ratio. Possibly commercial volume is slipping, and you don’t know it because wash/dry revenue has been particularly brisk. But identifying the decline will get you to go after more commercial accounts.
Another interesting ratio might be average customer spending. Here you will have to count the number of customers who come in for several days, add the daily revenue and divide by the number of customers. You might break it down by morning, afternoon, evening and weekend. You might find some surprises. For instance, what if you discover that the typical night customer spends a third more? An investigation into the reasons might lead you to discover that this customer has larger loads. Could you use this discovery in any way to generate more revenue? Perhaps approach these people and let them know about drop-off service. Maybe you should bring in high-end snacks at night and earn extra money.
If you have multiple stores, you would be interested in sales per square foot. Take the sales and do the math. If you have a $300,000 volume in a 2,500-square-foot Laundromat, that’s $120 per square foot. How does this compare with a $150,000 volume operation in a 1,100-square-foot store? That’s $136 per square foot. Even though the larger store does more volume, the smaller store has a higher space utilization. That might tell you to search for smaller locations instead of larger locations. Maybe you want to open another store in the same area. Retail strip malls might also be the way to go.
You could do the same ratio with occupancy costs (rental plus extras) to sales, and see if this backs up the space utilization findings. If you have attendants, you can play with the numbers to see if business is up or down when the attendant is on duty. Does having someone around to keep your laundry clean boost business?
KNOWLEDGE HELPS
Again, ratio analysis doesn’t take the place of analyzing P&L statements. But knowing the shorthand answers to a variety of business questions will quicken your management response time. It’s also important to not just sit on the material. When a ratio gets out of whack, you know you must take quick action, and not wait for the next report.
Secondly, knowing these ratios would be useful if you got together periodically with other laundry owners to discuss and evaluate your businesses. Each of you could lay out the ratios and compare. I bet this would lead to some interesting discussions.
Consider quick ratios as shorthand tools to better manage your laundry.
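The arithmetic behind these ratios is simple enough to script; here is a minimal Python sketch using the article's own example figures (the variable names are my own, not the author's):

labor, executive = 2000, 1000
labor_share = labor / (labor + executive)        # 0.667 -> "labor is 67% of total payroll"
utilities, monthly_volume = 3500, 20000
utilities_ratio = utilities / monthly_volume     # 0.175 -> utilities are 17.5% of volume
sales, square_feet = 300000, 2500
sales_per_sqft = sales / square_feet             # 120.0 dollars per square foot
print(labor_share, utilities_ratio, sales_per_sqft)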
|
{"url":"https://americancoinop.com/articles/some-simple-ratios-can-help-form-snapshot-your-store","timestamp":"2014-04-16T07:28:01Z","content_type":null,"content_length":"36679","record_id":"<urn:uuid:c9b2ee87-7ce3-45c4-a469-c15ccd2ad80d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Go4Expert - View Single Post - C Interview questions
Can anyone give me answers to these questions...
1)Which bit wise operator is suitable for turning off a particular bit in a number?
2)what will be printed out when the following code is executed:
3)Which one is equivalent to multiplying by 2?
* Left shifting a number by 1
* Left shifting an unsigned int or char by 1?
4)Write a function which gets the n bits from an unsigned integer x, starting from position p .(the right most digit is at position 0)
5)Write a function using bitwise opetators to check whether an integer is a power of 2 or not?
6) Write a program that swaps the contents of two variables without using any other variable, using bitwise operators?
7) Which bit wise operator is suitable for checking whether a particular bit is on or off?
8) Which bit wise operator is suitable for putting on a particular bit in a number?
9) Which bit wise operator is suitable for checking whether a particular bit is on or off?
10)Write a function setbits(x,p,n,y) that returns x with the n bits that begin at position p set to the rightmost n bits of y, leaving the other bits unchanged.
11)Write a function invert(x,p,n) that returns x with the n bits that begin at position p inverted leaving other unchanged.
Waiting for ur reply..
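These questions all reduce to a handful of standard bit-twiddling idioms. Here is a minimal sketch in Python; the operators (&, |, ^, ~, <<, >>) are written the same way in C, and getbits follows the usual K&R convention that position p is the leftmost bit of the requested field.

def clear_bit(x, i):
    return x & ~(1 << i)          # "turn off" a particular bit: AND with the inverted mask

def set_bit(x, i):
    return x | (1 << i)           # "put on" a particular bit: OR with the mask

def test_bit(x, i):
    return (x >> i) & 1           # check whether a particular bit is on or off: AND

def is_power_of_two(x):
    return x > 0 and (x & (x - 1)) == 0

def getbits(x, p, n):
    return (x >> (p + 1 - n)) & ~(~0 << n)   # n bits ending at position p (rightmost bit = 0)

# XOR swap of two variables without a temporary
a, b = 5, 9
a ^= b; b ^= a; a ^= b
print(a, b)                       # 9 5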
|
{"url":"http://www.go4expert.com/forums/c-interview-questions-post29768/","timestamp":"2014-04-17T13:22:21Z","content_type":null,"content_length":"6508","record_id":"<urn:uuid:857dc343-8f7b-4bc4-8404-cb8e5f2be6bd>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Portability portable
Stability experimental
Maintainer bos@serpentine.com
Mathematical functions for statistics.
choose :: Int -> Int -> Double
Compute the binomial coefficient n `choose` k. For values of k > 30, this uses an approximation for performance reasons. The approximation is accurate to 12 decimal places in the worst case.
7 `choose` 3 == 35
Beta function
:: Double p > 0
-> Double q > 0
-> Double x, must lie in [0,1] range
-> Double
Regularized incomplete beta function. Uses algorithm AS 63 by Majumder and Bhattacharjee.
:: Double logarithm of beta function
-> Double p > 0
-> Double q > 0
-> Double x, must lie in [0,1] range
-> Double
Regularized incomplete beta function. Same as incompleteBeta, but also takes the value of the logarithm of the beta function as its first argument.
:: Double p
-> Double q
-> Double a
-> Double
Compute inverse of regularized incomplete beta function. Uses initial approximation from AS109 and Halley method to solve equation.
Chebyshev polynomials
A Chebyshev polynomial of the first kind is defined by the following recurrence:
t 0 _ = 1
t 1 x = x
t n x = 2 * x * t (n-1) x - t (n-2) x
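As an informal illustration (not the library's actual implementation), a Clenshaw-style evaluation of such a series might look like this in Python, with coefficients in increasing order as in the signatures below:

def chebyshev(x, coeffs):
    # Clenshaw recurrence for sum_k coeffs[k] * T_k(x), coeffs[0] first
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]

# T_2(x) = 2x^2 - 1, so coefficients [0, 0, 1] should give 2*0.25 - 1 = -0.5
print(chebyshev(0.5, [0.0, 0.0, 1.0]))   # -0.5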
:: Vector v Double
=> Double Parameter of each function.
-> v Double Coefficients of each polynomial term, in increasing order.
-> Double
Evaluate a Chebyshev polynomial of the first kind. Uses Clenshaw's algorithm.
:: Vector v Double
=> Double Parameter of each function.
-> v Double Coefficients of each polynomial term, in increasing order.
-> Double
Evaluate a Chebyshev polynomial of the first kind. Uses Broucke's ECHEB algorithm, and his convention for coefficient handling, and so gives different results than chebyshev for the same inputs.
factorial :: Int -> Double
Compute the factorial function n!. Returns ∞ if the input is above 170 (above which the result cannot be represented by a 64-bit Double).
Gamma function
logGamma :: Double -> Double
Compute the logarithm of the gamma function Γ(x). Uses Algorithm AS 245 by Macleod.
Gives an accuracy of 10–12 significant decimal digits, except for small regions around x = 1 and x = 2, where the function goes to zero. For greater accuracy, use logGammaL.
Returns ∞ if the input is outside of the range (0 < x ≤ 1e305).
logGammaL :: Double -> Double
Compute the logarithm of the gamma function, Γ(x). Uses a Lanczos approximation.
This function is slower than logGamma, but gives 14 or more significant decimal digits of accuracy, except around x = 1 and x = 2, where the function goes to zero.
Returns ∞ if the input is outside of the range (0 < x ≤ 1e305).
:: Double s
-> Double x
-> Double
Compute the normalized lower incomplete gamma function γ(s,x). Normalization means that γ(s,∞)=1. Uses Algorithm AS 239 by Shea.
invIncompleteGamma :: Double -> Double -> Double
Inverse incomplete gamma function. It's approximately inverse of incompleteGamma for the same s. So following equality approximately holds:
invIncompleteGamma s . incompleteGamma s = id
For invIncompleteGamma s p s must be positive and p must be in [0,1] range.
log1p :: Double -> Double
Compute the natural logarithm of 1 + x. This is accurate even for values of x near zero, where use of log(1+x) would lose precision.
log2 :: Int -> Int
O(log n) Compute the logarithm in base 2 of the given value.
Stirling's approximation
stirlingError :: Double -> Double
Calculate the error term of the Stirling approximation. This is only defined for non-negative values.
stirlingError n = log(n!) - log(sqrt(2*pi*n)*(n/e)^n)
:: Double x
-> Double np
-> Double
Evaluate the deviance term x log(x/np) + np - x.
• Broucke, R. (1973) Algorithm 446: Ten subroutines for the manipulation of Chebyshev series. Communications of the ACM 16(4):254–256. http://doi.acm.org/10.1145/362003.362037
• Clenshaw, C.W. (1962) Chebyshev series for mathematical functions. National Physical Laboratory Mathematical Tables 5, Her Majesty's Stationery Office, London.
• Lanczos, C. (1964) A precision approximation of the gamma function. SIAM Journal on Numerical Analysis B 1:86–96. http://www.jstor.org/stable/2949767
• Loader, C. (2000) Fast and Accurate Computation of Binomial Probabilities. http://projects.scipy.org/scipy/raw-attachment/ticket/620/loader2000Fast.pdf
• Macleod, A.J. (1989) Algorithm AS 245: A robust and reliable algorithm for the logarithm of the gamma function. Journal of the Royal Statistical Society, Series C (Applied Statistics) 38
(2):397–402. http://www.jstor.org/stable/2348078
• Shea, B. (1988) Algorithm AS 239: Chi-squared and incomplete gamma integral. Applied Statistics 37(3):466–473. http://www.jstor.org/stable/2347328
• K. L. Majumder, G. P. Bhattacharjee (1973) Algorithm AS 63: The Incomplete Beta Integral. /Journal of the Royal Statistical Society. Series C (Applied Statistics)/ Vol. 22, No. 3 (1973), pp.
409-411. http://www.jstor.org/pss/2346797
• K. L. Majumder, G. P. Bhattacharjee (1973) Algorithm AS 64: Inverse of the Incomplete Beta Function Ratio. /Journal of the Royal Statistical Society. Series C (Applied Statistics)/ Vol. 22, No. 3
(1973), pp. 411-414 http://www.jstor.org/pss/2346798
• G. W. Cran, K. J. Martin and G. E. Thomas (1977) Remark AS R19 and Algorithm AS 109: A Remark on Algorithms: AS 63: The Incomplete Beta Integral AS 64: Inverse of the Incomplete Beta Function
Ratio. /Journal of the Royal Statistical Society. Series C (Applied Statistics)/ Vol. 26, No. 1 (1977), pp. 111-114 http://www.jstor.org/pss/2346887
|
{"url":"http://hackage.haskell.org/package/statistics-0.10.0.1/docs/Statistics-Math.html","timestamp":"2014-04-18T06:07:25Z","content_type":null,"content_length":"24783","record_id":"<urn:uuid:f4812b30-4f20-4915-9565-57282939e50b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rancho Dominguez, CA
Rancho Palos Verdes, CA 90275
Math and Physics Tutor
...In both high school and university, I enjoyed helping fellow students with maths and physics classwork and homework. I worked at a tutoring center in Rancho Cucamonga from December 2011, and I
loved tutoring students in the areas of
, algebra, geometry,...
Offering 10+ subjects including calculus
|
{"url":"http://www.wyzant.com/Rancho_Dominguez_CA_Calculus_tutors.aspx","timestamp":"2014-04-21T12:41:09Z","content_type":null,"content_length":"61887","record_id":"<urn:uuid:8fe1a6e1-533c-4a5d-94a5-39e321b79842>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Black surfaces [Archive] - OpenGL Discussion and Help Forums
I've written a very simple program that displays a small plane. Whenever I try to scale the plane its color changes from white to grey and then to black if I scale it enough. I do not have ambient
light enabled. Light 0 is located at 0,0,1 and its color is (1,1,1,1). The viewing volume is defined by ortho with the following parameters (-w/2,w/2,-h/2,h/2,-1,3) where w and h are the viewport
width and height. The model is translated to (0,0,-1) and is not rotated. The model material is set with parameters (GL_FRONT, GL_AMBIENT_AND_DIFFUSE, (1,1,1,1)). What gives? How come color changes?
If I turn on ambient light to (1,1,1,1) I get a white surface again, but not when just light 0 is enabled.
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-138659.html","timestamp":"2014-04-17T03:59:56Z","content_type":null,"content_length":"12274","record_id":"<urn:uuid:8b3ae89e-77a9-4f65-8774-2d41ab28f801>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Walnut Park, CA Math Tutor
Find a Walnut Park, CA Math Tutor
...I am lucky enough to be able to stay home with my 3 year old son while continuing to teach students in an online environment. I would be happy to tutor anyone in need of assistance in any
science, math or elementary subject. I am able to tutor students online or at my home or your home, whatever is easier for you!
21 Subjects: including prealgebra, anatomy, reading, algebra 1
...I am currently enrolled at USC pursuing a graduate degree in engineering. I previously graduated from UCLA with a bachelor's of science degree in environmental science. I minored in
atmospheric and oceanic science, and have engaged in years of water quality research.
41 Subjects: including prealgebra, ACT Math, SAT math, English
...I have worked professionally on the stage as an actor and a musician, and also have experience working an eclectic mix of jobs including but not limited to the following: stone masonry &
carpentry; guiding spelunking tours; volunteering as a wheelchair operator in rural France; hoop breaking and ...
23 Subjects: including algebra 2, calculus, English, precalculus
...I have an exciting degree in Physics from the University of California at Irvine; I have specialized in mathematics (Algebra I, Algebra II, trigonometry, Calculus), which is the predominant
subject that I have been teaching at a local school district. Why do I like to help students in math? Because I love to see ALL students enjoy mathematics as much as I do.
8 Subjects: including algebra 1, algebra 2, calculus, trigonometry
I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always
work with students to overcome obstacles that they might have.
37 Subjects: including prealgebra, SAT math, algebra 1, algebra 2
Related Walnut Park, CA Tutors
Walnut Park, CA Accounting Tutors
Walnut Park, CA ACT Tutors
Walnut Park, CA Algebra Tutors
Walnut Park, CA Algebra 2 Tutors
Walnut Park, CA Calculus Tutors
Walnut Park, CA Geometry Tutors
Walnut Park, CA Math Tutors
Walnut Park, CA Prealgebra Tutors
Walnut Park, CA Precalculus Tutors
Walnut Park, CA SAT Tutors
Walnut Park, CA SAT Math Tutors
Walnut Park, CA Science Tutors
Walnut Park, CA Statistics Tutors
Walnut Park, CA Trigonometry Tutors
Nearby Cities With Math Tutor
August F. Haw, CA Math Tutors
Baldwin Hills, CA Math Tutors
Bicentennial, CA Math Tutors
Broadway Manchester, CA Math Tutors
Dockweiler, CA Math Tutors
Firestone Park, CA Math Tutors
Green, CA Math Tutors
Greenmead, CA Math Tutors
Hancock, CA Math Tutors
Hollyglen, CA Math Tutors
Huntington Park Math Tutors
Lennox, CA Math Tutors
Rosewood, CA Math Tutors
Sanford, CA Math Tutors
South Gate Math Tutors
|
{"url":"http://www.purplemath.com/Walnut_Park_CA_Math_tutors.php","timestamp":"2014-04-18T11:23:04Z","content_type":null,"content_length":"24166","record_id":"<urn:uuid:90dcb86c-29d9-424b-a635-1c0df9e3d8c2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Divide and Conquer
I tried to understand this piece of algorithm analysis through slides and YouTube, but I can't understand why we multiply T(N/2) by 4 and why it is N/2, not N, at the end.
X = a·2^(N/2) + b
Y = c·2^(N/2) + d
XY = ac·2^N + (ad + bc)·2^(N/2) + bd
if |X| = |Y| = 1 return (XY);
Break X into a:b and Y into c:d;
return ( Mult(a,c)·2^N + (Mult(a,d) + Mult(b,c))·2^(N/2) + Mult(b,d) );
T(1) = k for some constant k
T(N) = 4 T(N/2) + k'N for some constant k'
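One way to read the recurrence: the pseudocode makes four recursive calls — Mult(a,c), Mult(a,d), Mult(b,c) and Mult(b,d) — each on halves of roughly N/2 bits, which gives the 4 T(N/2) term, while the shifts by 2^N and 2^(N/2) and the additions around them cost time proportional to N, which gives the k'N term:
$T(N) = 4\,T(N/2) + k'N$
Unrolling this (or applying the master theorem, since $N^{\log_2 4} = N^2$) gives $T(N) = O(N^2)$ for this four-multiplication split.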
|
{"url":"http://www.cplusplus.com/forum/beginner/116579/","timestamp":"2014-04-16T10:23:09Z","content_type":null,"content_length":"6864","record_id":"<urn:uuid:ec160e59-d96a-4038-9810-3752fdc43fe9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Auxiliary equation
In a linear differential equation with constant coefficients, this is the equation you arrive at when you let y = e^(mx). It is also known as the characteristic equation. For example...
y'' + 2y' - 3y = 0
m^2 e^mx + 2m e^mx - 3 e^mx = 0
Divide by e^mx (since it cannot equal 0):
m^2 + 2m - 3 = 0 <----- auxiliary equation
(m + 3)(m - 1) = 0, so m = -3 or m = 1
general solution: y = c[1]e^(-3x) + c[2]e^(x)
The problems don't always end up with such a simple general equation for y, but the auxiliary equation is obtained just the same way. It's then factored to solve for m, which in turn solves the
differential equation.
|
{"url":"http://everything2.com/title/Auxiliary+equation?showwidget=showCs1285980","timestamp":"2014-04-21T00:36:32Z","content_type":null,"content_length":"25829","record_id":"<urn:uuid:13bd8b17-0b98-4c31-83f9-a6ddeb733195>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A periodogram is an estimate of the spectral density of a signal. The term was coined by Arthur Schuster, as in the following quote:
THE PERIODOGRAM. It is convenient to have a word for some representation of a variable quantity which shall correspond to the 'spectrum' of a luminous radiation. I propose the word periodogram, and define it more particularly in the following way: Let
$\frac{T}{2}a = \int_{t_1}^{t_1+T} f(t)\cos(kt)\,dt$
$\frac{T}{2}b = \int_{t_1}^{t_1+T} f(t)\sin(kt)\,dt$
where T may for convenience be chosen to be equal to some integer multiple of $2\pi/k$, and plot a curve with ordinate
$r = \sqrt{a^2+b^2}$
; this curve, or, better, the space between this curve and the axis of abscissæ, represents the periodogram of f(t).
Note that the term periodogram may also be used to describe the quantity $r^2$, which is its common meaning in astronomy (as in "the modulus-squared of the discrete Fourier transform of the time
series (with the appropriate normalisation)"). See Scargle (1982) for a detailed discussion in this context.
A spectral plot refers to a smoothed version of the periodogram. Smoothing is performed to reduce the effect of measurement noise.
In practice, the periodogram is often computed from a finite-length digital sequence using the fast Fourier transform (FFT). The raw periodogram is not a good spectral estimate because of spectral
bias and the fact that the variance at a given frequency does not decrease as the number of samples used in the computation increases.
The spectral bias problem arises from a sharp truncation of the sequence, and can be reduced by first multiplying the finite sequence by a window function which truncates the sequence gracefully
rather than abruptly.
The variance problem can be reduced by smoothing the periodogram. Various techniques to reduce spectral bias and variance are the subject of spectral estimation.
One such technique to solve the variance problem is also known as the method of averaged periodograms. The idea behind it is to divide the set of samples into N sets of M samples, compute the DFT of each set, square it to get the power spectral density, and compute the average of all of them. This reduces the standard deviation of the estimate by a factor of $\frac{1}{\sqrt{N}}$.
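A minimal sketch of this averaging, assuming NumPy and a simple segment-and-average scheme (dividing each squared DFT by M is just one common normalization), could look like:

import numpy as np

def averaged_periodogram(x, M):
    # Split the record into segments of M samples, square the DFT of each, and average
    segments = np.reshape(x[: len(x) // M * M], (-1, M))
    spectra = np.abs(np.fft.rfft(segments, axis=1)) ** 2 / M
    return spectra.mean(axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)
psd = averaged_periodogram(x, 256)   # 16 averaged segments -> lower variance per frequency bin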
|
{"url":"http://www.reference.com/browse/absciss","timestamp":"2014-04-18T03:45:56Z","content_type":null,"content_length":"74882","record_id":"<urn:uuid:1fd5fc26-0142-40e5-b991-b32ea6144dab>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What theorem of Liouville's is Gian-Carlo Rota referring to here?
I am very curious about this remark in Lesson Four of Rota's talk, Ten Lessons I Wish I Had Learned Before I Started Teaching Differential Equations:
"For second order linear differential equations, formulas for changes of dependent and independent variables are known, but such formulas are not to be found in any book written in this century,
even though they are of the utmost usefulness.
"Liouville discovered a differential polynomial in the coefficients of a second order linear differential equation which he called the invariant. He proved that two linear second order
differential equations can be transformed into each other by changes of variables if and only if they have the same invariant. This theorem is not to be found in any text. It was stated as an
exercise in the first edition of my book, but my coauthor insisted that it be omitted from later editions."
Does anyone know where to find this theorem?
Hyperbole is the worst thing in the universe, and Gian-Carlo Rota used it frequently. In this article, he wrote: "The Administrative Director of the MIT mathematics department, who exercises
supreme authority upon the faculty's teaching, has only to wave a copy of my book at me, while staring at me in silence. At her prompting, I bow and fall into line; I will be the lecturer in the
dreaded course for one more year, and[...]". Some years ago I called this sentence to the attention of the said "administrative director, and she said there's not a word of truth in it. – Michael
Hardy Apr 22 '11 at 21:46
Hyperbole is far from the worst thing in the universe. Rota's flair for the dramatic is part of what made him such an engaging lecturer. – AVS Apr 23 '11 at 2:03
Um.....AVS, did you miss something? But then, who needs rhetorical questions? – Michael Hardy Apr 23 '11 at 5:58
BTW, I plagiarized that line from a facebook friend. He wrote it on his wall and put the period after the word "universe" and didn't go on from there. Maybe adding more stuff confuses the issue. –
Michael Hardy Apr 23 '11 at 6:04
I apologize for misinterpreting your comment. Your meaning and tone would have been entirely clear had you ended your comment at the word universe as you suggest (and I'm sure Rota would have
heartily approved). – AVS Apr 23 '11 at 9:43
2 Answers
See E. Hille, Ordinary differential equations in the complex domain, Wiley, New York, 1976. The Liouville transformation is given on Page 179. The invariant mentioned by Rota is
the function $Q(z)$ appearing as a coefficient of the equation in the canonical form.
Thank you very much, I will look up this reference. – lkeer Apr 23 '11 at 7:21
Kamke's classic compendium [1] of ODE solutions and solution methods displays this invariant in Part I, equation §25.1(4). The invariant is given for the more famous equations (Bessel,
Legendre, hypergeometric, ...) of the large list of second order linear equations in Part III Chapter II.
[1] Kamke, E. Differentialgleichungen: Lösungen und Lösungsmethoden. Vol. 1. First published in 1944.
Unfortunately, it seems that this book never appeared in English translation.
|
{"url":"http://mathoverflow.net/questions/62630/what-theorem-of-liouvilles-is-gian-carlo-rota-referring-to-here/62649","timestamp":"2014-04-21T08:09:50Z","content_type":null,"content_length":"60170","record_id":"<urn:uuid:fb27c55e-78c4-49d2-a363-c4be3ee08857>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An improved bound on the minimal number of edges in color-critical graphs
A graph $G$ is $k$-color-critical (or simply $k$-critical) if $\chi(G)=k$ but $\chi(G') < k$ for every proper subgraph $G'$ of $G$, where $\chi(G)$ denotes the chromatic number of $G$. Consider the
following problem: given $k$ and $n$, what is the minimal number of edges in a $k$-critical graph on $n$ vertices? It is easy to see that every vertex of a $k$-critical graph $G$ has degree at least
$k-1$, implying $|E(G)|\geq {{k-1}\over {2}}|V(G)|$. Gallai improved this trivial bound to $|E(G)|\geq ({{k-1}\over {2}}+{{k-3}\over {2(k^2-3)}})|V(G)|$ for every $k$-critical graph $G$ (where $k\geq 4$), which is not a clique $K_k$ on $k$ vertices. In this note we strengthen Gallai's result by showing
Theorem Suppose $k\geq 4$, and let $G=(V,E)$ be a $k$-critical graph on more than $k$ vertices. Then $ |E(G)|\geq ({{k-1}\over {2}}+{{k-3}\over {2(k^2-2k-1)}})|V(G)| $
|
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v5i1r4","timestamp":"2014-04-20T08:20:26Z","content_type":null,"content_length":"14584","record_id":"<urn:uuid:e8ae4df6-1101-4f98-8675-e2b6c2e5e7c2>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Definite Integrals
The definite integral of a function is closely related to the antiderivative and indefinite integral of a function. The primary difference is that the definite integral, if it exists, is a real number, while the
latter two represent an infinite number of functions that differ only by a constant. The relationship between these concepts will be discussed in the section on the Fundamental Theorem of
Calculus, and you will see that the definite integral will have applications to many problems in calculus.
The development of the definition of the definite integral begins with a function f( x), which is continuous on a closed interval [ a, b]. The given interval is partitioned into “ n” subintervals
that, although not necessary, can be taken to be of equal lengths (Δ x). An arbitrary domain value, x [i], is chosen in each subinterval, and its subsequent function value, f( x [i]), is determined.
The product of each function value times the corresponding subinterval length is determined, and these “ n” products are added to determine their sum. This sum is referred to as a Riemann sum and may
be positive, negative, or zero, depending upon the behavior of the function on the closed interval. For example, if f( x) > 0 on [ a, b], then the Riemann sum will be a positive real number. If f( x)
< 0 on [ a, b], then the Riemann sum will be a negative real number. The Riemann sum of the function f( x) on [ a, b] is expressed as
A Riemann sum may, therefore, be thought of as a “sum of n products.”
Example 1: Evaluate the Riemann sum for f( x) = x ^2 on [1,3] using four subintervals of equal length, where x [i] is the right endpoint in the ith subinterval (see Figure 1).
Figure 1 A Riemann sum with four subintervals.
Because the subintervals are to be of equal lengths, you find that Δ x = (3 − 1)/4 = 0.5, with right endpoints x = 1.5, 2, 2.5, and 3.
The Riemann sum for four subintervals is (0.5)(1.5^2 + 2^2 + 2.5^2 + 3^2) = (0.5)(21.5) = 10.75.
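A short numerical sketch in Python reproduces this value and shows the right-endpoint sums approaching the exact value 26/3 as n grows:

def riemann_sum(f, a, b, n):
    # Right-endpoint Riemann sum with n equal subintervals
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

f = lambda x: x ** 2
print(riemann_sum(f, 1, 3, 4))      # 10.75, matching Example 1
print(riemann_sum(f, 1, 3, 10000))  # ~8.667, approaching 26/3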
If the number of subintervals is increased repeatedly, the effect would be that the length of each subinterval would get smaller and smaller. This may be restated as follows: If the number of
subintervals increases without bound ( n → + ∞), then the length of each subinterval approaches zero (Δ x → 0). This limit of a Riemann sum, if it exists, is used to define the definite integral of
a function on [ a, b]. If f( x) is defined on the closed interval [ a, b] then the definite integral of f( x) from a to b is defined as
if this limit exists.
The function f( x) is called the integrand, and the variable x is the variable of integration. The numbers a and b are called the limits of integration with a referred to as the lower limit of
integration while b is referred to as the upper limit of integration.
Note that the symbol ∫, used with the definite integral, is the same symbol used previously for the indefinite integral of a function. The reason for this will be made more apparent in the
following discussion of the Fundamental Theorem of Calculus. Also, keep in mind that the definite integral is a unique real number and does not represent an infinite number of functions that result
from the indefinite integral of a function.
The question of the existence of the limit of a Riemann sum is important to consider because it determines whether the definite integral exists for a function on a closed interval. As with
differentiation, a significant relationship exists between continuity and integration and is summarized as follows: If a function f( x) is continuous on a closed interval [ a, b], then the definite
integral of f( x) on [ a, b] exists and f is said to be integrable on [ a, b]. In other words, continuity guarantees that the definite integral exists, but the converse is not necessarily true.
Unfortunately, the fact that the definite integral of a function exists on a closed interval does not imply that the value of the definite integral is easy to find.
Certain properties are useful in solving problems requiring the application of the definite integral. Some of the more common properties are
3. c is a constant
5. Sum Rule: ∫ from a to b [ f( x) + g( x)] dx = ∫ from a to b f( x) dx + ∫ from a to b g( x) dx
6. Difference Rule: ∫ from a to b [ f( x) − g( x)] dx = ∫ from a to b f( x) dx − ∫ from a to b g( x) dx
7. If
8. If
9. If
10. If a, b, and c are any three points on a closed interval, then ∫ from a to c f( x) dx = ∫ from a to b f( x) dx + ∫ from b to c f( x) dx
11. The Mean Value Theorem for Definite Integrals: If f( x) is continuous on the closed interval [ a, b], then at least one number c exists in the open interval ( a, b) such that
The value of f( c) is called the average or mean value of the function f( x) on the interval [ a, b] and equals 1/( b − a) times ∫ from a to b f( x) dx.
Example 2: Evaluate
Example 3: Given that
Example 4: Given that
Example 5 Evaluate
Example 6: Given that
Example 7: Given that
Example 8: Given that
Example 9: Given that c values that satisfy the Mean Value Theorem for the given function on the closed interval.
By the Mean Value Theorem
Because c.
The Fundamental Theorem of Calculus
The Fundamental Theorem of Calculus establishes the relationship between indefinite and definite integrals and introduces a technique for evaluating definite integrals without using Riemann sums,
which is very important because evaluating the limit of Riemann sum can be extremely time‐consuming and difficult. The statement of the theorem is: If f( x) is continuous on the interval [ a, b], and
F( x) is any antiderivative of f( x) on [ a, b], then
In other words, the value of the definite integral of a function on [ a, b] is the difference of any antiderivative of the function evaluated at the upper limit of integration minus the same
antiderivative evaluated at the lower limit of integration. Because the constants of integration are the same for both parts of this difference, they are ignored in the evaluation of the definite
integral because they subtract and yield zero. Keeping this in mind, choose the constant of integration to be zero for all definite integral evaluations after Example 10.
Example 10: Evaluate
Because the general antiderivative of x ^2 is (1/3)x ^3 + C, you find that
Example 11: Evaluate
Because an antiderivative of sin x is – cos x, you find that
Example 12: Evaluate
Example 13: Evaluate
Because an antiderivative of x ^2 − 4 x + 1 is (1/3) x ^3 − 2 x ^2 + x, you find that
The numerous techniques that can be used to evaluate indefinite integrals can also be used to evaluate definite integrals. The methods of substitution and change of variables, integration by parts,
trigonometric integrals, and trigonometric substitution are illustrated in the following examples.
Example 14: Evaluate
Using the substitution method with
the limits of integration can be converted from x values to their corresponding u values. When x = 1, u = 3 and when x = 2, u = 6, you find that
Note that when the substitution method is used to evaluate definite integrals, it is not necessary to go back to the original variable if the limits of integration are converted to the new variable
Example 15: Evaluate
Using the substitution method with u = sin x + 1, du = cos x dx, you find that u = 1 when x = π and u = 0 when x = 3π/2; hence,
Note that you never had to return to the trigonometric functions in the original integral to evaluate the definite integral.
Example 16: Evaluate
Using integration by parts with
you find that
Example 17: Evaluate
Using integration by parts with
Example 18: Evaluate
Example 19: Evaluate
Example 20: Evaluate
Because the integrand contains the form a ^2 + x ^2,
Figure 2 Diagram for Example 20.
Example 21: Evaluate
Because the radical has the form
Figure 3 Diagram for Example 21.
|
{"url":"http://www.cliffsnotes.com/math/calculus/calculus/integration/definite-integrals","timestamp":"2014-04-16T13:34:19Z","content_type":null,"content_length":"163467","record_id":"<urn:uuid:25563090-5033-4e4a-bfbb-ab6886c2250b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Possible Answer
2006 USA Math Olympiad qualification score set. According to Steve Dunbar, AMC Director, the cutoff level for AMC 12 is 217 and AMC 10 is 212. The AIME floor is 8. ... USAMO Cutoff Levels Set. 2006
USA Math Olympiad qualification score set. According to Steve Dunbar, ... - read more
2.Selection to the USAMO will be based on the USAMO index which is defined as AMC 12 Score + 10 * AIME Score. Selection to the USAJMO will be based on the USAJMO index ... 2013: 209.0 for USAMO:
210.5 for USAJMO: 264 USAMO; 231 ... Mathematical Olympiad Program (also known as "MOP") ... - read more
|
{"url":"http://www.askives.com/2013-mop-scores-cutoff-for-usajmo.html","timestamp":"2014-04-18T19:50:04Z","content_type":null,"content_length":"36795","record_id":"<urn:uuid:0707a0ed-c4fe-4e68-a778-f39e571bd81a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
WTB 1200-1500rms amp.
@ 2 ohm. Lemme know what you got and shoot me a price shipped to 56721. Thanks
Re: WTB 1200-1500rms amp.
A new vfl 4480. 1400 rms at 2 ohm and 2200 at 1 ohm $350 shipped
Re: WTB 1200-1500rms amp.
Re: WTB 1200-1500rms amp.
Re: WTB 1200-1500rms amp.
im going to see if i get anymore offers within a couple hours if not i will pm you ok sir
Re: WTB 1200-1500rms amp.
No problem. FYI. I have a loose new inthe box rd 1750. Which should give you what you want power wise It's a beast of an amp. The v2 amps were very under rated I clamped 2500 at 1 ohm from this
little beast
Re: WTB 1200-1500rms amp.
Re: WTB 1200-1500rms amp.
Re: WTB 1200-1500rms amp.
I have a rockford fosgate x7a. Oldschool beast. almost 1700 at 2ohms. Its missing the cover. Its ugly, but protected.
This picture isnt mine, but just to show without cover. would do 125 shipped.
Re: WTB 1200-1500rms amp.
is it 1ohm stable?
I have a rockford fosgate x7a. Oldschool beast. almost 1700 at 2ohms. Its missing the cover. Its ugly, but protected.
This picture isnt mine, but just to show without cover. would do 125 shipped.
Re: WTB 1200-1500rms amp.
Ive heard it is with good electric, never tried it. i lost the birth sheet, it was around 1700@2ohms
Re: WTB 1200-1500rms amp.
so you've never powered it yet? how do you know it works because i am interested just seems kind of sketchy
---------- Post added at 12:50 PM ---------- Previous post was at 12:49 PM ----------
sorry miss read your post my bad forget my previous post lol
Re: WTB 1200-1500rms amp.
I wouldn't run it at 1 ohm with out a very strong electrical
Re: WTB 1200-1500rms amp.
Re: WTB 1200-1500rms amp.
Just wondering if it has to be 2 ohm. I've got a mmats m2000.05D that is rated at 2000@.5, 1100@1 and 550@2. However has been clamped doing 2200 rise to 1.4 @12.6v. So it should do 1100+@2
|
{"url":"http://www.caraudio.com/forums/wanting-buy-wtb-car-audio/593345-wtb-1200-1500rms-amp-print.html","timestamp":"2014-04-19T18:07:25Z","content_type":null,"content_length":"18219","record_id":"<urn:uuid:ccd86614-e1f6-4722-861c-4d363365d2f8>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newton and Gravitation
Gravity between planets
We can now use Newton's Law to derive some results concerning planets in circular orbits. Although we know from Kepler's Laws that the orbits are not circular, in most cases approximating the orbit
by a circle gives satisfactory results. When two massive bodies exert a gravitational force on one another, we shall see (in the SparkNote on Orbits) that planets describe circular or elliptical
paths around their common center of mass. In the case of a planet orbiting the sun, however, the sun's mass is so much greater than the planets, that the center of mass lies well within the sun, and
in fact very close to its center. For this reason it is a good approximation to assume that the sun stays fixed (say at the origin) and the planets move around it. The force is then given by:
Figure %: Circular orbit around the sun.
F = G M_s m / r^2
where M_s is the mass of the sun and m the mass of the planet. This central force acting on the planet is exerting a centripetal force. We know that a centripetal motion has acceleration a = v^2/r, and thus F = m v^2/r. We can therefore write (note that in what follows r, without the vector arrow, denotes the magnitude of the position vector--that is, r = |r|):
G M_s m / r^2 = m v^2 / r
Rearranging we have that:
v = sqrt(G M_s / r)
Thus we have derived an expression for the speed of the planet orbiting the sun. However, we can also express the speed as the distance around the orbit divided by the time taken (the period): v = 2πr/T.
Squaring this and equating this with the result from above:
(2πr/T)^2 = G M_s / r, which gives T^2 = (4π^2 / (G M_s)) r^3.
Thus we have derived Kepler's Third Law for circular orbits from the Universal Law of Gravitation.
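A quick numerical check of this relation for the earth's orbit, using approximate values for G, the sun's mass, and the earth–sun distance (assumed constants, not taken from the text above):

import math

GM_sun = 6.67e-11 * 1.99e30          # G times the sun's mass, m^3/s^2 (approximate)
r = 1.496e11                         # mean earth-sun distance in metres (approximate)
T = 2 * math.pi * math.sqrt(r ** 3 / GM_sun)
print(T / 86400)                     # roughly 365 days, as expected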
Gravity near the earth
We can apply the Universal Law of Gravitation to objects near the earth also. For an object at or near the surface of the earth, the force due to gravity acts (for reasons that will become clearer in
the section on Newton's Shell Theory) toward the center of the earth. That is, it acts downwards because every particle in the earth is attracting the object. The magnitude of the force on an object
of mass m is given by:
F = GM[e]m/r[e]^2
where M[e] is the mass of the earth and r[e] is the radius of the earth. Let us calculate the constant
GM[e]/r[e]^2 = (6.67×10^-11)(5.98×10^24)(6.4×10^6)^-2 ≈ 9.7 m/sec^2
This is the acceleration due to gravity on the earth (the figure is usually given as 9.8 m/sec^2, but the value varies considerably at different places on the earth's surface). Thus if we rename the constant g = GM[e]/r[e]^2, then we have the familiar equation
F = mg
which determines all free-fall motion near the earth.
We can also calculate the value of g that an astronaut in a space shuttle would feel orbiting at a height of 200 kilometers above the earth:
g[1] = GM[e]/(r[e] + h)^2
     = (6.67×10^-11)(5.98×10^24)(6.4×10^6 + 2×10^5)^-2
     = 9.16 m/sec^2
This small reduction in g is not sufficient to explain why the astronauts feel "weightless." In fact, the weightlessness arises because the shuttle's orbit is a constant free-fall around the earth. An orbit is essentially a perpetual "falling" around a planet--since an orbiting shuttle and its occupant astronauts are falling with the same acceleration as the gravitational field, they feel no gravitational force relative to their surroundings.
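A minimal sketch of the g calculation above, reusing the same constants (added purely as an illustration):

G = 6.67e-11      # N m^2 / kg^2
M_e = 5.98e24     # mass of the earth, kg
r_e = 6.4e6       # radius of the earth, m
h = 2.0e5         # altitude of the shuttle, m

g_surface = G * M_e / r_e**2          # acceleration at the surface
g_orbit = G * M_e / (r_e + h)**2      # acceleration at 200 km altitude

print(f"g at the surface: {g_surface:.2f} m/s^2")   # about 9.7 with these rounded inputs
print(f"g at 200 km:      {g_orbit:.2f} m/s^2")     # about 9.16, as in the text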
Determining G
Figure %: A schematic diagram of Cavendish's torsion apparatus.
Because the gravitational force between everyday-sized objects is very small, the gravitational constant, G , is extremely difficult to measure accurately. Henry Cavendish (1731-1810) devised a
clever apparatus for measuring the gravitational constant. A fiber is attached to the center of the beam to which m and m' are attached, as shown in the figure. This is allowed to reach an equilibrium, untwisted state before the two larger masses M and M' are lowered next to them. The gravitational force between the two pairs of masses causes the fiber to twist until the restoring torque of the fiber just balances the gravitational torque. By appropriate calibration (knowing how much force causes how much twisting), the gravitational force may be measured. Since the masses and the distances
between them may also be measured, only G remains unknown in the Universal Law of Gravitation. Thus G can be calculated from the measured quantities. Accurate measurements of G now place the value at
6.673×10^-11 N.m ^2 /kg ^2 .
|
{"url":"http://www.sparknotes.com/physics/gravitation/newton/section2.rhtml","timestamp":"2014-04-16T13:38:29Z","content_type":null,"content_length":"68153","record_id":"<urn:uuid:82db10ec-07b9-432c-95a5-07b61266e8dc>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Westmont, NJ Algebra 2 Tutor
Find a Westmont, NJ Algebra 2 Tutor
...In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems. In addition, I offer FREE ALL NIGHT email/phone support just
before the “big" exam, for students who pull "all nighters". One quick note about my cancellation policy...
14 Subjects: including algebra 2, physics, calculus, ASVAB
I am a fun, helpful, and experienced tutor for the Sciences (biology and chemistry), Math (geometry, pre-algebra, algebra, and pre-calculus), English/Grammar, and the SATs. For the SAT, I implement
a results driven and rigorous 7 week strategy. PLEASE NOTE: I only take serious SAT students who have...
26 Subjects: including algebra 2, English, chemistry, reading
...Thank You, Robert O.Learn Music Theory from a classically trained composer. I graduated in May 2010 from West Chester University of Pennsylvania, as a bachelor of music in composition.
Throughout my time as a musician, I've been tutoring and helping friends and classmates.
8 Subjects: including algebra 2, algebra 1, Java, prealgebra
...Math is a subject that can be a bit difficult for some folks, so I really love the chance to break down barriers and make math accessible for students that are struggling with aspects of math.
I believe that I have a unique ability to present and demonstrate various topics in mathematics in a fu...
22 Subjects: including algebra 2, calculus, statistics, geometry
I am a graduate from the University of Rochester's 2013 class. I have a B.S. in biology and, while this is my first time tutoring, I have extensive experience working with other students. I was
a teaching assistant for the introductory biology class at my undergraduate institution for two years.
16 Subjects: including algebra 2, chemistry, biology, algebra 1
|
{"url":"http://www.purplemath.com/Westmont_NJ_algebra_2_tutors.php","timestamp":"2014-04-19T23:45:08Z","content_type":null,"content_length":"24337","record_id":"<urn:uuid:aafb6f76-8a2e-411e-90a4-2026c13a86ca>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Computer algebra package for Lie group computations
LiE is a computer algebra system that is specialised in computations involving (reductive) Lie groups and their representations. It is publicly available for free in source code. For a description
of its characteristics, we refer to the following sources of information.
Version 2.2.2
The current version of LiE is 2.2.2, which is more or less unchanged since about the year 2000. Minor bugfixes do get applied now and then.
The LiE manual
As indicated above, the manual is included in electronic form in the distribution. As a convenience for those without TeX installed, here is a PDF version of the manual. A paper version has also been
published; although it is no longer in print, it may be referred to as follows:
M. A. A. van Leeuwen, A. M. Cohen and B. Lisser, "LiE, A Package for Lie Group Computations", Computer Algebra Nederland, Amsterdam, ISBN 90-74116-02-7, 1992
Obtaining LiE
Since July 1996, LiE has been publicly available for free. There have been questions about what this means exactly, notably whether it can be redistributed by third parties, and under what licence. The
answer is that the institute CWI at which LiE was developed (and therefore holds the original copyright) has given the developers permission to do whatever they want with the source code, and that
the project leader Arjeh Cohen has decided that LiE can be distributed under the GNU Lesser General Public Licence. The current distribution does not contain any of the blurb necessary to make this
clear from the distribution itself; feel free to add this if you want to distribute.
There are various versions of the software you may obtain:
If you want to try out some computations with LiE without having to install it, you can invoke LiE through the WWW, by simply filling in a form, and the result will be displayed.
If you want to compile and run the program, but need no documentation of the source files, you can fetch the tarfile for a compile only version.
You can also get a much more documented version of the source files, which use the CWEBx software documentation system. To use this version you need to download and compile the CWEBx system in
addition to the documented sources for LiE.
Detailed instructions on how to install each of these versions can be found in the README files contained in the individual tarfiles. Currently, the Makefiles use the functionality of GNU make.
LiE on Macintosh OS X
If you want to install LiE on Mac OS X, please read this note.
|
{"url":"http://young.sp2mi.univ-poitiers.fr/~marc/LiE/","timestamp":"2014-04-21T04:35:00Z","content_type":null,"content_length":"4149","record_id":"<urn:uuid:2dec7063-2720-481c-8d54-6bad9f08237b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Section 4.1
Section 4.1: Blackbody Radiation
One of the first failures of classical theory came about during the analysis of radiation from opaque, or black, objects. Such black bodies radiated and had energy densities, energy per volume per
wavelength, u(λ), that depended on their temperature. The Rayleigh-Jeans formula for blackbody radiation, derived from the classical equipartition of energy theorem, gives the following
functional form for such an energy density:
u(λ) = 8πλ^−4k[B]T , (4.1)
where k[B] = 1.381 × 10^−23 J/K is Boltzmann's constant and T is the temperature in Kelvin. Select the "R - J" button on the animation and change the temperature to see how this curve varies. Note
the units on the graph (J/m^4 vs. microns) and that the graph's scale changes as you change the temperature. The Rayleigh-Jeans formula agrees well with the experimental results for very long
wavelengths (at low frequencies). As the wavelength of the radiation gets smaller (at high frequencies), the Rayleigh-Jeans formula states that the energy density of the radiation approaches
infinity. This does not agree with experiment, however, and the failure of this classical result to agree with experiment is called the ultraviolet catastrophe.
Planck solved this problem by treating energy not as continuously variable, but instead, as if it came in discrete units, E[γ], and for light, energy was proportional to frequency
E[γ] = hν = hc/λ , (4.2)
where h = 6.626 × 10^−34 J·s was a new constant, now called Planck's constant, tuned to fit the blackbody radiation data. When Planck did the u(λ) calculation with this assumption, he found:^1
u(λ) = 8πhcλ^−5/(e^(hc/λk[B]T) − 1) , (4.3)
which agrees with the experimental data. Select the "Planck" button on the animation and change the temperature to see how this curve varies with temperature. In addition, Wien's displacement law,
λ[max]T = 2.898 × 10^-3 m·K, (4.4)
for the wavelength corresponding to the maximum energy density per wavelength, can be verified by looking at the Planck curve.
Because of the agreement between the data and the Planck blackbody radiation law, selecting the "Planck and R - J" button shows just how poorly the Rayleigh-Jeans formula does in replicating the true
blackbody radiation curve in the small-wavelength limit.^2
^1For more details on the derivation of Planck's blackbody radiation law, see P. A. Tipler and R. A. Llewellyn, Modern Physics, W. H. Freeman and Company (1999) or see Section 15.5.
^2You may also see the Rayleigh-Jeans and Planck formulas in terms of frequency as
u(ν) = 8πν^2c^−3k[B]T (4.5)
u(ν) = 8πhν^3c^−3/(e^(hν/k[B]T) − 1), (4.6)
respectively. Note that the difference in form is not just due to the substitution ν = c/λ. Also note that because of this difference, the graphs of u(ν) vs. ν will look "flipped around" as compared
to u(λ) vs. λ. These graphs will also be peaked at different values of λ (c/ν).
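A minimal numerical sketch of Eqs. 4.1 and 4.3 (the temperature of 5000 K is chosen arbitrarily for the example) shows the agreement with Wien's law and the divergence of the Rayleigh-Jeans curve at short wavelengths:

import numpy as np

h = 6.626e-34    # Planck's constant, J s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann's constant, J/K

def u_rayleigh_jeans(lam, T):
    # Eq. 4.1: classical energy density per wavelength, J/m^4
    return 8 * np.pi * kB * T / lam**4

def u_planck(lam, T):
    # Eq. 4.3: Planck's energy density per wavelength, J/m^4
    return 8 * np.pi * h * c / (lam**5 * (np.exp(h * c / (lam * kB * T)) - 1))

T = 5000.0                                   # temperature in kelvin
lam = np.linspace(0.05e-6, 5e-6, 20000)      # wavelengths from 0.05 to 5 microns

lam_peak = lam[np.argmax(u_planck(lam, T))]
print(f"Planck peak at {lam_peak * 1e6:.3f} microns")
print(f"Wien's law:    {2.898e-3 / T * 1e6:.3f} microns")
print(f"R-J / Planck ratio at 0.1 micron: {u_rayleigh_jeans(1e-7, T) / u_planck(1e-7, T):.2e}")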
|
{"url":"http://www.compadre.org/PQP/quantum-need/section4_1.cfm","timestamp":"2014-04-19T04:59:59Z","content_type":null,"content_length":"19849","record_id":"<urn:uuid:f675e4bf-79d5-4070-abaf-9d72b4291902>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exercise 2.2
There are several ways to solve this problem. I used the approach of rotating around the origin incrementing the angle by 2π/n each time. The radius of the figure is given by application of the sine
rule to the isosceles triangle formed by two adjacent vertices and the origin.
type Side = Float
Construct a regular polygon, given number of sides n and side length s.
regularPolygon :: Int -> Side -> Shape
regularPolygon n s
  = let angleinc = pi * 2 / fromIntegral n
        radius   = s * sin ((pi - angleinc) / 2) / sin angleinc
        regularVerts 0 _     = []
        regularVerts n angle = (radius * cos angle, radius * sin angle)
                               : regularVerts (n-1) (angle + angleinc)
    in Polygon (regularVerts n 0)
Note that the vertex list is in anticlockwise order. This is also the case with Exercise 2.1. Note also the use of fromIntegral to convert between types.
This solution uses a transformation of a list rather than building one up with recursion:
regularPolygon :: Int -> Side -> Shape
regularPolygon n s = Polygon (map makeVertex [1 .. n])
  where makeVertex i = (radius * cos (angle i), radius * sin (angle i))
        radius       = s / (2 * sin halfExterior)
        angle i      = fromIntegral (2 * i) * halfExterior
        halfExterior = pi / fromIntegral n
What the hell! I figured that I needed to use recursion to construct the list, but why do I need to use fromIntegral? I understand that my implementation had typing errors (ghci was telling me), but
that function hasn’t been introduced yet.
Does this mean that the exercises assume that I know the language already?
They don’t assume you know the language, but there are areas where some extra digging comes in handy. I’d say this is true of some of the early exercises, which are sometimes a little contrived in
the absence of higher-level functional knowledge.
If you are coming from a weakly-typed language with implicit conversions between numerical types, then Haskell can be a bit irksome to start with.
Also, I take the view that no one book need be your only reference, anyway.
I’m with Jeff on this one! I did search around – let me tell you – and solved the problem via list comprehension. I had an earlier solution that should have worked but I got so much type confusion
that I scrapped it and wrote another. Thereafter I could see the solution sitting there producing correct answers in the WinHugs console when I would directly enter the values for the number of sides
and the side length, but I could *not* get the code to work using passed in parameters, even when the parameters had the exact same values! Very frustrating. The error messages were unhelpful and the
solution non-intuitive. I think some mention of fromIntegral was truly in order, even if only in a Details box. At this point there are so many new concepts that could be causing the problem…
Anyway, thank you for writing up your solutions.
I also tried to compute the vertices with the radius of the whole shape, i.e.
radius = s / (2 * sin (pi / fromIntegral n))
As it seems, this computes an incorrect value. Why? The formula is correct!
False alarm, it computes correctly.
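For readers puzzled by the last exchange: the two radius expressions are algebraically identical, since sin((π − 2π/n)/2) = cos(π/n) and sin(2π/n) = 2 sin(π/n) cos(π/n). A quick numerical check (written in Python purely as an illustration):

import math

def radius_sine_rule(n, s):
    # radius via the sine rule, as in the first Haskell solution
    angleinc = 2 * math.pi / n
    return s * math.sin((math.pi - angleinc) / 2) / math.sin(angleinc)

def radius_half_angle(n, s):
    # radius via r = s / (2 sin(pi/n)), as in the last comment
    return s / (2 * math.sin(math.pi / n))

for n in (3, 5, 8, 12):
    print(n, radius_sine_rule(n, 1.0), radius_half_angle(n, 1.0))
# both columns agree for every n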
|
{"url":"http://www.elbeno.com/haskell_soe_blog/?p=7","timestamp":"2014-04-21T10:26:33Z","content_type":null,"content_length":"20623","record_id":"<urn:uuid:a920d336-41eb-4488-9d7d-72ee8019cda9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wire in Pipe for Drill Shaft Communication
Wire in Pipe for Drill Shaft Communication
Apps.DrillComm History
Hide minor edits - Show changes to markup
February 02, 2013, at 09:57 PM by -
Changed line 30 from:
t[2:10] = t[1:9] + 10 ! temperature increases by 5°C with each segment
t[2:10] = t[1:9] + 10 ! temperature increases by 10°C with each segment
June 16, 2012, at 11:05 PM by -
Changed line 15 from:
Below is a simple example of a mathematical model of inductor-connected wire-in-pipe technology provided by IntelliServ, a JV between Slumberger and NOV. The model describes the behavior of the
communication platform in transmitting signals from the Bottom Hole Assembly (BHA) to the top-side computers.
Below is a simple example of a mathematical model of inductor-connected wire-in-pipe technology provided by IntelliServ, a joint venture between Slumberger and NOV. The model describes the behavior
of the communication platform in transmitting signals from the Bottom Hole Assembly (BHA) to the top-side computers.
June 16, 2012, at 11:04 PM by -
Changed line 2 from:
(:keywords upstream, wire in pipe, mud pulsing, mud pulse, Slumberger, Intelliserv, NOV:)
(:keywords upstream, wire in pipe, mud pulsing, mud pulse, Slumberger, IntelliServ, NOV:)
Changed line 15 from:
Below is a simple example of a mathematical model of inductor-connected wire-in-pipe technology provided by Intelliserv, a JV between Slumberger and NOV. The model describes the behavior of the
communication platform in transmitting signals from the Bottom Hole Assembly (BHA) to the top-side computers.
Below is a simple example of a mathematical model of inductor-connected wire-in-pipe technology provided by IntelliServ, a JV between Slumberger and NOV. The model describes the behavior of the
communication platform in transmitting signals from the Bottom Hole Assembly (BHA) to the top-side computers.
June 16, 2012, at 11:03 PM by -
Changed lines 7-9 from:
Drowning in Data, Starving for Information
Measurement technology is advancing in the oil and gas industry. Innovations such as wireless transmitters, reduced cost of measurement technology, and increased regulations that require active
monitoring have the effect of increasing the number of available measurements.
Measurement technology is advancing in the oil and gas industry. Innovations such as wireless transmitters, reduced cost of measurement technology, and increased regulations that require active
monitoring have the effect of increasing the number of available measurements. Increased bandwidth does not necessarily lead to improved operations. Some describe this as Drowning in Data, Starving
for Information.
June 16, 2012, at 11:01 PM by -
Changed line 17 from:
Below is a simple example of a mathematical model of inductor-connected wire-in-pipe technology provided by Intelliserv, a JV between Slumberger and NOV.
Below is a simple example of a mathematical model of inductor-connected wire-in-pipe technology provided by Intelliserv, a JV between Slumberger and NOV. The model describes the behavior of the
communication platform in transmitting signals from the Bottom Hole Assembly (BHA) to the top-side computers.
June 16, 2012, at 11:00 PM by -
Changed lines 65-67 from:
Contact support@apmonitor.com to learn more about Advanced Process Monitoring for upstream drilling and production systems.
June 16, 2012, at 10:58 PM by -
Changed line 11 from:
June 16, 2012, at 10:58 PM by -
Added lines 1-4:
(:title Wire in Pipe for Drill Shaft Communication:) (:keywords upstream, wire in pipe, mud pulsing, mud pulse, Slumberger, Intelliserv, NOV:) (:description Detailed modeling of wire-in-pipe
communication for increased data communication rates for horizontal drilling.:)
Added lines 7-14:
Drowning in Data, Starving for Information
Measurement technology is advancing in the oil and gas industry. Innovations such as wireless transmitters, reduced cost of measurement technology, and increased regulations that require active
monitoring have the effect of increasing the number of available measurements.
This flood of information can be distilled into relevant and actionable information with Advanced Process Monitoring. The purpose of APM is to validate measurements and align imperfect mathematical
models to the actual process. The objective of this approach is to determine a best estimate of the current state of the process and any potential disturbances. The opportunity is in earlier
detection of disturbances, process equipment faults, and improved state estimates for optimization and control.
Added lines 17-18:
Below is a simple example of a mathematical model of inductor-connected wire-in-pipe technology provided by Intelliserv, a JV between Slumberger and NOV.
Changed lines 21-23 from:
The principal input to the model is the voltage to the motor.
June 16, 2012, at 10:42 PM by -
Added lines 2-3:
March 06, 2010, at 02:37 AM by -
Added lines 1-51:
Drill Shaft Communication
(:html:)<font size=2><pre>
APMonitor Modeling Language
The principal input to the model is the voltage to the motor.
Parameters include resistance (ohm), winding inductance (henrys)
Model pipe
Parameters
! communication parameters
v_in = 0 ! input voltage (Volt)
R = 0.1 ! resistance (Ohm)
L = 1e-5 ! inductance (Henry)
C = 1e-8 ! capacitance (Farad)
t[1] = 23 ! temperature in first pipe segment (°C)
t[2:10] = t[1:9] + 10 ! temperature increases by 5°C with each segment
n[1:10] = 100 ! number of windings on each pipe segment
! can modify for different number of windings on each end
End Parameters
Variables
i[1:10] = 0 ! current (Amps)
v[1:10] = 0 ! voltage (Volt)
End Variables
Intermediates
! loss across pipe connections (dB)
! linear correlation (20°C = 0.5 dB, 120°C = 1.5 dB)
dB[1:10] = (t[1:10] - 20) * (1.5-0.5)/(120-20) + 0.5
! inductor to inductor transfer efficiency
eff[1:10] = 10^(-db[1:10]/20), >=0, <=1
eff_avg[1:9] = (eff[1:9] + eff[2:10]) / 2
End Intermediates
Equations
! input voltage effect on current in 1st pipe
L*$i[1] = -R*i[1] + v_in
! current dynamics in each segment
L*$i[2:10] = -R*i[2:10] + (n[2:10]/n[1:9]) * v[1:9] * eff_avg[1:9]
! voltage dynamics from capacitance
C * $v[1:10] = i[1:10] - v[1:10]/R
End Equations
End Model </pre></font>(:htmlend:)
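As an illustration of the correlation used in the model above (a sketch, not part of the wiki page), the dB loss across each pipe connection and the resulting inductor-to-inductor transfer efficiency can be tabulated per segment:

# Sketch of the loss/efficiency correlation from the model above:
# linear dB loss (20 C -> 0.5 dB, 120 C -> 1.5 dB), efficiency = 10^(-dB/20)
temps = [23 + 10 * k for k in range(10)]          # one temperature per pipe segment

for t in temps:
    dB = (t - 20) * (1.5 - 0.5) / (120 - 20) + 0.5   # loss across the connection, dB
    eff = 10 ** (-dB / 20)                            # inductor-to-inductor efficiency
    print(f"T = {t:4d} C   loss = {dB:.2f} dB   efficiency = {eff:.3f}")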
|
{"url":"http://www.apmonitor.com/wiki/index.php/Apps/DrillComm?action=diff","timestamp":"2014-04-19T14:36:49Z","content_type":null,"content_length":"32515","record_id":"<urn:uuid:141ad813-12a2-4697-90a7-2c2ffd65d4c7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Divisibility by 12
In this topic, divisibility by 12, let us first see the definition: a number that is divisible by both 3 and 4 is divisible by 12.
Example 1:
Check whether 8520 is divisible by 12 or not.
To check whether any number is divisible by 12, we follow these steps:
1. We have to check whether the given number is divisible by 3.
2. Then we have to check whether the given number is divisible by 4.
If the given number satisfies both conditions, we can conclude that it is divisible by 12; otherwise it is not. Here the given number is 8520. According to the procedure, first let us check whether the number is divisible by 3. For that we calculate the sum of its digits: 8 + 5 + 2 + 0 = 15.
Since 15 is divisible by 3, so is 8520. To check whether 8520 is divisible by 4, we consider the number formed by its last two digits, which is 20.
Since 20 is divisible by 4, so is 8520. Therefore the given number 8520 is divisible by 12.
Example 2:
Check whether 9110 is divisible by 12 or not.
To check whether any number is divisible by 12, we follow the same steps:
1. We check whether the given number is divisible by 3.
2. Then we check whether the given number is divisible by 4.
If the given number satisfies both conditions, it is divisible by 12; otherwise it is not.
Here the given number is 9110. According to the procedure, first let us check whether the number is divisible by 3. For that we calculate the sum of its digits: 9 + 1 + 1 + 0 = 11.
Since 11 is not divisible by 3, the given number 9110 is not divisible by 12.
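The same procedure can be written as a short program; the sketch below applies the digit-sum test for 3 and the last-two-digits test for 4:

def divisible_by_12(n):
    # Return True if n passes both the divisibility-by-3 and divisibility-by-4 tests.
    digit_sum = sum(int(d) for d in str(abs(n)))   # test for 3: sum of the digits
    last_two = abs(n) % 100                        # test for 4: number formed by the last two digits
    return digit_sum % 3 == 0 and last_two % 4 == 0

print(divisible_by_12(8520))   # True
print(divisible_by_12(9110))   # False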
|
{"url":"http://www.onlinemath4all.com/divisibility_by_12.html","timestamp":"2014-04-18T03:17:48Z","content_type":null,"content_length":"10281","record_id":"<urn:uuid:5909eff4-99e5-4816-b439-42965309fff6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Minimal Complexity of Adapting Agents Increases with Fitness
What is the relationship between the complexity and the fitness of evolved organisms, whether natural or artificial? It has been asserted, primarily based on empirical data, that the complexity of
plants and animals increases as their fitness within a particular environment increases via evolution by natural selection. We simulate the evolution of the brains of simple organisms living in a
planar maze that they have to traverse as rapidly as possible. Their connectome evolves over 10,000s of generations. We evaluate their circuit complexity, using four information-theoretical measures,
including one that emphasizes the extent to which any network is an irreducible entity. We find that their minimal complexity increases with their fitness.
Author Summary
It has often been asserted that as organisms adapt to natural environments with many independent forces and actors acting over a variety of different time scales, they become more complex. We
investigate this question from the point of view of information theory as applied to the nervous systems of simple creatures evolving in a stereotyped environment. We performed a controlled in silico
evolution experiment to study the relationship between complexity, as measured using different information-theoretic measures, and fitness, by evolving animats with brains of twelve binary variables
over 60,000 generations. We compute the complexity of these evolved networks using three measures based on mutual information and one measure based on the extent to which their brain contain states
that are both differentiated and integrated. All measures show the same trend - the minimal complexity at any one fitness level increases as the organisms become more adapted to their environment,
that is, as they become fitter. Above this minimum, a large degree of degeneracy is in evidence.
Citation: Joshi NJ, Tononi G, Koch C (2013) The Minimal Complexity of Adapting Agents Increases with Fitness. PLoS Comput Biol 9(7): e1003111. doi:10.1371/journal.pcbi.1003111
Editor: Nihat Ay, Max Planck Institute for Mathematics in the Sciences, Germany
Received: September 14, 2012; Accepted: May 3, 2013; Published: July 11, 2013
Copyright: © 2013 Joshi et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This work was funded in part by the Paul G. Allen Family Foundation and by the G. Harold and Leila Y. Mathers Foundation. Neither played a role in the study design, data collection and
analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
What is the relationship between complexity and the fitness of evolved organisms, whether natural or artificial? It is often assumed [1]–[4] that while evolving organisms grow in fitness, they
develop functionally useful forms, and hence necessarily exhibit increasing complexity [5]. Some, however, argue against this notion [6], [7], pointing to examples of decreases in complexity, while
others assert that any apparent growth of complexity with fitness is an admixture of chance and necessity [8], [9]. One reason behind this absence of a consensus is the lack of formal or analytical
definitions that permit relating complexity and fitness within a single framework. While many context-dependent definitions of complexity exist [3], [10]–[13], fitness has been less frequently
formalized into an information-theoretic framework [14]. One such attempt [15] showed analytically that the fitness gain due to a predictive cue was tightly related to the amount of information about
the environment carried by the cue. Another study using an artificial life setup demonstrated that the observed evolutionary trends in complexity, measured as in [16], could be associated with a
systematic driving force such as natural selection, but could also result from an occasional random drift away from the equilibrium [17].
Recently, a computer model of simple animats evolving in an environment with fixed statistics, randomly generated mazes that they had to traverse as quickly as possible (Fig. 1), reported [18] that
the complexity of their brains was strongly correlated with their fitness. Using the integrated information of the main complex, Φ[MC] (defined in the latter part of this work), as a measure of complexity, the Spearman rank correlation coefficient between complexity and fitness was 0.94. However, no specific relation between these two quantities was established.
Figure 1. Distribution of the Spearman rank correlation coefficients between Φ[MC] and fitness.
The analysis in [18] was repeated several times to obtain Spearman rank correlation coefficients. The distribution for the 126 correlation coefficients shows a very broad spectrum with a mean at 0.69
and a variance of 0.24. The red arrow indicates a value of 0.94 obtained in [18] over 64 evolutionary histories, while the green arrow points to the value of 0.79 obtained for the current 126
histories in the same manner. Error bars are Poisson errors due to binning.
In all experiments - and also in our setup - the evolutionary change takes place via two mutually disjoint processes, namely a purely stochastic mutation of the genome followed by a selection
process. The stochastic nature of the genetic mutation allows us to equate ensemble-averages over many evolutionary histories to the time-averages over a single history, provided sufficient time has
passed for an equilibrium to be established locally. By exploiting this ergodicity, we could greatly scale up the statistic from our evolutionary runs. This enabled us to reproduce the simulations of
Edlund et al. [18] for 126 new evolutionary histories (see below) for a more extensive analysis. We obtained a very broad distribution of Spearman rank correlation coefficients between fitness and Φ[MC], with a mean of 0.69 and a variance of 0.24 (Fig. 1). Even though the distribution shows a tendency for high values, the broad variance hints towards the presence of an uncontrolled, noisy factor
that lessens the correlation.
Most information-theoretic definitions of functional or structural complexity of a finite system are bounded from above by the total entropy of the system. The law of requisite variety of Ashby [19]
connects the notion of complexity in a control system with the total information flowing between sensory input and the motor output, given by the corresponding sensory-motor mutual information (SMMI)
[20]. This relation provides a convenient tool for studying the connection between evolved complexity and fitness. Here, we probe the relationship between fitness and the SMMI in the context of
10,000s of generations of evolving agents, or animats, adapting to a simulated environment inside a computer [18]. In addition to SMMI, we compute three other measures of complexity: the predictive
information [12], the state-averaged version of integrated information (or Φ [21]) of a network of interacting parts using the minimal information partition (MIP), as well as the atomic version of Φ,
also known as stochastic interaction [22], [23]. We relate all four measures to the extent to which these artificial agents adapt to their environment.
In order to test the relationship between the SMMI and the fitness of an agent undergoing adaptation in a static environment, we performed an in silico evolution experiment, in which the agent needs
to solve a particular task without altering the state of the environment. Our experimental setup is similar to that pioneered by Edlund and others [18], where simple agents evolve a suitable Markov
decision process [24], [25] in order to survive in a locally observable environment (described in detail in the Methods section). Agents must navigate and pass through a planar maze (Fig. 2A), along
the shortest possible path connecting the entrance on the left with the exit on the right. At every maze door, the agent is instructed about the relative lateral position of the next door with
respect to the current position via a single bit (red arrows in Fig. 2A) available only while the agent is standing in the doorway. In effect, an agent must evolve a mechanism to store this
information in a one-bit memory and use it at a future time, optimizing the navigation path. For this purpose, the agent is provided with a set of internal binary units, not directly accessible to
its environment.
Figure 2. Experimental setup for evolving a population of agents under natural selection in an environment with fixed statistical properties.
A. A section of the planar maze that the animats have to cross from left to right as quickly as possible. The arrows in each doorway represent a door bit that is set to 1 whenever the next door is on
the right-hand-side of the current one and set to 0 otherwise. B. The agent, with 12 binary units that make up its brain: b0–b2 (retinal collision sensors), b3 (door-information sensor), b4–b5
(lateral collision sensors), b6–b9 (internal logic), and b10–b11 (movement actuators). In the first generation of each evolutionary history, the connectivity matrix is initiated to be random. The
networks for all subsequent generations are selected for their fitness. Taken from [18] with permission from the authors.
The evolutionary setup, based purely on stochastic mutation and driven by natural selection, allows us to monitor trends in the complexity of the brain of the agents. Our experiment consists of data
collected over 126 independent evolutionary trials or histories, where each evolutionary history was run through 60,000 generations. The evolution experiment was carried out using one randomly
generated test maze, which was renewed after every 100th generation. Frequent renewal of the test maze confirms that each generation of animats does not adapt to a particular maze, by developing an
optimal strategy for that particular maze, but enforces evolving a general rule to find the shortest path through the maze. For examples of this evolution, we refer the readers to the movies S1, S2,
S3 in the supplementary material.
After every 1000th generation, we estimate the SMMI and the complexity of the network evolved so far, in terms of the predictive information, the stochastic interaction, and the information integration. To systematically monitor the evolution of network connectivity, we use the data along the line-of-descent (LOD) of the fittest agent resulting after 60,000 generations. To reduce the error in fitness as well as
complexity estimation, we generated 20 random mazes each time over which performance of an agent is tested to calculate fitness. SMMI and other complexity measures are calculated using the
sensory-motor data collected while the agent was navigating through these mazes.
The Sensory-Motor Mutual Information
The mutual information between two variables X and Y is given by
I(X; Y) = Σ[x,y] p(x, y) log( p(x, y) / (p(x) p(y)) ) , (1)
and is a measure of the statistical dependence between the two variables [26]. Note that throughout this work, a boldface symbol such as X signifies a system (or subsystem) variable, while a particular state of the variable is denoted by a regular-face-type x, sometimes subscripted as per context, e.g., x[t]. In particular, the SMMI for an agent connectome is evaluated as
SMMI = I(S[t]; M[t+1]) , (2)
where S denotes the sensor state and M the motor state. This corresponds to the average information transmitted from the sensors at time t, affecting the motor state one time step later. Our definition of SMMI is a variant of the predictive information
used in studies [27], [28] involving a Markovian control system or autonomous robots where sensory input variables and motor or action variables can be distinguished [18]. Depending on whether or not
the state-update mechanism uses feedback or memory, these definitions may differ from each other.
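As an illustration of how such a quantity can be estimated in practice (a sketch with invented toy data and variable names, not code from the paper), the mutual information of eq. 2 can be computed from the empirical joint distribution of sensor states at time t and motor states at time t+1:

import numpy as np
from collections import Counter

def mutual_information(pairs):
    # Plug-in estimate of I(X; Y) in bits from a list of (x, y) samples.
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in p_xy.items():
        pxy = c / n
        mi += pxy * np.log2(pxy / ((p_x[x] / n) * (p_y[y] / n)))
    return mi

# Toy time series: a 4-bit sensor state at time t and a 2-bit motor state at t+1
# that simply copies two of the sensor bits, so the true SMMI is 2 bits.
rng = np.random.default_rng(0)
sensors = rng.integers(0, 16, size=10000)
motors = np.empty_like(sensors)
motors[1:] = sensors[:-1] % 4
motors[0] = 0

pairs = list(zip(sensors[:-1], motors[1:]))   # (S_t, M_{t+1}) samples
print(f"estimated SMMI: {mutual_information(pairs):.2f} bits")   # close to 2 bits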
Fig. 3 shows the distribution of SMMI calculated for 126 evolutionary histories after every 1000th generation. The data show that the lowest SMMI values increase as the fitness of the agents increases.
Figure 3. The sensory-motor mutual information, SMMI, as a function of fitness.
Along each of 126 evolutionary histories the line-of-descent (LOD) of the fittest agent after 60,000th generation is traced back. Absence of cross-over in the evolution confirms that only one agent
lies on the LOD in every generation. SMMI is calculated every generation for the agent along the LOD. The data is color-mapped according to the generation the agent belongs to. The magenta star corresponds to an SMMI of 1.08 bits for Einstein - an optimally designed, rather than evolved, network that still retains some stochasticity. Note that SMMI is bounded from above by 2 bits.
Predictive information
The predictive information of a time series, as defined in its original form [12], is a characteristic of its statistics, which quantifies the amount of information about a future state of a system
contained in the current state assumed by the system. It can be loosely interpreted as the ability of an external user - as opposed to the intrinsic ability of the system - to predict a future state
of a system, based on its current state, hence the name predictive information. Considering the system as a channel connecting two consecutive states, the predictive information has been proposed as
a possible measure of the functional complexity of the system. The predictive information of a system being observed during a time interval of length T is defined as
I[pred](T) = I(X[past]; X[future]) , (3a)
where X[past] and X[future] denote the entire past and entire future of the system with respect to an instant at time t.
We here consider the predictive information between states one discrete time step apart, t and t+1, that is, the single-step case of the definition above, or
I[pred] = I(X[t]; X[t+1]) . (3b)
Fig. 4 shows the distribution of the predictive information estimated for the evolved agent connectomes along the LODs of the best-fit agent at the 60,000th generation in each of the 126 evolutionary histories. Similar to SMMI, the predictive information too shows a boundary on the lower side, confirming our expectation of an increasing minimal bound on the complexity with increasing fitness. Indeed, a lower boundary was observed (not shown here) in all cases when we calculated (an approximate) predictive information between two states up to 8 time-steps apart.
Figure 4. The predictive information as a function of fitness.
The predictive information is calculated for the same networks and in the same manner as in Fig. 3. The magenta star is the value of 2.98 bits for Einstein, an optimally designed agent. The predictive information is bounded from above by 12 bits.
Information integration
We use the state-averaged version of integrated information, or Φ [21], of a network of interacting variables (or nodes) as a measure of complexity and relate it to the degree to which these agents adapt
to their environment. The state-averaged version of the integrated information measure is defined as the minimal irreducible part of the information generated synergistically by mutually exclusive
non-overlapping parts or components of a system above the information generated by the parts themselves.
One proceeds by defining a quantity called the effective information,
ei(X; P) = Σ[x0] p(x0) D[KL]( p(x1 | x0) || Π[k] p(m1^k | m0^k) ) , (4)
where X is the whole system and the M^k are its parts belonging to some arbitrary partition P. The subscript indices represent the temporal ordering of the states. The function p(x1 | x0) represents the probability of the system making a transition from a state x0 to a state x1; in other words, it indicates the probability that the variable takes the state x1 immediately following x0. D[KL](p || q) is the Kullback-Leibler divergence or relative entropy between two probability distributions p and q, given by
D[KL](p || q) = Σ[x] p(x) log( p(x) / q(x) ) . (5)
The partition of the system that minimizes the effective information is called the minimal information partition or MIP. The effective information, defined over the MIP, is thus an intrinsic property of the connectivity of the system and signifies the degree of integration or irreducibility of the information generated within the system. This quantity is called Φ and is given by
Φ(X) = ei(X; MIP) . (6)
Note that the effective information minimization has a trivial solution, whereby all nodes are included in the same part, yielding a partition of the entire system into a single part. This
uninteresting situation is avoided by dividing the effective information in eq. 4 by a normalization factor,
N[P] = (K − 1) min[k] H[max](M^k) , (7)
while searching for a MIP [21]. Φ, however, is the non-normalized quantity as defined in eq. 6. K here denotes the number of parts in the partition P, while H[max] is the maximum entropy.
The main complex and Φ[MC]
By definition, the Φ of a network reduces to zero if there are disconnected parts, since this topology allows for a method of partitioning the network into two disjoint parts across which no information flows. That is, the system can be decomposed into two separate sub-systems, rather than being a single system. For each agent, we then find the subset of the original system, called the main complex (MC), which maximizes Φ over the power-set of the set of all nodes in the system. This is done by iteratively removing one node at a time and recalculating Φ for the resulting sub-network. The corresponding maximal value of Φ is denoted as Φ[MC].
Fig. 5 plots Φ[MC] against fitness. As for the two other complexity measures (SMMI and the predictive information), Φ[MC] shows a broadly increasing trend with fitness. Yet this curve also displays a very sharp lower boundary. That is, the
minimal irreducible circuit complexity of our animats, for any one level of fitness, is an increasing but bounded function of the animat's fitness.
Figure 5. The information integration measure for the main complex, Φ[MC], against fitness.
Φ[MC] is calculated for the same networks and in the same manner as in Fig. 3. The magenta star is the value of 1.68 bits for Einstein. Φ[MC] is bounded from above by 12 bits.
Atomic partition and Φ[atomic]
Evaluating Φ for a system requires searching for the MIP of the system - the partition that minimizes the effective information for the given dynamical system. The MIP search, in turn, necessitates iterating over every possible partition of the system and calculating the effective information as given in eq. 4. This is computationally very expensive, as the number of possible partitions of a discrete system comprised of n components is given by the Bell number, B[n], which grows faster than exponentially. As a consequence, determining Φ is, in general, only possible for small systems, excluding any realistic biological network [29]. In such cases, a method for approximating either the MIP or Φ needs to be used.
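To give a sense of this growth (a small illustration, not taken from the paper), the Bell numbers can be generated with the Bell triangle; already for the 12 binary units of an animat brain there are over four million candidate partitions:

def bell_numbers(n_max):
    # Bell numbers B_1 .. B_n_max, computed with the Bell triangle.
    bells = []
    row = [1]
    for _ in range(n_max):
        bells.append(row[-1])          # B_n is the last entry of row n
        new_row = [row[-1]]            # the next row starts with the previous row's last entry
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
    return bells

print(bell_numbers(12))        # 1, 2, 5, 15, 52, ..., 4213597
print(bell_numbers(12)[-1])    # 4213597 partitions of a 12-node system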
We denote the effective information calculated over the atomic partition - the finest partition, in which each singleton or elementary unit of the system is treated as its own part - by Φ[atomic]. This completely eliminates the need for iterating over the set of partitions of a system. Thus,
Φ[atomic] = ei(X; atomic partition) . (8a)
For a system comprised of n binary units - as is the case with our agents (n = 12) - Φ[atomic] reduces to
Φ[atomic] = Σ[i] H(x1^i | x0^i) − H(x1 | x0) , (8b)
a measure of complexity, previously introduced as the stochastic interaction [22], [23], with the conditional entropy function defined as
H(Y | X) = − Σ[x,y] p(x, y) log p(y | x) . (9)
Φ[atomic] against fitness, calculated for the same networks as in Fig. 3, is shown in Fig. 6A. Note that Φ[atomic], i.e. the integrated information when considering a partition with each node as its own part, is always larger than that of the main complex, Φ[MC], as seen from Fig. 6B. This is expected, since Φ is defined as the minimum over all partitions, which includes the atomic partition over which Φ[atomic] is calculated. In other words, Φ[atomic] will necessarily be as large as or larger than Φ[MC].
Figure 6. An information integration measure for the atomic partition, Φ[atomic], also known as stochastic interaction, as a function of the fitness of the organism.
A. Φ[atomic] is calculated for the same networks and in the same manner as in Fig. 3. B. Φ[atomic] against Φ[MC] for the same networks. The line in red indicates Φ[atomic] = Φ[MC]. Our data show that the former is always larger than the latter, as expected from their definitions. The magenta star in both figures is the value of 5.06 bits for Einstein. Φ[atomic] is bounded from above by 12 bits.
Control run
To confirm that selection by fitness is actually necessary to selectively evolve high creatures, we carried out two control experiments in which selection by fitness was replaced by random selection
followed by stochastic mutation of the parent genome.
In a first control experiment, agents never experienced any selection-pressure, as each new generation was populated by randomly selecting agents from the previous one. Animats unsurprisingly failed
to evolve any significant fitness - maximal fitness was with .
In a second control experiment, organisms evolved as usual for 45,000 generations. This selected for agents able to rapidly traverse the maze. The resulting Φ[MC] values along the LODs over 64 independent
runs show a broad distribution, with a maximum of 1.57 bits. The maximal fitness obtained in these runs was 91.27% (Fig. 7A). We then turned off selection via fitness as in the previous experiment.
The population quickly degenerated, losing any previously acquired navigational skills within 1,000 generations due to genetic drift - the highest fitness was 0.03%, with an associated Φ[MC] of 0.12 bits (
Fig. 7B).
Figure 7. Distribution of evolved complexity with and without selection-pressure for 64 independent histories along their line-of-descent (LOD).
A. Distribution of Φ[MC] along the LOD using our standard selection based on fitness after 45,000 generations. Fitness is as high as 91.27%, with a maximal Φ[MC] value of 1.57 bits. B. Fitness-based selection is then replaced by random selection followed by the usual stochastic mutation of the genome. 1,000 generations later, the population along the LOD has degenerated such that both the fitness and Φ[MC] drop to vanishingly small values. The error bars are due to Poisson counting error.
Analyzing various information-theoretical measures that capture the complexity of the processing of the animats as they evolve over 60,000 generations demonstrates that in order to achieve any fixed level of fitness, a minimum level of complexity has to be exceeded. It also demonstrates that this minimal level of complexity increases as the fitness of these organisms increases.
Not only SMMI, but also the predictive information and the integrated information show similar features. Indeed our numerical experiments replicate those of [18]. There is a clear trend for the integrated information of the main complex, Φ[MC] (and also Φ[atomic] and the predictive information), to grow with fitness, computed relative to a perfectly adapted agent. By way of comparison, the
fitness of Einstein, a near-optimal hand-designed agent within the constraints of our stochastic Markov network, is plotted as a magenta asterisk in Figs. 3–5.
It should be noted that our terminology differs slightly from that in [18]; we preserve the original definition of the predictive information [12], while our SMMI was originally named predictive information in [18].
Even a cursory inspection of the plots of SMMI, predictive information, and Φ versus fitness reveals a lower boundary - most evident in the case of Φ[MC] - for any fitness level. The complete absence of any data points below these
boundaries, combined with the high density of points just above them, implies that developing some minimal level of complexity is necessary to attain a particular level of fitness. The existence of
such a boundary had been previously surmised in empirical studies [1], [2], where complexity was measured crudely in terms of organismal size, number of cell-types, and fractal dimensions in shells.
Conversely, no upper value for complexity is apparent in any of the plots (apart from the entropic bounds of 2 bits for SMMI and 12 bits for the predictive information and Φ). That is, once minimal circuit complexity has been
achieved, organisms can develop additional complexity without altering their fitness. This is an instance of degeneracy, which is ubiquitous in biology, and which might even drive further increases
in complexity [30].
Degeneracy, the ability of elements that are structurally different to perform the same function, is a prominent property of many biological systems ranging from genes to neural networks to evolution
itself. Because structurally different elements may produce different outputs in different contexts, degeneracy should be distinguished from redundancy, which occurs when the same function is
performed by identical elements. Degeneracy matters not with respect to a particular function, but more generally with respect to fitness. That is, there are many different ways (connectomes) to
achieve the same level of fitness, which is exactly what we observe. This provides enough diversity for future selection to occur when the environment changes in unpredictable ways. Curiously, the
hand-designed agent, Einstein, has little degeneracy, lying just above the minimal complexity level appropriate for its fitness level. In our simulations, any additional processing complexity did not
entail any cost to the organisms. This is not realistic as in the real world, any additional processing will come with an associated metabolic or other costs [31]–[33]. We have not considered such
additional costs here.
In two control experiments, we showed that selection by fitness is necessary to attain high fitness and high circuit complexity. Yet complexity and fitness were neither explicitly connected by
construction nor measured in terms of each other. Hence, any network complexity evolved in this manner must be a consequence of the underlying relationship between fitness and complexity. While this
complexity is completely determined by the transition table associated with the brain's nodes, its fitness can only be evaluated by monitoring the performance of the agent in a particular
environment. This and the fact that all complexity measures studied in this work show similar behaviors support the notion of a general trend between fitness and minimal required complexity.
Thus, complexity can be understood as arising out of chance and necessity [8]. The additional complexity is not directly relevant for survival, though it may become so at a later stage in evolution.
On the other hand, a certain amount of redundancy [34], even though not useful for enhancing fitness at any stage, may be necessary for evolutionary stability by providing repair and back-up
mechanisms. The previously reported correlation between integrated information and fitness [18] should be understood in this light. High correlation values correspond to data points close to the
lower boundary. This strong correlation deteriorates as more and more data lies away from the boundary.
Experimental setup
Our maze is a two-dimensional labyrinth that needs to be traversed from left to right (Fig. 2A) and that is obstructed with numerous orthogonal walls with only one opening or door bored at random. At
each point in time, an agent can remain stationary, move forward or move laterally, searching for the open door in each wall in order to pass through. Inside each doorway, a single bit is set that
contains information about the relative lateral position of the next door (for e.g. arrows in Fig. 2A; a value of 1 implies that the next door is to the right, i.e., downward, from the current door,
while a value of 0 means the next door could be anywhere but to the right, i.e., either upward or straight ahead). This door bit can only be read by the agent inside the doorway. Thus, the organism
must evolve circuitry to store this information in a simple one-bit memory that enables it to move efficiently through the maze.
The maze has circular-periodic boundary conditions. Thus, if the agent passes the exit door before its life ends after 300 time steps, it reappears on the left side of the same maze.
Fig. 2B shows the anatomy of the agent's brain with a total of twelve binary units. It comprises a three bit retina, two wall-collision sensors, two actuators, a brain with four internal binary
units, and a door-bit sensor. The agent can sense a wall in front with its retina - one bit in front of it and one each on left and right front sides respectively - and a wall on the lateral sides
via two collision sensors - one on each side. The two actuator bits decide the direction of motion of the agent: step forward, step laterally right- or left-ward, or stay put. The four binary units,
accessible only internally, can be used to develop logic, including memory. The door bit can only be set inside a doorway.
While the wall sensors receive information about the current local environment faced by the agent at each time-step, the information received from the door bit only has relevance for its future
behavior. During evolution of the brain of these animats, they have to assimilate the importance of this one bit, store it internally and use it to seek passage through the next wall as quickly as possible.
The connectome of the agent, encoded in a set of stochastic transition tables or hidden Markov modeling units [18], [35], is completely determined by its genome. That is, there is no learning at the
individual level.
Each evolutionary history was initiated with a population of 300 randomly generated genomes and subsequently evolved through 60,000 generations. At the end of each generation, the agents ranked
according to their fitness populate the next generation of 300 agents. The genome of the fittest agent, or the elite, from every generation is copied exactly to the next generation without mutation,
while those of other agents selected with probabilities proportional to their fitness are operated over by mutation, deletion and insertion. The probabilities that a site on the genome is affected by
these evolutionary operators are respectively 2.5%, 5% and 2.5%.
Evolutionary operators are applied purely stochastically and the selection acts only after the random mutations have taken place. This allows us to relate the fitness-complexity data sampled along
each evolutionary line after every 1000th generation - similar to time averaging - to that sampled only after 50,000th generation over 64 evolutionary histories - or ensemble averaged - as in [36],
provided that each evolutionary trial has been run over large enough times confirming exploration of a significant part, if not the entire, of the genomic parameter-space. Fig. 1 shows the
distribution of 126 such Spearman rank correlation coefficients calculated per evolutionary trial, with respect to that reported with a red arrow for the 64 evolutionary histories in [18]. The green
arrow indicates the rank coefficient value obtained in the same manner for the 126 evolutionary trials from this study.
The fitness of the agent is a decreasing function of how much it deviates from the shortest possible path between the entrance and exit of the maze, calculated using the Dijkstra search algorithm
[36]. To assign fitness to each agent as it stumbles and navigates through a maze during its lifetime (of 300 time steps), its fitness is calculated as follows: first, the shortest distance to the exit, d(p), is calculated for every location p in the maze that can be occupied, using the Dijkstra algorithm. Each position in the maze receives a fitness score of (10)
where d[max] is the maximum of the shortest-path distances over all positions in the maze. The fitness of an agent over one trial run of 300 time-steps through the maze is given by (11)
where p[t] is the position occupied by the agent at time-step t, and we use the convention in eq. 12, which accounts for the offset due to a non-zero fitness score at the start of the trial, when the agent begins navigating from an arbitrary position. A counter tracks how many times the agent has reached the exit in its life and reappeared at the left extreme of the maze. To reduce the sampling error, the final fitness of the agent is then calculated as the geometric mean of its fitness relative to the optimal score from 10 such repetitions. (12)
To avoid adaptation bias to any particular maze design, the maze was renewed after every 100 generations.
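The position-scoring idea can be illustrated with a short sketch; the grid encoding and the exact normalization below are illustrative assumptions, not the paper's eqs. 10-12. Shortest distances to the exit are computed with a breadth-first search, and each reachable cell receives a score that decreases with its distance:

from collections import deque

def distance_to_exit(free, exit_cell):
    # BFS shortest-path distance from every free cell to the exit.
    # `free` is a set of (row, col) cells the agent may occupy.
    dist = {exit_cell: 0}
    queue = deque([exit_cell])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in free and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def position_score(dist):
    # Score each cell so that cells closer to the exit score higher (illustrative normalization).
    d_max = max(dist.values())
    return {cell: (d_max - d) / d_max for cell, d in dist.items()}

# tiny example: a 3x4 open corridor with the exit at the right end
free = {(r, c) for r in range(3) for c in range(4)}
scores = position_score(distance_to_exit(free, exit_cell=(1, 3)))
print(scores[(1, 0)], scores[(1, 3)])   # far from the exit -> low score, at the exit -> 1.0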
Supporting Information
Typical behavior of an agent from early generations. The movie shows behavior of an agent from one of the evolutionary trials at generation in a randomly generated maze. This agent has a fitness of
about . The agent has developed a retina to follow through the doors and always prefers to turn on its right. The top panel is an overview of the agent trajectory throughout the trial, while the
lower panel on the left shows a zoomed in area around the agents current position at any time step. The panel on the lower right part displays activity in the Markov units connecting various binary
nodes of agent's anatomy. An active node or transition is shown with green color.
An evolved agent traversing through a maze. The movie shows behavior of an agent from the same evolutionary trial as in Movie S1, but after generation. The agent has evolved to a fitness of and shows
a near-ideal behavior. Due to the stochasticity in the Markov transitions, the agent can make a wrong decision sometimes (for e.g. at around in this movie, it mistakenly turns to left), contributing
to its fitness value of less than . The top panel is an overview of the agent trajectory throughout the trial, while the lower panel on the left shows a zoomed in area around the agents current
position at any time step. The panel on the lower right part displays activity in the Markov units connecting various binary nodes of agent's anatomy. A green colored node, state or transition
implies current activity.
An optimally designed agent - Einstein, traversing through a maze. This movie shows the maze-solving capabilities of an agent with optimally engineered connectome. It exhibited a fitness of and SMMI,
and values of 1.08, 2.98 and 1.68 bits, respectively (shown with a magenta asterisk in Figs. 3–5). As in other movies, the top panel is an overview of the agent trajectory throughout the trial, while
the lower panel on the left shows a zoomed-in area around the agent's current position at any time step. The panel on the lower right displays activity in the Markov units connecting various
binary nodes of the agent's anatomy. An active node, state or transition is depicted in green.
We would like to thank Chris Adami, Jeffrey Edlund and Nicolas Chaumont for developing the evolutionary framework, Virgil Griffith for stimulating discussions, and Samruddhi Ghaisas-Joshi for help
with the English editing.
Author Contributions
Conceived and designed the experiments: NJJ GT CK. Performed the experiments: NJJ. Analyzed the data: NJJ CK. Contributed reagents/materials/analysis tools: NJJ. Wrote the paper: NJJ CK.
|
{"url":"http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003111","timestamp":"2014-04-20T21:48:12Z","content_type":null,"content_length":"210785","record_id":"<urn:uuid:4ed2d22b-c6a2-46cf-8eda-e7c21ec840ff>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Linear Interpolation Question
Andrea Gavana andrea.gavana@gmail....
Mon Apr 28 06:41:16 CDT 2008
Hi All,
I have 2 matrices coming from 2 different simulations: the first
column of the matrices is a date (time) at which all the other results
in the matrix have been reported (simulation step). In these 2
matrices, very often the simulation steps do not coincide, so I just
want to interpolate the results in the second matrix using the dates
in the first matrix. The problem is, I have close to 13,000 columns in
each matrix, and repeating interp1d over all the columns is quite
expensive. An example of what I am doing is as follows:
# Loop over all the columns
for indx in indices:
    # Set up a linear interpolation with:
    # x = dates in the second simulation
    # y = single column in the second matrix simulation
    function = interp1d(secondaryMatrixDates,
                        secondaryMatrixResults[:, indx], kind='linear')
    # Interpolate the second matrix results using the first simulation dates
    interpolationResults = function(mainMatrixDates)
    # I need the difference between the first simulation and the second
    newMatrix[:, indx] = mainMatrixResults[:, indx] - interpolationResults
This is somehow a costly step, as it's taking up a lot of CPU
(increasing at every iteration) and quite a long time (every column
has about 350 data points). Is there anything I can do to speed up this loop?
Or may someone suggest a better approach?
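A possible vectorized alternative, a minimal sketch assuming secondaryMatrixResults and mainMatrixResults are 2-D NumPy arrays with one row per report date, is to hand interp1d the whole 2-D array at once through its axis argument:

from scipy.interpolate import interp1d

# one interpolator for the whole 2-D result matrix (rows = time steps)
function = interp1d(secondaryMatrixDates, secondaryMatrixResults,
                    kind='linear', axis=0)

# evaluate all columns at the first simulation's dates in one call
interpolationResults = function(mainMatrixDates)

# difference between the two simulations, column by column
newMatrix = mainMatrixResults - interpolationResults

That removes the per-column Python loop entirely; the subtraction then happens on whole arrays.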
Thank you very much for your suggestions.
"Imagination Is The Only Weapon In The War Against Reality."
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/033267.html","timestamp":"2014-04-19T08:04:41Z","content_type":null,"content_length":"4120","record_id":"<urn:uuid:0d5ea553-88c9-4f57-93ff-40f08486d938>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relativity and The Stopped Clock Paradox
Einstein's "coordinate clocks" are stationary with respect to him and all have the same time on them and all run at the same rate
It would be a bit pointless to have a half dozen clocks spread out that all show the same time and run at the same rate.
We have t as the station time,
and we have t' as Einstein's time.
We have [t,x]s and,
we have [t',x']e
That number is calculated using the Lorentz Transform for the event in the station frame [14,17] for the station-master when the stop-button is pressed. Here's the calculation for the time
t' = γ(t-xβ) = 1.1547(14-17*0.5) = 1.1547(14-8.5) = 1.1547(5.5) = 6.351
So with that, you are saying that when t=14 (the stop-time for the station POV) and x=17 (the distance to the train from the button for the station POV),
[14,17]s = [6.351,x']t for the moment and distance of stop-time at distance x'.
x' = γ(x-tβ) = 1.1547(17-14*0.5) = 1.1547(17-7) = 1.1547(10) = 11.547.
[14,17]s = [6.351,11.547]e
[6.351,11.547] Station-master when the stop-button is pressed.
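For reference, the transformation being applied in these steps, in units with c = 1 and with β = 0.5:

\gamma = \frac{1}{\sqrt{1-\beta^{2}}} = \frac{1}{\sqrt{1-0.25}} \approx 1.1547,
\qquad t' = \gamma\,(t - \beta x), \qquad x' = \gamma\,(x - \beta t)

so the event [t, x] = [14, 17] in the station frame maps to [t', x'] ≈ [6.351, 11.547] in Einstein's frame.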
Now how did the train get up to 11.547 from the button from anyone's POV at press time?
The original stipulation was that when the button was pressed, the train was only 10 from the button from the station POV. Now we are talking about a 17 and an 11.547. I accepted that you took it
that the stationmaster saw "10" and thus pressed the button, which wasn't the actual stipulation, but we can work with that. So now we have the stop-clock at 14 when the button is pressed.
Now the stipulation was also that the button was pressed when the train was 6 from the first clock, or 10 from the button. If again, you assume that the stationmaster must perceive the train to be at
6 from the first clock, then it would take 10 more for him to perceive that (complicating it further) and leading to the assessment that the button was pressed when the train was 5 closer to the
button, making it 5 away.
The button is pressed at t=10 and Einstein is only x=10 away (from station POV) or x=5 if you assume the perception issue with the stationmaster. Either way, I am not seeing how you are now coming up
with him being 11.547 away from his POV. He cannot be perceiving himself any further away than the stationmaster perceives him.
You seem to be saying that the station sees Einstein 10 away from the button when Einstein sees the button 11.547 away from himself. Those distances can't be different.
I haven't seen your method and I am not aware of a conundrum so I can't point out your error.
If we can clear up your notation enough to see what is what, or if it gets too confusing, I'll just show you my method and we can go from there.
|
{"url":"http://www.physicsforums.com/showthread.php?p=3784638","timestamp":"2014-04-20T05:49:46Z","content_type":null,"content_length":"134627","record_id":"<urn:uuid:3b98e83c-cb0a-416b-ad9f-bc1834008b16>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Book of Royal Gemstones
This work, by Abu al-‛Abbās Ahmad b. Yūsuf al-Qaysī al-Tīfāshī, a 13th-century writer and mineralogist who was born in Tunisia and worked in Egypt, describes precious gems found in the treasuries
of kings and rulers. The author lists 25 gemstones and dedicates a chapter to each. They include the ruby (yāqūt), emerald (zumurrud), topaz (zabarjad), diamond (almās), turquoise (fīrūzaj),
magnetite (maghnātīs), agate (‛aqīq), lapis lazuli (lāzward), coral (marjān), and quartz (talq). In each chapter, the author discusses the causes of the gemstone’s formation, provenance, criteria
for appraisal of ...
Contributed by
The Book of Remedies from Deficiencies in Setting Up Marble Sundials
This work is a treatise for timekeepers (singular muwaqqit), and discusses the telling of time from such astronomical observations as the sun’s angle of inclination (mayl), altitude (irtifā‛), as
well as the direction (samt) and length of cast shadows (zill). In 14 chapters, the author goes through methods for the computation of these factors, determination of the direction of prayer (qibla
), and time of the day. He observes that using instruments (ālāt), such as markings on the ruler (mistara) and the compass (bargār, from the Persian pargār), and geometric ...
Contributed by
A Guide for the Perplexed on the Drawing of the Circle of Projection
The author of this work, Ibn al-Majdī (1366-1447 [767-850 A.H.]), was a renowned mathematician, geometrician, and astronomer. He was linked with the influential Marāgha School through his teacher,
Jamāl al-Dīn al-Māridīnī, who in turn had studied with Ibn al-Shātir al-Dimashqī’. As a descendant of a powerful local family with Mamlūk ties, Ibn al-Majdī served as the official astronomer and
timekeeper at Al-Azhar. The work is divided into three chapters and a conclusion. Chapter 1 covers the procedure for projecting the circle of projection (fadl al-dā’ir) onto planes that ...
Contributed by
Comprehensive Reference on Algebra and Equations
This manuscript is a didactic work on arithmetic and algebra, composed in versified form, as a qasīda of 59 verses. It was composed by Ibn al-Hā’im al-Fardī in 1402 (804 A.H.). The beginning of the
work also names ‛Alī b. ‛Abd al-Samad al-Muqrī al-Mālikī (died Dhu al-Ḥijja 1381 [782 A.H.]), a scholar and teacher who had come to Egypt and taught at the ‛Amr b. ‛As madrasa for several years.
The main part of the qasīda begins by introducing and defining key terms in arithmetic and algebra ...
Contributed by
Arithmetic Conventions for Conversion Between Roman [i.e. Ottoman] and Egyptian Measurement
This treatise, written on ten folio pages for an Ottoman official and patron of books known as Ismā‘īl Afandī, is on the inter-conversion of units of measurement. It is a useful guide for merchants
and others engaged in the measurement of quantities. It provides instructions for converting arṭāl (plural of raṭl) into uqaq (plural of auqiya), and back; darāhim (plural of dirham) into mathāqīl
(plural of mithqāl) and back; and converting the number of Ottoman (referred to as Roman, rūmī) loading bags into the number of Egyptian loading bags ...
Contributed by
Sakhāqī’s Book [of Arithmetic]
This work is a tutorial text on elementary arithmetic, in 20 folios. It is divided into an introduction, 11 chapters, and a conclusion. In the beginning, the sign for zero is introduced, along with
the nine Indian numerals, written in two alternative forms. This is followed by a presentation of the place system. The first four chapters cover, respectively, addition, subtraction,
multiplication, and division. Chapter five introduces operations on non-whole numbers. The remaining six chapters discuss fractions and operations on them.
Contributed by
Treatise for Observers on Constructing the Circle of Projection
This work is a treatise on the important subject of timekeeping. It is a work of technical astronomy, in 19 folios, that begins by emphasizing the religious significance of knowledge of time. It is
divided into an introduction, two chapters, and a conclusion. Comprehensive procedures for the construction of tables and their use are provided. The work was completed in 1473 (878 A.H.).
Contributed by
Easing the Difficulty of Arithmetic and Planar Geometry
This work is a comprehensive tutorial guide on arithmetic and plane geometry, in 197 folio pages. It also discusses monetary conversion. The work is composed in verse form, and is meant as a
commentary on existing textbooks. The author gives the following personal account of the writing of this guide: In Rajab 827 A.H. (May 1424) he traveled from Damascus to Quds al-Sharīf (in
Palestine), where he met two scholars named Ismā‘īl ibn Sharaf and Zayn al-Dīn Māhir. There he took lessons on arithmetic, using an introductory book ...
Contributed by
Compendium on Using the Device Known as the Almucantar Quarter
This work, by a timekeeper at the Al-Azhar Mosque in Cairo, is an important and comprehensive textbook on timekeeping. It introduces the useful device of dividing a quarter of a circle of
projection into sections known as almucantars (muqanṭarāt). The work, comprising 100 folio pages, contains 30 chapters and a conclusion. The work was composed in 1440-1 (844 A.H.) and was copied in
1757 (1170 A.H.).
Contributed by
The Light of the Eyes and the Enlightened Landscape of Vision
This work is a noteworthy treatise on optics that covers such basic topics as direct vision, reflection and refraction, and the length of shadows. It discusses convex and concave mirrors and the
physiology of vision, and has a section on optical illusions. It is a cogent work on geometrical optics. It is particularly significant because it was written under the Ottoman sulṭān, Murāt ibn
Selīm (reigned 1574-95 [982-1003 A.H.]). The name of the author is illegible on the front page, and seems to have been deliberately wiped off for ...
Contributed by
The Travelers Guide on Drawing the Circle of Projection
This is a work on timekeeping and the determination of the direction of prayer (qibla), particularly intended for people who travel. The author, Abu al-‛Abbās Shihāb al-Dīn Ahmad b. Zayn al-Dīn
Rajab b. Tubayghā al-Atābakī, known as al-Majdī or Ibn al-Majdī (1366-1447 [767-850 A.H.]), was descended from a powerful family with ties to Mamlūk rulers and was a renowned and prominent
mathematician, geometrician, and astronomer. He served as the timekeeper of the Al-Azhar Mosque. This work is an abridgment of his other major book, Irshād al-ḥā’ir ilā ...
Contributed by
Maximum Benefit from the Knowledge of Circles of Projection on the 30 Degree Northern Latitude
This work, a treatise on practical astronomy, deals with such issues as timekeeping and determining the proper direction of prayer. The work begins with a brief introduction, but the bulk of the
manuscript contains tables used to determine time. The introductory section contains illustrative examples on how to use the tables.
Contributed by
Deliverance from Error on Knowledge of Times of Day and the Direction of Prayer
This work on elementary knowledge of practical astronomy begins by emphasizing the religious significance of knowing how to keep the time and how to determine the proper direction of prayer (qibla
). It describes the conventional correspondence between ordinal numbers and the letters of the Arabic alphabet. It then enumerates, and goes through, the names of the months in the lunar Arabic
calendar and in the solar Coptic calendar. It highlights certain important dates, such as the beginning of the New Year, and introduces the 12 zodiacal signs. The front page ...
Contributed by
The Book of Instruction on Deviant Planes and Simple Planes
This manuscript is a work on practical astronomy and the drawing of the circle of projection and related concepts from spherical trigonometry. It is rich with geometric diagrams, tables of
empirical observations, and computations based upon these observations. An interesting feature of the manuscript is the appearance on the margins of the cover, and on several pages in the
manuscript, of edifying verses, proverbs, and witty remarks. One reads, for example, “It is strange to find in the world a jaundiced physician, a dim-eyed ophthalmologist, and a blind astronomer.”
Most ...
Contributed by
A Friendly Gift on the Science of Arithmetic
This treatise deals specifically with basic arithmetic, as needed for computing the division of inheritance according to Islamic law. It contains 48 folios and is divided into an introduction,
three chapters, and a conclusion. The introduction discusses the idea of numbers as an introduction to the science of arithmetic. Chapter I discusses the multiplication of integers. Chapter II is
on the division of integers and the computation of common factors. Chapter III deals extensively with fractions and arithmetic operations on them. The author, an Egyptian jurist and mathematician,
was the ...
Contributed by
Desired Transformations, or, On Negations and Affirmations in Rectifying Wisdom
This treatise contains information on a medley of subjects, including alchemy, numerology, mineralogy, and magic. It begins with quotations from Kashf al-asrār wa hatk al-astār (Unveiling of
secrets and tearing of covers), a well-known eighth-century (second-century A.H.) work attributed to Jābir (ibn Ḥayyān). A whole other work seems to be written in the margins. The text mentions
such authorities as Galen (Jālīnūs), Zīsmūs, Hermes, Democrates, Shaykh Abu al-‘Abbās Aḥmad al-Baunī, and Ghazālī. Parts of the manuscript are smudged and damaged.
Contributed by
Jokes Relating to the Commentary on Al-Mataalia and Its Honorable Marginal Notes
The present work is a further commentary on the ḥāshiyah (gloss) by al-Sayyid al-Sharīf al-Jurjānī (died 816 AH [1413 AD]) on the Lawāmi’ al-asrār by Qutb al-Dīn al-Taḥtānī al-Rāzī (died 766 AH
[1364 AD]). The latter is, in turn, a commentary on a book of logic entitled Maṭāli’ al-anwār by Sirāj al-Dīn Maḥmūd al-Urmawī (died 682 AH [1283 AD]). The scribe of this work, who may also have
been the author, was Muhammad ibn Pir Ahmad al-Shahir bi-Ibn Arghun al-Shirazi. Written for the library of the Ottoman Sultan Selim I ...
Contributed by
|
{"url":"http://www.wdl.org/en/search/?additional_subjects=Naskh%20script","timestamp":"2014-04-16T07:19:23Z","content_type":null,"content_length":"52384","record_id":"<urn:uuid:5fa8c547-3091-4bed-bc9e-20731395e03d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to calculate LOD & LOQ in HPLC Validation Method
Question Submitted By :: Bhavesh
Answer # 1 (Abdulkalim, Dr. Reddy's Laboratories): Based on the standard deviation of the response and the slope obtained, it will be calculated as:
Limit of Detection (LOD):
DL= 3(SD/Slope)
Limit of quantification(LOQ):
Answer # 2 (M): There are three different methods indicated in the ICH guidelines for determination of LOD and LOQ.
1 Visual evaluation method - for non-instrumental and sometimes for instrumental methods also.
2 Based on the residual standard deviation of the response and the slope obtained, it will be calculated as:
Limit of Detection (LOD):
DL= 3(SD/Slope)
Limit of quantification(LOQ):
3 Signal to noise ratio method
But the residual standard deviation method is the more reliable method for determination of LOD and LOQ values.
Answer # 3 (Balaji): There are three different methods indicated in the ICH guidelines for determination of LOD and LOQ.
1 Visual evaluation method - for non-instrumental and sometimes for instrumental methods also.
2 Slope method: based on the residual standard deviation of the response and the slope obtained, it will be calculated as:
Limit of Detection (LOD):
DL= 3(RESIDUAL SD/Slope)
Limit of quantification(LOQ):
DL=10(RESIDUAL SD/Slope)
3 Signal to noise ratio method
Limit of Detection (LOD):
S/N NLT 3
Limit of quantification(LOQ):
S/N NLT 10
Slope and S/N methods are the more reliable methods for determination of LOD and LOQ values.
Answer # 4 (Sunil Madhukar Rao Kulkarni): We can calculate LOD and LOQ values using the linearity graph; instead of the sigma value, STEYX can be considered for a more accurate value of LOD and LOQ,
where STEYX : Standard error of the predicted Y-value for each X in a regression.
The final calculation is given as below:
LOD = STEYX x 3/ Slope
LOQ = STEYX x 10 /Slope
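As an illustration of the slope and residual-standard-deviation approach above, here is a minimal sketch (hypothetical calibration data; the 3.3 and 10 factors follow the ICH convention quoted in these answers):

import numpy as np

# hypothetical calibration data: concentration vs. detector response
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
response = np.array([51.0, 102.0, 198.0, 405.0, 802.0])

# least-squares fit: response = slope * conc + intercept
slope, intercept = np.polyfit(conc, response, 1)
predicted = slope * conc + intercept

# residual standard deviation of the response about the regression line
resid_sd = np.sqrt(np.sum((response - predicted) ** 2) / (len(conc) - 2))

lod = 3.3 * resid_sd / slope   # limit of detection
loq = 10.0 * resid_sd / slope  # limit of quantification
print(lod, loq)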
Answer # 5 (Ashutosh Kumar Srivastava): Signal to noise ratio X 3 = LOD
LOD X 3.33 = LOQ
Answer # 6 (Rakesh P. Kotkar): Based on the standard deviation of the response and the slope obtained, it will be calculated as:
Limit of Detection (LOD):
DL= 3.3(ASD/Slope)
Limit of quantification(LOQ):
Answer # 7 (Bhupalreddy):
Limit of Detection (LOD):
• Based on visual limitation
– Visual evaluation may be used for non-instrumental
methods but may also be used with instrumental methods.
– The detection limit is determined by the analysis
of samples with known concentrations of analyte and by
establishing the minimum level at which the analyte can be
reliably detected.
• Based on signal-to-noise
– This approach can only be applied to analytical
procedures which exhibit baseline noise.
– Determination of the signal-to-noise ratios is
performed by comparing measured signals from samples with
known low concentrations of analyte with those of blank
samples and establishing the minimum concentration at which
the analyte can be reliably detected. A signal-to-noise
ratio between 3 or 2:1 is generally considered acceptable
for estimating the detection limit.
• Based on the standard deviation of the response
and the slope
– The detection limit = 3.3 s / S
– Where s is the standard deviation of the response
and S is slope of the calibration curve.
– The estimate of S may be carried out in a variety
of ways, for example:
• Based on the standard deviation of the blank
• Based on the calibration curve
Limit of Quantification (LOQ):
• Based on visual limitation
– Visual evaluation may be used for non-instrumental
methods but may also be used with instrumental methods.
– The quantification limit is determined by the
analysis of samples with known concentrations of analyte
and by establishing the minimum level at which the analyte
can be quantified with acceptable accuracy and precision.
• Based on signal-to-noise
– This approach can only be applied to analytical
procedures which exhibit baseline noise.
– Determination of the signal-to-noise ratios is
performed by comparing measured signals from samples with
known low concentrations of analyte with those of blank
samples and establishing the minimum concentration at which
the analyte can be reliably quantified. A typical signal-to-noise ratio is 10:1.
• Based on the standard deviation of the response
and the slope
– The quantification limit = 10s / S
– Where s is the standard deviation of the response
and S is slope of the calibration curve.
– The estimate of S may be carried out in a variety
of ways, for example:
• Based on the standard deviation of the blank
• Based on the calibration curve
Answer # 8 (Dr. Praveen Choubey): By HPLC, LOD and LOQ are calculated by the S/N ratio, which is the best method: LOD S/N NLT 3 and LOQ S/N NLT 10. They can also be calculated by visual evaluation.
Answer # 9 (Guido): Look at DIN ISO 32645 for how to "really" calculate LOQ and LOD. You need to perform a linearity test first! Everything else is not very serious.
Answer # 10 (Nikhil Gandhi): Based on the ICH guideline there are many methods given to identify LOD and LOQ, but the most useful method, slope and residual standard deviation, is used:
LOD : 3.3 X sigma/ Slope
LOQ : 10 X sigma/ Slope
Sigma : Residual Standard Deviation
|
{"url":"http://www.allinterview.com/showanswers/106644.html","timestamp":"2014-04-17T04:32:11Z","content_type":null,"content_length":"49672","record_id":"<urn:uuid:b17a5694-58cc-4830-a624-be343334210d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Polynomial Graphs: End Behavior
Polynomial Graphs: End Behavior (page 1 of 5)
Sections: End behavior, Zeroes and their multiplicities, "Flexing", Turnings & "bumps", Graphing
When you're graphing (or looking at a graph of) polynomials, it can help to already have an idea of what basic polynomial shapes look like. One of the aspects of this is "end behavior", and it's
pretty easy. Look at these graphs:
[Table of example graphs: even-degree polynomials and odd-degree polynomials, each shown with a positive leading coefficient and with a negative leading coefficient. Copyright © Elizabeth Stapel 2005-2011 All Rights Reserved]
As you can see, even-degree polynomials are either "up" on both ends (entering and then leaving the graphing "box" through the "top") or "down" on both ends (entering and then leaving through the
"bottom"), depending on whether the polynomial has, respectively, a positive or negative leading coefficient. On the other hand, odd-degree polynomials have ends that head off in opposite directions.
If they start "down" (entering the graphing "box" through the "bottom") and go "up" (leaving the graphing "box" through the "top"), they're positive polynomials; if they start "up" and go "down",
they're negative polynomials.
All even-degree polynomials behave, on their ends, like quadratics, and all odd-degree polynomials behave, on their ends, like cubics.
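A quick way to check the rule numerically is a small sketch with a hypothetical end_behavior helper that looks only at the parity of the degree and the sign of the leading coefficient:

def end_behavior(coeffs):
    """coeffs are ordered from the leading term down, e.g. [1, 0, 0] for x^2."""
    degree = len(coeffs) - 1
    positive = coeffs[0] > 0
    if degree % 2 == 0:  # even degree: both ends match
        return "up on both ends" if positive else "down on both ends"
    # odd degree: the two ends head off in opposite directions
    return "down on the left, up on the right" if positive else "up on the left, down on the right"

print(end_behavior([1, 0, 0]))      # x^2   -> up on both ends
print(end_behavior([-1, 0, 0, 0]))  # -x^3  -> up on the left, down on the right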
• Which of the following could be the graph of a polynomial
whose leading term is "–3x^4"?
The important things to consider are the sign and the degree of the leading term. The exponent says that this is a degree-4 polynomial, so the graph will behave roughly like a quadratic: up on
both ends or down on both ends. Since the sign on the leading coefficient is negative, the graph will be down on both ends. (The actual value of the negative coefficient, –3 in this case, is
actually irrelevant for this problem. All I need is the "minus" part of the leading coefficient.)
Clearly Graphs A and C represent odd-degree polynomials, since their two ends head off in opposite directions. Graph D shows both ends passing through the top of the graphing box, just like a
positive quadratic would. The only graph with both ends down is:
Graph B
• Describe the end behavior of f(x) = 3x^7 + 5x + 1004
This polynomial is much too large for me to view in the standard screen on my graphing calculator, so either I can waste a lot of time fiddling with WINDOW options, or I can quickly use my
knowledge of end behavior.
This function is an odd-degree polynomial, so the ends go off in opposite directions, just like every cubic I've ever graphed. A positive cubic enters the graph at the bottom, down on the left,
and exits the graph at the top, up on the right. Since the leading coefficient of this odd-degree polynomial is positive, then its end-behavior is going to mimic a positive cubic.
"Down" on the left and "up" on the right.
Cite this article as: Stapel, Elizabeth. "Polynomial Graphs: End Behavior." Purplemath. Available from
http://www.purplemath.com/modules/polyends.htm. Accessed
|
{"url":"http://www.purplemaths.com/modules/polyends.htm","timestamp":"2014-04-17T21:32:04Z","content_type":null,"content_length":"27832","record_id":"<urn:uuid:e32f6940-2c3e-4946-b00d-c34d543995a4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Can someone show me work? Solve the system using elimination 5x+4y=12 3x-3y=18
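A worked sketch of one elimination route, assuming the system is exactly as written:

\begin{aligned}
3\times(5x + 4y = 12) &\Rightarrow 15x + 12y = 36\\
4\times(3x - 3y = 18) &\Rightarrow 12x - 12y = 72\\
\text{adding:}\quad 27x = 108 &\Rightarrow x = 4\\
3(4) - 3y = 18 &\Rightarrow y = -2
\end{aligned}

Check in the first equation: 5(4) + 4(-2) = 20 - 8 = 12.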
{"url":"http://openstudy.com/updates/507b5e4fe4b07c5f7c1f2a5e","timestamp":"2014-04-21T15:40:22Z","content_type":null,"content_length":"39629","record_id":"<urn:uuid:6591f02a-c364-476f-bb1e-6dffd1e19ccb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Paul C. Kainen
Adjunct Associate Professor
Department of Mathematics and Statistics
Georgetown University
Last edited Dec. 20, 2013
For information on current courses and related material, please see my index page.
My research page needs updating.
The following papers are available here; see also "explore page" publications for some selected papers.
With Rachel Hunter: Quadrilateral embedding of G x Q_s, Bulletin of the Institute of Combinatorics and Its Applications, 52 (Sept.)(2007) 13--20. See a pdf copy of the paper .
Topological constancy in the perception of Lissajous figures. Draft paper of 10 pages, Jan. 24, 2005.
(pdf file) Replacing points by compacta in neural network approximation Journal of the Franklin Institute, Vol. 341, No. 4, pp. 391--399, July 2004. A subset M of a metric space X is called
approximatively compact if for any x in X, any sequence in M which converges in distance to the infimum of the distances from x to m in M must contain a subsequence which is convergent to some
element in M. In particular, the infimum is achieved. It is shown that for subsets A and B of a metric space X, A x B (the cartesian product) is approximatively compact (ac) when A is ac and B is
compact. More briefly, the product of an ac and a compact set is ac. Since product with a point is the identity, this result embodies the title's description of a program proposed by Dugundji. It is
also shown that A + B is ac when A is ac and B is compact, where A and B are subsets of an F-space and so of a normed linear space, and A + B = {a + b: a in A and b in B}. The paper is dedicated to the
memory of Hewitt Kenyon, who was the author's topology professor at George Washington University and supervised his honors thesis.
(pdf file) On robust cycle bases , Proc. 9th Quadrennial Conf. on Graph Theory, Combinatorics, Algorithms and Applications, Ed. by Yousef Alavi, Dawn Jones, Don R. Lick and Jiuqiang Liu, Kalamazoo,
MI, 4--9 June 2000; conference in honor of Yousef Alavi. Introduces cycle bases for graphs satisfying a recursive condition, applications to "forcing" commutativity of diagrams, especially cubes.
This paper is available on the Elsevier site, Science Direct website (search for "robust cycle basis"); the citation is: "Electronic Notes in Discrete Mathematics," VOl. 11, (July 2002), article #
38, pp. 430--437.
(pdf file) Isolated squares in hypercubes and robustness of commutativity Cahiers de Topologie et Geom\'etrie Diff\'erentielle Cat\'egoriques, XLIII (2002) 213--220. On "blocking" the commutativity
of cubical diagrams and statistical commutativity.
(pdf file) An octonion model for physics Fourth International Conference on Emergence, Coherence, Hierarchy, and Organization (ECHO IV), Odense, Denmark, July 31 -- Aug. 4, 2000. This paper contains
some remarks on the octonions and their possible relevance for physics. There are also some connections with the 4-color theorem and quantum algebra.
With Shannon Overbay: Extension of a theorem of Whitney (pdf file) to appear in Applied Math Letters (AML 2315), Vol. 20, No. 7, July 2007. Here is an earlier version of this paper Book embeddings of
graphs and a theorem of Whitney (Tech. Report GUGU-2/25/03) which has been cited in the literature.
Papy16 (with Vera Kurkova and Marcello Sanguineti, in a normed linear space, the error functional of a compact subset is well-posed in the generalized Hadamard sense, when restricted to an approx.
compact subset)
(pdf file) Best approximation by linear combinations of characteristic functions of half-spaces, Journal of Approximation Theory, Volume 122, Number 2, June 2003, 151-159, with Vera Kurkova and
Andrew Vogt, in L_p of the unit cube in d-dimensional space, p in [1,oo), for n a positive integer, the set of all n-fold linear combinations of half-space characteristic functions, restricted to the
cube, is an approximatively compact set, so in particular best approx. exists using a fixed number of hidden units of Heaviside type in a feedforward neural net.
(pdf file) Best approximation by Heaviside perceptron networks Neural Networks 13(7) (2000) 695-697, with Vera Kurkova and Andrew Vogt. This was an announcement of the results proved in JAT 2003
above and considered the application to neural networks.
(pdf file) A graph-theoretic model for time, appeared in Computing Anticipatory Systems, Daniel M. Dubois, Ed., American Institute of Physics Conf. Proc. #573, 2001, pp. 490--495.) A graph model for
time which extends the usual path-model by making two vertices adjacent if they have distance at most 3 along the path; by a result of Harary and Kainen, the model is maximal planar. The specific
form of the model as ``cube of a path'' produces various combinatorial and analytic properties which are described here. This paper won a prize for best in its session at the conference.
A rather old resume is available.
List of all writings (reports, abstracts, preprints and papers): publications . Will be updated eventually.
|
{"url":"http://www9.georgetown.edu/faculty/kainen/homepage.html","timestamp":"2014-04-20T16:24:40Z","content_type":null,"content_length":"6545","record_id":"<urn:uuid:06b28350-ad6e-4054-a2e5-66bb6c56ebf6>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1-D Spatial Data
One-dimensional spatial data is very common in all geoscience related disciplines. Spatial trends or patterns are of key interest. The graph at left is a gamma ray well log from
Reading the Rocks from Wireline Logs (more info)
, put out by the Kansas Geological Survey. Geoscientists typically use gamma ray logs to infer shale content versus depth.
As another example, the image at left shows the vertical profile of both ozone (on July 8 and October 6) and temperature (October 6) over the South Pole. Plotting several variables on the same plot
can help make comparisons easier and can also help infer causal relationships between variables. Note that the lowest ozone concentrations are located where temperatures are low enough to allow Polar
Stratospheric Clouds to form.
A working knowledge of fundamental statistics (mean, variance, standard deviation, and correlation, and linear regression or trend analysis) is an important aspect of working with bivariate data.
learn more here
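A small sketch of those fundamentals applied to a 1-D profile (hypothetical depth and gamma-ray arrays; the linear trend comes from np.polyfit):

import numpy as np

# hypothetical well-log profile: depth (m) and gamma-ray reading (API units)
depth = np.array([100., 110., 120., 130., 140., 150.])
gamma = np.array([ 60.,  75.,  90.,  85., 110., 120.])

mean = gamma.mean()
std = gamma.std(ddof=1)                         # sample standard deviation
corr = np.corrcoef(depth, gamma)[0, 1]          # correlation of reading with depth
slope, intercept = np.polyfit(depth, gamma, 1)  # linear trend versus depth

print(mean, std, corr, slope)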
Graphs are often used to help students visualize data. Having students make their own graphs, read data/information from graphs, and describe graphs in their own words are all important "Using Data"
learning objectives.
learn more here
|
{"url":"http://serc.carleton.edu/introgeo/teachingwdata/1DSpacial.html","timestamp":"2014-04-17T12:45:18Z","content_type":null,"content_length":"24287","record_id":"<urn:uuid:e768fdbf-a36e-4c46-b906-8835f86d72f9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
|
September 04, 2009
Market Reductions
Last month, Ye Du defended his dissertation in Computer Science at the University of Michigan. His thesis included several interesting contributions at the boundary of economic and computational
theory, for example regarding the complexity of equilibrium computations, connecting game theory and general equilibrium.
One particular result I found interesting was a reduction from Markov chains to Cobb-Douglas exchange economies. That is, Ye showed that any Markov chain could be mapped to a set of consumers with
Cobb-Douglas utility functions, such that the steady-state probabilities of the chain correspond to competitive equilibrium prices in the economy. This result can be applied in a standard way to
solve Markov chains by computing economic equilibria, though there seems little advantage to this. More intriguing is the potential that the economic interpretation may lend to applications of Markov
chains. In his thesis, Ye notes the use of Markov chains in PageRank and similar algorithms, and suggests that variations on the Cobb-Douglas economy (i.e., different classes of utility functions)
could define a broad family of ranking algorithms.
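For the Markov-chain side of that correspondence, a minimal sketch (not the Cobb-Douglas construction itself) computes the steady-state probabilities as the left eigenvector of a hypothetical transition matrix for eigenvalue 1:

import numpy as np

# hypothetical 3-state transition matrix; row i holds P(next state | current state i)
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# the steady state pi satisfies pi P = pi, i.e. a left eigenvector for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()   # normalize to a probability distribution

print(pi)

In a PageRank-style interpretation, those probabilities are the ranking weights that the corresponding equilibrium prices would reproduce.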
To be frank, one reason the market reduction piqued my interest was that Dave Pennock and I did something analogous in a 1996 UAI paper, where we reduce Bayesian networks to production economies.
(Though Bayes nets are more general than Markov chains, Ye Du's reduction is technically superior for the latter case.)
Market reductions like these provide answers to the question "What can a market compute?", and we might hope also yield insights on the source problem for the reduction. Are there other examples of
market reductions? A catalog might be useful.
Posted by wellman at 09:22 PM
|
{"url":"http://mblog.lib.umich.edu/strategic/archives/computational_markets/index.html","timestamp":"2014-04-16T05:27:28Z","content_type":null,"content_length":"4518","record_id":"<urn:uuid:c487b820-f988-4560-b170-e6a2ce1fac0b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1. The ratio of adult tickets to student tickets for the school play was four to five. If the sum of the adult tickets and one half of the students tickets is 260, how many adult tickets were sold?
2. The hourly wages of Luis and Eric are in the ratio of 17:18. Today, they each worked nine hours. Eric earned $5.67 more than Luis for today's work. How much does Luis earn per hour?
3. Some money will be divided in the ratio of 1 to 3. Four times the smaller amount is five hundred ninety-five dollars less than three times the larger amount. William will receive the smaller
amount and Michael will receive the larger amount. How much will William receive?
4. The ratio of votes for Brian to votes for Jose in an election is 13:5. There were a total of 1,530 votes. How many people voted for Jose?
|
{"url":"http://www.edhelper.com/AlgebraWorksheet15.html","timestamp":"2014-04-17T04:05:33Z","content_type":null,"content_length":"3057","record_id":"<urn:uuid:679c2718-9f0c-47f4-987b-4d96f77218ee>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-dev] Updated generic optimizers proposal
william ratcliff william.ratcliff@gmail....
Wed Apr 25 18:19:26 CDT 2007
What I suggest is simply a slightly modified version of what's in
anneal.pyto add the following:
class simple_sa(base_schedule):
    def init(self, **options):
        if self.m is None:
            self.m = 1.0
        if self.n is None:
            self.n = 1.0
        self.c = self.m * exp(-self.n * self.quench)

    def update_guess(self, x0):
        x0 = asarray(x0)
        T = self.T
        myFlag = True
        while myFlag:
            u = squeeze(random.uniform(0.0, 1.0, size=self.dims))
            y = sign(u-0.5)*T*((1+1.0/T)**abs(2*u-1)-1.0)
            xc = y*(self.upper - self.lower)
            xt = x0 + xc                      # candidate move
            indu = where(xt > self.upper)     # find where it goes above the upper
            indl = where(xt < self.lower)     # below the lower bounds
            if ((indu[0].size == 0) & (indl[0].size == 0)):
                myFlag = False                # inside the box: accept; if it goes out
                                              # of the box, try a new guess (restart)
        xnew = xt
        return xnew

    def update_temp(self):
        self.T = self.T0*exp(-self.c * self.k**(self.quench))
        self.k += 1
This will keep solutions within specified upper and lower bounds. It does
not sequentially choose parameters to vary. It simply asks, is my trial
move valid? You can think of it as adding an infinite barrier at the
walls. Then, one asks, if the trail move is not valid, then what do I do?
The simplest solution is to stay where I am and to try again. It can still
get stuck on the walls if the true solution is outside of the bounds. It
should respect detailed balance. In anneal.py, if you have a glassy
function, with a downturn near one of the walls, then it will still be able
to explore outside of the specified bounds. There are some cases where you
have information (for example, energies are positive) that allow you to
constrain your search space and one should have the option of doing so. If
the search gets stuck on the walls and the limits given were based on
intuition rather than physical constraints, then one can simply expand the
limits to see if it is a question of the constraints being too tight.
On 4/25/07, dmitrey <openopt@ukr.net> wrote:
> Check for the case:
> objFun(x) = sum(x)
> x0 = [0 0 0 0 0 0 0 0 0 1] (or very small numbers instead of zeros,
> smaller than typicalDeltaX/50 for example)
> lb = zeros
> ub = ones (or any other)
> so if you use random shift for all coords (x = x_prev + deltaX, all
> coords of deltaX are random), the probability of "move" is 2^(-9)=1/512
> and probability of "stay" is 1-2^(-9) = 511/512.
> this is valid for current update_guess() from anneal
> class fast_sa:
>     def update_guess(self, x0):
>         x0 = asarray(x0)
>         u = squeeze(random.uniform(0.0, 1.0, size=self.dims))
>         T = self.T
>         y = sign(u-0.5)*T*((1+1.0/T)**abs(2*u-1)-1.0)
>         xc = y*(self.upper - self.lower)   # so xc = deltaX changes ALL coords
>         xnew = x0 + xc
>         return xnew
> class cauchy_sa(base_schedule):
>     def update_guess(self, x0):
>         x0 = asarray(x0)
>         numbers = squeeze(random.uniform(-pi/2, pi/2, size=self.dims))
>         xc = self.learn_rate * self.T * tan(numbers)
>         xnew = x0 + xc                     # ALSO modifies ALL coords
>         return xnew
> class boltzmann_sa(base_schedule):
>     def update_guess(self, x0):
>         std = minimum(sqrt(self.T)*ones(self.dims),
>                       (self.upper-self.lower)/3.0/self.learn_rate)
>         x0 = asarray(x0)
>         xc = squeeze(random.normal(0, 1.0, size=self.dims))
>         xnew = x0 + xc*std*self.learn_rate # ALSO modifies ALL coords
>         return xnew
> If you use random shift for 1 coord only (sequential) there can be other
> problems.
> WBR, D.
> william ratcliff wrote:
> > The 'simple' way applies only to the anneal algorithm in scipy. When
> > one chooses steps in a simulated annealing algorithm, there is always
> > the question of how to step from the current point. For anneal, it is
> > currently done based on an upper bound and lower bound (in one
> > option). However, there is nothing to prevent the searcher from
> > crawling its way out of the box. When most people imagine themselves
> > searching in a bounded parameter space, that is not the expected
> > behavior. Now, it is possible that the optimum solution is truly
> > outside of the box and that the searcher is doing the right thing.
> > However, if that is not the case, then there is a problem. So, what
> > is one to do? The first obvious thing to try is to say, if you reach
> > the edge of a bounded parameter, stay there. However, that is not
> > ideal as you get stuck and can't explore the rest of the phase space.
> > So, I use the simple heuristic that if a trial move is to take you
> > outside of the box, simply stay where you are. In the next cycle, try
> > to move again. This will keep you in the box, and if there is truly
> > a solution outside of the box, will still move you towards the walls
> > and let you know that maybe you've set your bounds improperly. Now,
> > there are questions of efficiency. For example, how easy is it to get
> > out of corners? Should one do reflections? However, I believe that
> > my rather simple heuristic will preserve detailed balance and results
> > in an algorithm that has the expected behavior and is better than
> > having no option ;>
> >
> > As for deprecation--is it really true that
> > scipy.optimize.anneal is deprecated?
> >
> > As for issues of this global optimizer or that global optimizer, why
> > not let the user decide based on their expectations of their fitting
> > surface? For some truly glassy surfaces, one is forced into
> > techniques like simulated annealing, parrallel tempering, genetic
> > algorithms, etc. and I imagine that their relative performance is
> > based strongly on the particular problem that their are trying to solve.
> >
> > Cheers,
> > WIlliam Ratcliff
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
More information about the Scipy-dev mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2007-April/007001.html","timestamp":"2014-04-20T01:55:37Z","content_type":null,"content_length":"10127","record_id":"<urn:uuid:4b79af87-e11e-4c87-90db-269aa2af0887>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ISQM Chapter 4a
Relationship between two (collinearity) or more (multicollinearity) variables. Variables exhibit complete collinearity if their correlation coefficient is 1 and a complete lack of collinearity if
their correlation coefficient is 0.
Condition index
Measure of the relative amount of variance associated with an eigenvalue so that a large ________________ indicates a high degree of collinearity.
Cook's distance (Di)
Summary measure of the influence of a single case (observation) based on the total changes in all other residuals when the case is deleted from the estimation process. Large values (usually greater
than 1) indicate substantial influence by the case in affecting the estimated regression coefficients.
Measure of the influence of a single observation on the entire set of estimated regression coefficients. A value close to 1 indicates little influence. If the _________ value minus 1 is greater than
± 3p/n (where p is the number of independent variables + 1, and n is the sample size), the observation is deemed to be influential based on this measure.
Deleted residual
Process of calculating residuals in which the influence of each observation is removed when calculating its residual. This is accomplished by omitting the ith observation from the regression equation
used to calculate its predicted value.
Measure of the change in a regression coefficient when an observation is omitted from the regression analysis. The value of _________ is in terms of the coefficient itself; a standardized form
(SDFBETA) is also available. No threshold limit can be established for DFBETA, although the researcher can look for values substantially different from the remaining observations to assess potential
influence. The SDFBETA values are scaled by their standard errors, thus supporting the rationale for cutoffs of 1 or 2, corresponding to confidence levels of .10 or .05, respectively.
Measure of an observation's impact on the overall model fit, which also has a standardized version (SDFFIT). The best rule of thumb is to classify as influential any
standardized values (SDFFIT) that exceed 2√(p/n), where p is the number of
independent variables + 1 and n is the sample size. There is no threshold value for
the DFFIT measure.
Measure of the amount of variance contained in the correlation matrix so
that the sum of the _____________ is equal to the number of variables. Also known as
the latent root or characteristic root.
Hat matrix
Matrix that contains values for each observation on the diagonal, known as
hat values, which represent the impact of the observed dependent variable on its predicted value. If all cases have equal influence, each would have a value of p/n, where p equals the number of
independent variables + 1, and n is the number of cases. If a case has no influence, its value would be ‐1 ÷ n, whereas total domination by a single case would result in a value of (n ‐ 1)/n. Values
exceeding 2p/n for larger samples, or 3p/n for smaller samples (n ≤ 30), are candidates for classification as influential observations.
Influential observation
Observation with a disproportionate influence on one or more aspects of the regression estimates. This influence may have as its basis (1) substantial differences from other cases on the set of
independent variables, (2) extreme (either high or low) observed values for the criterion variables, or (3) a combination of these effects. Influential observations can either be "good," by
reinforcing the pattern of the remaining data, or "bad," when a single or small set of cases unduly affects (biases) the regression estimates.
Leverage point
An observation that has substantial impact on the regression results due to its differences from other observations on one or more of the independent variables. The most common measure of a leverage
point is the hat value, contained in the hat matrix.
Mahalanobis distance (D2)
Measure of the uniqueness of a single observation based on differences between the observation's values and the mean values for all other cases across all independent variables. The source of
influence on regression results is for the case to be quite different on one or more predictor variables, thus causing a shift of the entire regression equation.
In strict terms, an observation that has a substantial difference between its
actual and predicted values of the dependent variable (a large residual) or between its independent variable values and those of other observations. The objective of denoting outliers is to identify
observations that are inappropriate representations of the population from which the sample is drawn, so that they may be discounted or even eliminated from the analysis as unrepresentative.
Regression coefficient variance-decomposition matrix
Method of determining the relative contribution of each eigenvalue to each estimated coefficient. If two or more coefficients are highly associated with a single eigenvalue (condition index), an
unacceptable level of multicollinearity is indicated.
Measure of the predictive fit for a single observation, calculated as the difference between the actual and predicted values of the dependent variable. Residuals are assumed to have a mean of zero
and a constant variance. They not only play a key role in determining if the underlying assumptions of regression have been met, but also serve as a diagnostic tool in identifying outliers and
influential observations.
Standardized residual
Rescaling of the residual to a common basis by dividing each
residual by the standard deviation of the residuals. Thus, standardized residuals have a mean of 0 and standard deviation of 1. Each standardized residual value can now be viewed in terms of standard
errors in middle to large sample sizes. This provides a direct means of identifying outliers as those with values above 1 or 2 for confidence levels of .10 and .05, respectively.
Studentized residual
Most commonly used form of standardized residual. It differs from other standardization methods in calculating the standard deviation employed. To minimize the effect of a single outlier, the
standard deviation of residuals used to standardize the ith residual is computed from regression estimates omitting the ith observation. This is done repeatedly for each observation, each time
omitting that observation from the calculations. This approach is similar to the deleted residual, although in this situation the observation is omitted from the calculation of the standard
Commonly used measure of collinearity and multicollinearity. The tolerance of variable i (TOLi) is 1 − Ri², where Ri² is the coefficient of determination for the prediction of variable i by the other
predictor variables. Tolerance values approaching zero indicate that the variable is highly predicted (collinear) with the other predictor variables.
Variance inflation factor (VIF)
Measure of the effect of other predictor variables on a regression coefficient. _______ is inversely related to the tolerance value ( ______ = 1 ÷ TOLi).
The __________ reflects the extent to which the standard error of the regression coefficient is increased due to multicollinearity. Large VIF values (a usual threshold is 10.0, which corresponds to a
tolerance of .10) indicate a high degree of collinearity or multicollinearity among the independent variables, although values of as high as four have been considered problematic.
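A small sketch of how tolerance and VIF follow from these definitions (hypothetical design matrix X with one column per predictor; each Ri² comes from regressing predictor i on the remaining predictors):

import numpy as np

# hypothetical design matrix: 3 predictors, one row per observation; x2 is nearly collinear with x1
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])

def tolerance_and_vif(X, i):
    """Regress predictor i on the remaining predictors and return (TOL_i, VIF_i)."""
    y = X[:, i]
    A = np.column_stack([np.delete(X, i, axis=1), np.ones(len(y))])  # others + intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid.var() / y.var()
    tol = 1.0 - r2
    return tol, 1.0 / tol

for i in range(X.shape[1]):
    print(i, tolerance_and_vif(X, i))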
|
{"url":"http://quizlet.com/3106367/isqm-chapter-4a-flash-cards/","timestamp":"2014-04-16T13:28:50Z","content_type":null,"content_length":"71843","record_id":"<urn:uuid:abed4c37-adfb-444f-b148-53eb832730ae>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Suitland ACT Tutor
Find a Suitland ACT Tutor
...Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals. I feel very strongly about helping students succeed in math because I believe a
true understanding of math can make many other subjects and future studies easier and more rewarding. I graduated from University of Virginia with a degree in economics and mathematics.
22 Subjects: including ACT Math, calculus, geometry, GRE
...I enjoy math and I am very patient. As an engineering professor, I use calculus often and provide one to one tutoring to my engineering students. All 3 of my children took Chemistry when they
were in high school.
15 Subjects: including ACT Math, chemistry, physics, ASVAB
...I have multiple ways of explaining things that help students grasp the knowledge that they need to be more successful learners. I have great culinary skills. I have been cooking for over 30
35 Subjects: including ACT Math, chemistry, reading, biology
...I have taken many math classes and received a perfect score on my SAT Math test. I have tutored students for years in standardized tests and provide a customized tutoring plan for each
individual student. I have a Master's degree in Chemistry and I am extremely proficient in mathematics.
11 Subjects: including ACT Math, chemistry, geometry, algebra 2
...I can show you how to avoid ever making a mistake in Spanish spelling, explain regular and irregular verb conjugations, how to use the Spanish subjunctive mood, the simple and easy ways to
learn Spanish noun genders, and how to use cognates to turn English words into Spanish and back again. Hist...
15 Subjects: including ACT Math, Spanish, English, reading
Nearby Cities With ACT Tutor
Capitol Heights ACT Tutors
Cheverly, MD ACT Tutors
Clinton, MD ACT Tutors
District Heights ACT Tutors
Fairmount Heights, MD ACT Tutors
Greenbelt ACT Tutors
Lanham Seabrook, MD ACT Tutors
Morningside, MD ACT Tutors
Oxon Hill ACT Tutors
Riverdale Park, MD ACT Tutors
Riverdale Pk, MD ACT Tutors
Riverdale, MD ACT Tutors
Seat Pleasant, MD ACT Tutors
South Bowie, MD ACT Tutors
Temple Hills ACT Tutors
|
{"url":"http://www.purplemath.com/suitland_act_tutors.php","timestamp":"2014-04-20T09:11:47Z","content_type":null,"content_length":"23633","record_id":"<urn:uuid:5ea086c1-6f0c-4256-b58b-fa7efbdc5840>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tarzana SAT Math Tutor
Find a Tarzana SAT Math Tutor
...I also, have great experience tutoring high school students in multiple subjects in one-to-one sessions throughout their academic year. Students I worked with have scored higher on their finals
and other placement tests. I am very flexible and available weekdays and weekends.
11 Subjects: including SAT math, chemistry, geometry, algebra 1
...My score as a junior put me in the 99th percentile (with a selection index of 231) and in the company of other National Merit Semifinalists. I later went on to become a National Merit Finalist,
and, ultimately, a National Merit Scholar. I have tutored students studying for the PSAT specifically...
60 Subjects: including SAT math, reading, Spanish, chemistry
...Cancellation Policy: I recognize that sickness, family events or other unexpected occurrences may arise which make it difficult to keep your reserved time slot. With 12 hours advance notice I'm
happy to reschedule your tutoring session at no additional cost. However, since it is nearly impossi...
17 Subjects: including SAT math, geometry, ASVAB, algebra 1
...I've even received honorable mention from the New York Songwriters Circle for a song I wrote. As a singer/songwriter, I've toured the U.S. playing colleges and venues like The Shrine and Ford
Amphitheatre L.A., and the Lincoln Center in NYC. If you have the passion and curiosity to combine words and music, I am well-equipped to guide you through the songwriting process!
42 Subjects: including SAT math, reading, English, elementary (k-6th)
...I first understand the student's weakness so that I do not waste time strengthening skills that a student already has mastered. Next, I demonstrate how to do certain problems or teach concepts
that I believe the student needs to work on. Finally, I present multiple problems relevant to the area of weakness to guarantee that the student has mastered the concept.
8 Subjects: including SAT math, statistics, biology, algebra 1
|
{"url":"http://www.purplemath.com/Tarzana_SAT_math_tutors.php","timestamp":"2014-04-18T05:36:42Z","content_type":null,"content_length":"23977","record_id":"<urn:uuid:29a5db24-0426-4b1a-8210-3d81ec3ad410>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Specific Heat
Einstein-Debye Specific Heats
The Einstein-Debye phonon model produced agreement with the low-temperature cubic dependence of specific heat upon temperature. Explaining the drastic departure from the Law of Dulong and Petit was a
major contribution of the Einstein and Debye models. The final step in explaining the low temperature specific heats of metals was the inclusion of the electron contribution to specific heat. When
these were combined, they produced the expression
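(The equation itself appears only as an image in the original page; the standard combined low-temperature form it refers to, an electronic term linear in T plus the Debye phonon term cubic in T, can be written as

$$C = \gamma T + \frac{12\pi^4}{5}\, N k_B \left(\frac{T}{\theta_D}\right)^3,$$

where γ is the electronic specific heat coefficient, N the number of atoms, k_B Boltzmann's constant, and θ_D the Debye temperature.)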
Note that the vibrational part is only the low temperature limit of the more general Debye specific heat. The data below show that the Debye phonon model with its cubic dependence on temperature
matches the silicon data to very low temperatures. The copper shows a departure from the cubic dependence, showing evidence of electron specific heat.
The vibrational term here is only the low temperature limit of the Debye specific heat expression; the full expression includes an integral which must be evaluated numerically. It produces good
agreement with the transition to the Dulong and Petit limit at high temperatures.
Note that the model for specific heat presented here uses both forms of quantum statistics. Bose-Einstein statistics is used to describe the contribution from lattice vibrations ( phonons), and
Fermi-Dirac statistics must be used to describe the electron contribution to the specific heat.
|
{"url":"http://hyperphysics.phy-astr.gsu.edu/HBASE/thermo/debye.html","timestamp":"2014-04-18T16:26:46Z","content_type":null,"content_length":"6182","record_id":"<urn:uuid:23c1dac0-96a0-402a-9092-776155aa7aed>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cynwyd, PA Geometry Tutor
Find a Cynwyd, PA Geometry Tutor
...My degree is from RPI in Theoretical Mathematics, and thus included taking Ordinary and Partial Differential Equations as well as Linear Algebra and a variety of other courses that require the use
of differential equations. I have worked with students on many courses which apply differential equat...
58 Subjects: including geometry, reading, GRE, biology
...I have taught students in grades 2-12 in a variety of settings - urban classrooms, after-school programs, summer enrichment, and summer schools. I work with students to develop strong
conceptual understanding and high math fluency through creative math games. Having worked with a diverse popula...
9 Subjects: including geometry, ESL/ESOL, algebra 1, algebra 2
...By the end of our session, I hope we can come away with a better understanding of how we have improved your writing style and how you can continue to do so on your own. The Detailed Version of
the Details: You might be asking yourself, "Why would a mechanical engineer be a good tutor for me or...
37 Subjects: including geometry, English, reading, precalculus
...Throughout my years tutoring all levels of mathematics, I have developed the ability to readily explore several different viewpoints and methods to help students fully grasp the subject
matter. I can present the material in many different ways until we find an approach that works and he/she real...
19 Subjects: including geometry, calculus, trigonometry, statistics
...I look forward to working with you!Algebra 1 lays the foundation for all higher math classes. This subject requires abstract thinking, which is why so many students struggle. It's not that
they aren't ready to think in this new way - they just need lots of time, practice, and patient understanding.
23 Subjects: including geometry, reading, writing, algebra 1
Related Cynwyd, PA Tutors
Cynwyd, PA Accounting Tutors
Cynwyd, PA ACT Tutors
Cynwyd, PA Algebra Tutors
Cynwyd, PA Algebra 2 Tutors
Cynwyd, PA Calculus Tutors
Cynwyd, PA Geometry Tutors
Cynwyd, PA Math Tutors
Cynwyd, PA Prealgebra Tutors
Cynwyd, PA Precalculus Tutors
Cynwyd, PA SAT Tutors
Cynwyd, PA SAT Math Tutors
Cynwyd, PA Science Tutors
Cynwyd, PA Statistics Tutors
Cynwyd, PA Trigonometry Tutors
Nearby Cities With geometry Tutor
Bala Cynwyd geometry Tutors
Bala, PA geometry Tutors
Belmont Hills, PA geometry Tutors
Carroll Park, PA geometry Tutors
Gulph Mills, PA geometry Tutors
Llanerch, PA geometry Tutors
Merion Park, PA geometry Tutors
Merion Station geometry Tutors
Merion, PA geometry Tutors
Miquon, PA geometry Tutors
Overbrook Hills, PA geometry Tutors
Penn Valley, PA geometry Tutors
Penn Wynne, PA geometry Tutors
Upton, PA geometry Tutors
Wynnewood, PA geometry Tutors
|
{"url":"http://www.purplemath.com/Cynwyd_PA_Geometry_tutors.php","timestamp":"2014-04-20T23:36:41Z","content_type":null,"content_length":"24119","record_id":"<urn:uuid:203a2601-8685-43c9-9d80-6aae69c2140e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nathaniel Johnston
February 6th, 2012
Final Fantasy XIII-2 is a role-playing game, released last week in North America, that contains an abundance of mini-games. One of the more interesting mini-games is the “clock puzzle”, which
presents the user with N integers arranged in a circle, with each integer being from 1 to $\lfloor N/2 \rfloor$.
The way the game works is as follows:
1. The user may start by picking any of the N positions on the circle. Call the number in this position M.
2. You now have the option of picking either the number M positions clockwise from your last choice, or M positions counter-clockwise from your last choice. Update the value of M to be the number in
the new position that you chose.
3. Repeat step 2 until you have performed it N-1 times.
You win the game if you choose each of the N positions exactly once, and you lose the game otherwise (if you are forced to choose the same position twice, or equivalently if there is a position that
you have not chosen after performing step 2 a total of N-1 times). During the game, N ranges from 5 to 13, though N could theoretically be as large as we like.
To demonstrate the rules in action, consider the following simple example with N = 6 (I have labelled the six positions 0 – 5 in blue for easy reference):
If we start by choosing the 1 in position 1, then we have the option of choosing the 3 in either position 0 or 2. Let’s choose the 3 in position 0. Three moves either clockwise or counter-clockwise
from here both give the 1 in position 3, so that is our only possible next choice. We continue on in this way, going through the N = 6 positions in the order 1 – 0 – 3 – 4 – 2 – 5, as in the
following image:
We have now selected each position exactly once, so we are done – we solved the puzzle! In fact, this is the unique solution for the given puzzle.
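Before counting puzzles, here is a minimal Python sketch of the brute-force search described in this post (an added illustration only; the function name and output format are chosen to mimic, not reproduce, the MATLAB script linked at the end of the post):

def solve_clock(values, find_all=False):
    # Brute-force search for winning orders in a clock puzzle.
    # values[i] is the number printed at position i, with position 0 at the
    # top and positions increasing clockwise.  Returns a list of solutions,
    # each a list of positions visited in order; empty if unsolvable.
    n = len(values)
    solutions = []

    def extend(path, visited):
        if len(path) == n:
            solutions.append(path[:])
            return not find_all              # stop after the first solution unless find_all
        m = values[path[-1]]
        for nxt in {(path[-1] + m) % n, (path[-1] - m) % n}:
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                if extend(path, visited):
                    return True
                path.pop()
                visited.remove(nxt)
        return False

    for start in range(n):
        if extend([start], {start}):
            break
    return solutions

# The N = 6 example above: positions 0..5 hold the numbers 3,1,3,1,2,3.
print(solve_clock([3, 1, 3, 1, 2, 3]))       # [[1, 0, 3, 4, 2, 5]]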
Counting Clock Puzzles
Let’s work on determining how many different clock puzzles there are of a given size. As mentioned earlier, a clock puzzle with N positions has an integer in the interval $[1, \lfloor N/2 \rfloor]$
in each of the positions. There are thus $\lfloor N/2 \rfloor^N$ distinct clock puzzles with N positions, which grows very quickly with N – its values for N = 1, 2, 3, … are given by the sequence 0,
1, 1, 16, 32, 729, 2187, 65536, 262144, … (A206344 in the OEIS).
However, this rather crude count of the number of clock puzzles ignores the fact that some clock puzzles have no solution. To illustrate this fact, we present the following simple proposition:
Proposition. There are unsolvable clock puzzles with N positions if and only if N = 4 or N ≥ 6.
To prove this proposition, first note that the clock puzzles for N = 2 or N = 3 are trivially solvable, since each number in the puzzle is forced to be $\lfloor N/2 \rfloor = 1$. The 32 clock puzzles
in the N = 5 case can all easily be shown to be solvable via computer brute force (does anyone have a simple or elegant argument for this case?).
In the N = 4 case, exactly 3 of the 16 clock puzzles are unsolvable:
To complete the proof, it suffices to demonstrate an unsolvable clock puzzle for each N ≥ 6. To this end, we begin by considering the following clock puzzle in the N = 6 case:
The above puzzle is unsolvable because the only way to reach position 0 is to select it first, but from there only one of positions 2 or 4 can be reached – not both. This example generalizes in a
straightforward manner to any N ≥ 6 simply by adding more 1′s to the bottom: it will still be necessary to choose position 0 first, and then it is impossible to reach both position 2 and position N-2
from there.
There doesn’t seem to be an elegant way to count the number of solvable clock puzzles with N positions (which is most likely related to the apparent difficulty of solving these puzzles, which will be
discussed in the next section), so let’s count the number of solvable clock puzzles via brute force. Simply constructing each of the $\lfloor N/2 \rfloor^N$ clock puzzles and determining which of
them are solvable (via the MATLAB script linked at the end of this post) shows that the number of solvable clock puzzles for N = 1, 2, 3, … is given by the sequence 0, 1, 1, 13, 32, 507, 1998, 33136,
193995, … (A206345 in the OEIS).
This count of puzzles is perhaps still unsatisfying, though, since it counts puzzles that are simply mirror images or rotations of each other multiple times. Again, there doesn’t seem to be an
elegant counting argument for enumerating the solvable clock puzzles up to rotation and reflection, so we compute this sequence by brute force: 0, 1, 1, 4, 8, 72, 236, 3665, 19037, … (A206346 in the OEIS).
Solving Clock Puzzles
Clock puzzles are one of the most challenging parts of Final Fantasy XIII-2, and with good reason: they are a well-studied graph theory problem in disguise. We can consider each clock puzzle with N
positions as a directed graph with N vertices. If a position contains the number M, then there is a directed edge going from that position's vertex to the vertices M positions clockwise and counter-clockwise from
it. In other words, we consider a clock puzzle as a directed graph on N vertices, where the directed edges describe the valid moves around the circle.
The problem of solving a clock puzzle is then exactly the problem of finding a directed Hamiltonian path on the associated graph. Because finding a directed Hamiltonian path in general is NP-hard,
this seems to suggest that solving clock puzzles might be as well. There is, of course, the caveat that the directed graphs relevant to this problem have very special structure – in particular, every
vertex has outdegree ≤ 2, and the graph has a symmetry property that results from the clockwise/counter-clockwise movement allowed in the clock puzzles.
The main result of [1] shows that the fact that the outdegree of each vertex is no larger than 2 is no real help: finding directed Hamiltonian paths is still NP-hard given such a promise. However,
the symmetry condition seems more difficult to characterize in graph theoretic terms, and could potentially be exploited to produce a fast algorithm for solving these puzzles.
Regardless of the problem’s computational complexity, the puzzles found in the game are quite small (N ≤ 13), so they can be easily solved by brute force. Attached is a MATLAB script (solve_clock.m)
that can be used to solve clock puzzles. The first input argument is a vector containing the numeric values in each of the positions, starting from the top and reading clockwise. By default, only one
solution is computed. To compute all solutions, set the second (optional) input argument to 1.
The output of the script is either a vector of positions (labelled 0 through N-1, with 0 referring to the top position, 1 referring to one position clockwise from there, and so on) describing an
order in which you can visit the positions to solve the puzzle, or 0 if there is no solution.
For example, the script can be used to find our solution to the N = 6 example provided earlier:
>> solve_clock([3,1,3,1,2,3])
ans =
     1     0     3     4     2     5
Similarly, the script can be used to find all four solutions [Update, October 1, 2013: Whoops, there are six solutions! See the comments.] to the puzzle in the screenshot at the very top of this post:
>> solve_clock([6,5,1,4,2,1,6,4,2,1,5,2], 1)
ans =
1. J. Plesnik. The NP-completeness of the Hamiltonian cycle problem in planar digraphs with degree bound two. Inform. Process. Lett., 8:199–201, 1979.
MATLAB Scripts for Computing Completely Bounded Norms via Semidefinite Programming
July 23rd, 2011
In operator theory, the completely bounded norm of a linear map on complex matrices $\Phi : M_m \rightarrow M_n$ is defined by $\|\Phi\|_{cb} := \sup_{k \geq 1} \| id_k \otimes \Phi \|$, where $\|\Phi\|$ is the usual norm on linear maps defined by $\|\Phi\| := \sup_{X \in M_m} \{ \|\Phi(X)\| : \|X\| \leq 1\}$ and $\|X\|$ is the operator norm of $X$ [1]. The completely bounded norm is
particularly useful when thinking of $M_m$ and $M_n$ as operator spaces.
The dual of the completely bounded norm is called the diamond norm, which plays an important role in quantum information theory, as it can be used to measure the distance between quantum channels.
The diamond norm of $\Phi$ is typically denoted $\|\Phi\|_{\diamond}$. For properties of the completely bounded and diamond norms, see [1,2,3].
A method for efficiently computing the completely bounded and diamond norms via semidefinite programming was recently presented in [4]. The purpose of this post is to provide MATLAB scripts that
implement this algorithm and demonstrate its usage.
Download and Install
In order to make use of these scripts to compute the completely bounded or diamond norm, you must download and install two things: the SeDuMi semidefinite program solver and the MATLAB scripts
1. SeDuMi – Please follow the instructions on the SeDuMi website to download and install it. If possible, you should install SeDuMi 1.1R3, not SeDuMi 1.21 or SeDuMi 1.3, since there is a bug with
the newer versions when dealing with complex matrices.
2. CB Norm MATLAB Package – Once SeDuMi is installed, download the CB norm MATLAB scripts, unzip them, and place them in your MATLAB scripts directory. The zip file contains 10 MATLAB scripts.
Once the scripts are installed, type “help CBNorm” or “help DiamondNorm” at the MATLAB prompt to learn how to use the CBNorm and DiamondNorm functions. Several usage examples are provided below.
Usage Examples
The representation of the linear map $\Phi$ that the CBNorm and DiamondNorm functions take as input is a pair of arrays of its left- and right- generalized Choi-Kraus operators. That is, an array of
operators $\{A_i\}$ and $\{B_i\}$ such that $\Phi(X) = \sum_i A_i X B_i$ for all $X$.
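(Purely to illustrate this input representation, and not as part of the MATLAB package itself, here is a small NumPy sketch that applies a map given as such a pair of operator arrays:

import numpy as np

def apply_map(A_ops, B_ops, X):
    # Apply Phi(X) = sum_i A_i X B_i for lists of left/right generalized Choi-Kraus operators.
    return sum(A @ X @ B for A, B in zip(A_ops, B_ops))

# The first 2x2 example below, Phi(X) = A_1 X B_1 + A_2 X B_2:
A_ops = [np.array([[1, 1], [1, 0]]), np.array([[1, 0], [1, 2]])]
B_ops = [np.eye(2), np.array([[1, 2], [1, 1]])]
print(apply_map(A_ops, B_ops, np.eye(2))))

Note the closing parenthesis count above: the last line should read print(apply_map(A_ops, B_ops, np.eye(2))).)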
Basic Examples
If we wanted to compute the completely bounded and diamond norms of the map
the MATLAB input and output would be as follows:
>> PhiA(:,:,1) = [1,1;1,0];
>> PhiA(:,:,2) = [1,0;1,2];
>> PhiB(:,:,1) = [1,0;0,1];
>> PhiB(:,:,2) = [1,2;1,1];
>> CBNorm(PhiA,PhiB)
ans =
    7.2684
>> DiamondNorm(PhiA,PhiB)
ans =
    7.4124
So we see that its completely bounded norm is 7.2684 and its diamond norm is 7.4124.
If we instead want to compute the completely bounded or diamond norm of a completely positive map, we only need to provide its Kraus operators – i.e., operators $\{A_i\}$ such that $\Phi(X) = \sum_i
A_i X A_i^\dagger$ for all $X$. Furthermore, in this case semidefinite programming isn’t used at all, since [1, Proposition 3.6] tells us that $\|\Phi\|_{cb} = \|\Phi(I)\|$ and $\|\Phi\|_{\diamond} =
\|\Phi^\dagger(I)\|$, and computing $\|\Phi(I)\|$ is trivial. The following example demonstrates the usage of these scripts in this case, via a completely positive map $\Phi : M_3 \rightarrow M_2$
with four (essentially random) Kraus operators:
>> PhiA(:,:,1) = [1 0 0;0 1 1];
>> PhiA(:,:,2) = [-3 0 1;5 1 1];
>> PhiA(:,:,3) = [0 2 0;0 0 0];
>> PhiA(:,:,4) = [1 1 3;0 2 0];
>> CBNorm(PhiA)
ans =
>> DiamondNorm(PhiA)
ans =
Transpose Map
Suppose we want to compute the completely bounded or diamond norm of the transpose map on $M_n$. A generalized Choi-Kraus representation is given by defining $A_{ij} = B_{ij} = e_i e_j^\dagger$,
where $\{e_i\}$ is the standard basis of $\mathbb{C}^n$ (i.e., $A_{ij}$ and $B_{ij}$ are the operators with matrix representation in the standard basis with a one in the $(i,j)$-entry and zeroes
elsewhere). It is known that the completely bounded and diamond norms of the n-dimensional transpose map are both equal to n, which can be verified in small dimensions as follows:
>> % 2-dimensional transpose
>> PhiA(:,:,1) = [1 0;0 0];
>> PhiA(:,:,2) = [0 1;0 0];
>> PhiA(:,:,3) = [0 0;1 0];
>> PhiA(:,:,4) = [0 0;0 1];
>> PhiB = PhiA;
>> CBNorm(PhiA,PhiB)
ans =
     2
>> DiamondNorm(PhiA,PhiB)
ans =
     2
>> % 3-dimensional transpose
>> I = eye(3);
>> for i=1:3
     for j=1:3
       PhiA(:,:,3*(i-1)+j) = I(:,i)*I(j,:);
     end
   end
>> PhiB = PhiA;
>> CBNorm(PhiA,PhiB)
ans =
     3
>> DiamondNorm(PhiA,PhiB)
ans =
     3
Difference of Unitary Channels
Now consider the map $\Phi : M_2 \rightarrow M_2$ defined by $\Phi(X) = X - UXU^\dagger$, where $U$ is the following unitary matrix: $U = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}$.
We know from [2, Theorem 12] that the CB norm and diamond norm of $\Phi$ are both equal to the diameter of the smallest closed disc containing all of the eigenvalues of $U$. Because the eigenvalues
of $U$ are $(1 \pm i)/\sqrt{2}$, the smallest closed disc containing its eigenvalues has diameter $\sqrt{2}$, so $\|\Phi\|_{cb} = \|\Phi\|_{\diamond} = \sqrt{2}$. This result can be verified as follows:
>> PhiA(:,:,1) = [1 0;0 1];
>> PhiA(:,:,2) = [1 1;-1 1]/sqrt(2);
>> PhiB(:,:,1) = [1 0;0 1];
>> PhiB(:,:,2) = -[1 -1;1 1]/sqrt(2);
>> CBNorm(PhiA,PhiB)
ans =
    1.4142
>> DiamondNorm(PhiA,PhiB)
ans =
    1.4142
1. V. I. Paulsen. Completely bounded maps and operator algebras. Cambridge University Press, 2003.
2. N. Johnston, D. W. Kribs, and V. I. Paulsen. Computing stabilized norms for quantum operations via the theory of completely bounded maps. Quantum Inf. Comput., 9:16-35, 2009.
3. J. Watrous. Theory of quantum information lecture notes.
4. J. Watrous. Semidefinite programs for completely bounded norms. Theory Comput., 5:217–238, 2009.
Separability-Preserving Operators in Entanglement Theory
June 14th, 2011
One of the key concepts in quantum information theory is the difference between separable states and entangled states. A pure quantum state (that is, a unit vector) v ∈ C^n ⊗ C^n is said to be
separable if it can be written as v = a ⊗ b for some a,b ∈ C^n; otherwise v is called entangled. In this post we will investigate what operators preserve the set of separable pure states, as well as
what operators entangle all separable pure states.
Separable Pure State Preservers and Entangling Gates
In the design of quantum algorithms, entangling gates play a very important role. Entangling gates are unitary operators that are able to generate entanglement. A bit more specifically, a unitary
operator U ∈ M[n] ⊗ M[n] (where M[n] is the space of n × n complex matrices) is called an entangling gate if there exists a separable pure state v = a ⊗ b ∈ C^n ⊗ C^n such that Uv is entangled.
Conversely, we will say that a unitary operator U preserves separability if Uv is separable whenever v is separable.
In order to answer the question of what unitaries preserve separability, it is instructive to consider some simple examples (this is often a useful way to formulate conjectures regarding preserver
problems). For example, it is clear that if U = A ⊗ B for some unitary operators A, B ∈ M[n], then U preserves separability (because U(a ⊗ b) = Aa ⊗ Bb is separable). Another example of a unitary
operator that preserves separability is the swap (or flip) operator S defined on separable states by S(a ⊗ b) = b ⊗ a (the action of S on the rest of C^n ⊗ C^n is determined by extending linearly).
It turns out that these are essentially the only operators that preserve separability [1,2,3]:
Theorem 1. Let U ∈ M[n] ⊗ M[n] be a unitary operator. Then U preserves separability (i.e., U is not an entangling gate) if and only if there exist unitary operators A, B ∈ M[n] such that either U = A
⊗ B or U = S(A ⊗ B).
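As a quick numerical illustration of the easy direction of Theorem 1 (a sketch written for this copy, not code from the cited papers): a pure state v in C^n ⊗ C^n is separable exactly when the n × n matrix obtained by reshaping v has rank 1, so the Schmidt rank computed below distinguishes separable from entangled outputs.

import numpy as np

def schmidt_rank(v, n, tol=1e-10):
    # Number of nonzero Schmidt coefficients of v in C^n tensor C^n.
    # Rank 1 means separable; rank greater than 1 means entangled.
    s = np.linalg.svd(v.reshape(n, n), compute_uv=False)
    return int(np.sum(s > tol))

n = 3
rng = np.random.default_rng(0)
a = rng.normal(size=n) + 1j * rng.normal(size=n); a /= np.linalg.norm(a)
b = rng.normal(size=n) + 1j * rng.normal(size=n); b /= np.linalg.norm(b)
A, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
B, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

v = np.kron(a, b)                           # a separable pure state a tensor b
print(schmidt_rank(v, n))                   # 1
print(schmidt_rank(np.kron(A, B) @ v, n))   # still 1: A tensor B preserves separability
G, _ = np.linalg.qr(rng.normal(size=(n*n, n*n)) + 1j * rng.normal(size=(n*n, n*n)))
print(schmidt_rank(G @ v, n))               # almost surely greater than 1: a generic unitary entangles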
As we already saw, the “if” direction of the above result is trivial – the meat and potatoes of the theorem comes from the “only if” direction (as is typically the case with results about linear
preservers). Theorem 1 was first proved in [1] essentially by case analysis and checking the action of a separability-preserving unitary on a basis of C^n ⊗ C^n, and was subsequently re-proved using
similar techniques (but with different motivations and connections) in [2]. The result was proved in [3] by using the vector-operator isomorphism and the fact that a linear map Φ : M[n] → M[n]
preserves the set of rank-1 operators if and only if there exist A, B ∈ M[n] such that either Φ(X) ≡ AXB or Φ(X) ≡ AX^tB [4].
Theorem 1 also follows as a simple corollary of several related results that have recently been proved in [5,6]. A version of Theorem 1 for multipartite systems (i.e., systems that are the tensor
product of more than two copies of C^n) can be found in [3] and [7].
Universal Entangling Gates
A universal entangling gate is, as its name suggests, a stronger form of an entangling gate – it is a unitary operator U such that U(a ⊗ b) is entangled for all a, b ∈ C^n (contrast this with
entangling gates, which require only that U(a ⊗ b) is entangled for some a, b ∈ C^n). The structure of universal entangling gates is much less well-understood than that of entangling gates, though we
can still at least say when they exist.
It is not difficult to convince yourself that universal entangling gates can’t exist in small dimensions. Let’s begin by supposing n = 2. The set of pure states in C^2 ⊗ C^2 can be regarded as a
7-dimensional real manifold (7 = 2 × (n × n) – 1, where we subtract one because pure states all have unit length), while the set of separable pure states in C^2 ⊗ C^2 can be regarded as a
5-dimensional real manifold (5 = (2 × n – 1) + (2 × n – 1) – 1, where the final one is subtracted because the overall phase of the first system relative to the second system is irrelevant). Thus, if
U ∈ M[2] ⊗ M[2] were a universal entangler, it would have to send a 5-dimensional manifold into the 7 – 5 = 2 remaining dimensions of the space, which seems unlikely. Similarly, if n = 3 and U ∈ M[3]
⊗ M[3] were a universal entangler, it would have to send a 9-dimensional manifold into the 17 – 9 = 8 remaining dimensions of the space, which also seems unlikely.
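To make the dimension count above concrete, here is a back-of-the-envelope sketch of the same heuristic (an added illustration, not the algebraic-geometry argument of [8]):

def manifold_dims(n):
    # Real dimensions used in the heuristic count above.
    total = 2 * n * n - 1                  # unit vectors in C^n tensor C^n
    separable = 2 * (2 * n - 1) - 1        # product states, up to a relative phase
    return separable, total - separable

for n in range(2, 6):
    sep, remaining = manifold_dims(n)
    print(n, sep, remaining, "room for a universal entangler" if remaining >= sep else "too tight")

The threshold where the remaining dimensions could accommodate the image of the separable manifold is n = 4, matching Theorem 2 below.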
Indeed, this type of argument was made rigorous via methods of algebraic geometry in [8], where the following result was proved:
Theorem 2. There exists a universal entangling gate in M[n] ⊗ M[n] if and only if n ≥ 4.
Despite knowing when universal entangling gates exist, we still don’t have a characterization of such operators, nor do we even have many explicit examples (does anyone have an explicit example for 3
⊗ 4 or 4 ⊗ 4 systems?). Similar techniques to those used in the proof of Theorem 2 should also shed light on when universal entangling gates exist in multipartite systems M[n1] ⊗ M[n2] ⊗ … ⊗ M[nk],
but to my knowledge this calculation has not been explicitly carried out.
1. M. Marcus and B. N. Moyls, Transformations on tensor product spaces. Pacific Journal of Mathematics 9, 1215–1221 (1959).
2. F. Hulpke, U. V. Poulsen, A. Sanpera, A. Sen De, U. Sen, and M. Lewenstein, Unitarity as preservation of entropy and entanglement in quantum systems. Foundations of Physics 36, 477–499 (2006).
E-print: arXiv:quant-ph/0407118
3. N. Johnston, Characterizing Operations Preserving Separability Measures via Linear Preserver Problems. To appear in Linear and Multilinear Algebra (2011). E-print: arXiv:1008.3633 [quant-ph]
4. L. Beasley, Linear operators on matrices: the invariance of rank k matrices. Linear Algebra and its Applications 107, 161–167 (1988).
5. E. Alfsen and F. Shultz, Unique decompositions, faces, and automorphisms of separable states. Journal of Mathematical Physics 51, 052201 (2010). E-print: arXiv:0906.1761 [math.OA]
6. S. Friedland, C.-K. Li, Y.-T. Poon, and N.-S. Sze, The automorphism group of separable states in quantum information theory. Journal of Mathematical Physics 52, 042203 (2011). E-print:
arXiv:1012.4221 [quant-ph]
7. R. Westwick, Transformations on tensor spaces. Pacific Journal of Mathematics 23, 613–620 (1967).
8. J. Chen, R. Duan, Z. Ji, M. Ying, J. Yu, Existence of Universal Entangler. Journal of Mathematical Physics 49, 012103 (2008). E-print: arXiv:0704.1473 [quant-ph]
The Q-Toothpick Cellular Automaton
March 26th, 2011
The Q-toothpick cellular automaton (defined earlier this month by Omar E. Pol) is described by the following simple rules:
1. On an infinite square grid, draw a quarter circle from one corner of a square to the opposite corner of that square:
2. Call an endpoint of a quarter circle (or a “Q-toothpick”) exposed if it does not touch the endpoint of any other quarter circle.
3. From each exposed endpoint, draw two more quarter circles, each of the same size as the first quarter circle you drew. Furthermore, the two quarter circles that you draw are the ones that can be
drawn “smoothly” (without creating a 90° or 180° corner). Thus the next two generations of the automaton are (already-placed quarter circles are green, newly-added quarter circles are red):
The name “Q-toothpick” comes from its analogy to the more well-studied toothpick automaton (see Sloane’s A139250 and this paper), in which toothpicks (rather than quarter circles) are repeatedly
placed on a grid where exposed ends of other toothpicks lie. In this post, we will examine how this automaton evolves over time, and in particular we will investigate the types of shapes that it traces out.
Counting Q-Toothpicks
While the Q-toothpick automaton appears quite random and unpredictable for the first few generations, evolving past generation 6 or so reveals several patterns. The following image depicts the
evolution of the automaton for its first 19 generations.
Perhaps the most notable pattern is that the grid is more or less filled up in an expanding square starting from the initial Q-toothpick. In fact, by inspecting generations 4, 6, 10, 18, we see that
at generation 2^n + 2 (n = 1, 2, 3, …) the automaton has roughly filled in a square of side length 2^(n+1) + 1, and then evolution continues from there on out of the corners of that square. Also, the
number of cells added (A187211) at these generations can now easily be computed:
A187211(2^n + 2) = 16 + 8(2^(n-1) – 1) for n ≥ 3.
Furthermore, the growth in the following generations repeats itself. In particular, we have:
A187211(2^n + 3) = 22 for n ≥ 1,
A187211(2^n + 4) = 40 for n ≥ 2,
A187211(2^n + 5) = 54 for n ≥ 2.
Similarly, for n ≥ 3, the four values of A187211(2^n + 6) through A187211(2^n + 9) are constant as well (their values are 56, 70, 120, and 134). In general, for n ≥ k the 2^(k-1) values of A187211(2^n + 2^(k-1) + 2) through A187211(2^n + 2^k + 1) are constant in n, though I am not aware of a general formula for what these constants are. If we ignore the first four generations and arrange the
number of Q-toothpicks added in each generation in rows of length 2^n, we obtain a table that begins as follows:
22, 20
22, 40, 54, 40
22, 40, 54, 56, 70, 120, 134, 72
22, 40, 54, 56, 70, 120, 134, 88, 70, 120, 150, 168, 246, 360, 326, 136
C scripts are provided at the end of this post for computing the values of A187210 and A187211 (and hence the values in the above table).
Shapes Traced Out by Q-Toothpicks
In the graphic above that depicts the initial 19 generations of the Q-toothpick automaton, several shapes are traced out, including circles, diamonds, hearts, and several nameless blobs:
By far the most common of these shapes are circles, diamonds and hearts. The fourth shape appears only on the diagonal and it’s not difficult to see that it forever will make up the entirety of the
diagonal (with the exception of the circle in the center). The fifth and sixth objects are the first two members of an infinite family of objects that appear as the automaton evolves. The fifth
object first appears in generation 9, and sixth object (which is basically two copies of the fifth object) first appears in generation 17. The following object, which is basically made up of two
copies of the sixth object (i.e., four copies of the fifth object) first appears in generation 33:
In general, a new object of this type (made of 2^n copies of the fifth object above) first appears in generation 2^(n+3) + 1. In fact, these objects are the only ones that are traced out by this
automaton. [Edit: this final claim is not true! See ebcube's great post that shows a double-heart shape in generation 31.]
Update [March 28, 2011]: I have added a script that counts the number of circles, diamonds, and hearts in the nth generation of the Q-toothpick automaton, and another script that computes Sloane’s
The Maximum Score in the Game “Entanglement” is 9080
January 21st, 2011
Entanglement is a browser-based game that has gained a fair bit of popularity lately due to its recent inclusion in Google’s Chrome Web Store and Chrome 9. The way the game works is probably best
understood by actually playing it, but here is my brief attempt:
• You are given a hexagonal tile with six paths printed on it, with two path ends touching each side of the hexagon. One such tile is as follows:
• You may rotate, but not move the hexagon that has been provided to you.
• Once you have selected an orientation of the hexagon, a path is traced along that hexagon, and you are provided a new hexagon that you may rotate at the end of your current path.
• The goal of the game is to create the longest path possible without running into either the centre hexagon or the outer edge of the game board.
To make things a bit more interesting, the game was updated in November 2010 to include a new scoring system that gives you 1 + 2 + 3 + … + n (the nth triangular number) points on a turn if you
extend the length of your path by n on that turn. This encourages clever moves that significantly extend the length of the path all at once. The question that I am going to answer today is what the
maximum score in Entanglement is under this scoring system (inspired by this reddit thread).
On a Standard-Size Game Board
The standard Entanglement game board is made up of a hexagonal ring of 6 hexagons, surrounded by a hexagonal ring of 12 hexagons, surrounded by a hexagonal ring of 18 hexagons, for a total of 36
hexagons. In order to maximize our score, we want to maximize how much we increase the length of our path on our final move. Thus, we want to just extend our path by a length of one on each of our
first 35 moves, and then score big on the 36th move.
Well, each hexagon that we lay has six paths on it, for a total of 6*36 = 216 paths on the board. 35 of those paths will be used up by our first 35 moves. It is not possible to use all of the
remaining 181 paths, however, because many of them lead into the edge of the game board or the central hexagon, and connecting to such a path immediately ends the game. Because there are 12 path ends
that touch the central hexagon and 84 path ends that touch the outer border, there must be at least (12+84)/2 – 1 = 47 unused paths on the game board (we divided by 2 because each unused path takes
up two path ends and we subtracted 1 because one of the paths will be used by us).
Thus we can add a length of at most 181 – 47 = 134 to our path on the 36th and final move of the game, giving a total score of at most 35 (from the first 35 moves of the game) + 1 + 2 + 3 + … + 134 =
35 + 9045 = 9080. Not only is this an upper bound of the possible scores, but it is actually attainable, as demonstrated by the following optimal game board:
Paths in red are unused, the green line depicts the portion of the path laid by the first 35 moves of the game, and the blue line depicts the portion of the path (of length 134) gained on the 36th
move. One fun property of the above game board is that it is actually completely “unentangled” – no paths cross over any other paths.
On a Larger or Smaller Game Board
Other than being a good size for playability purposes, there is no reason why we couldn’t play Entanglement on a game board of larger or smaller radius (by radius I mean the number of rings of
hexagons around the central hexagon – the standard game board has a radius of 3). We will compute the maximum score simply by mimicking our previous analysis for the standard game board. If the board
has radius n, then there are 6 + 12 + 18 + … + 6n = 3n(n+1) hexagons, each of which contains 6 paths. Thus there are 18n(n+1) lengths of path, 3n(n+1)-1 of which are used in the first 3n(n+1)-1 moves
of the game, and we want to add as many as possible of the remaining 15n(n+1)+1 lengths of path in the final move of the game. There are 12 path ends that touch the central hexagon and 12 + 24n path
ends that touch the outer edge of the game board. Thus there are at least (12 + 12 + 24n)/2 – 1 = 11 + 12n unused paths on the game board.
Tallying the numbers up, we see that on the final move, we can add at most 15n(n+1)+1 – (11 + 12n) = 15n^2 + 3n – 10 lengths of path. If T(n) = n(n+1)/2 is the nth triangular number, then we see that
it’s not possible to obtain more than 3n(n+1)-1 + T(15n^2 + 3n – 10) = (225/2)n^4 + 45n^3 – 135n^2 – (51/2)n + 44 points. In fact, this score is obtainable via the exact same construction as the
optimal board in the n = 3 case – just extend the (counter)clockwise rotation of the path in the obvious way. Thus, the maximum score for a game of Entanglement on a board of radius n for n = 1, 2,
3, … is given by the sequence 41, 1613, 9080, 29462, 72479, … (A180667 in the OEIS).
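As a quick sanity check of this closed form (a sketch added to this copy, not from the original post), the following reproduces the maximum-score sequence directly from the counting argument above:

def max_entanglement_score(n):
    # Maximum score on a board of radius n, following the argument above.
    hexes = 3 * n * (n + 1)                  # hexagons on the board
    first_moves = hexes - 1                  # one point per move for every move but the last
    last_gain = 15 * n**2 + 3 * n - 10       # path length added on the final move
    return first_moves + last_gain * (last_gain + 1) // 2    # plus the triangular-number bonus

print([max_entanglement_score(n) for n in range(1, 6)])
# [41, 1613, 9080, 29462, 72479]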
Further Variants of the “Look-and-Say” Sequence
January 13th, 2011
In two previous posts, I explored Conway’s famous “look-and-say” sequence 1, 11, 21, 1211, 111221, 312211, …, obtained by repeatedly describing the sequence’s previous term, as well as a simple
binary variant of the sequence. In this post I will use similar techniques to explore some further variations of the sequence – a version where each term in the sequence is read in ternary, and a
related sequence where no digit larger than 2 may be used when describing its terms.
As with the regular look-and-say sequence, the way we will attack these sequences is by constructing a “periodic table” of elementary non-interacting subsequences that all terms in the sequence are
made up of. Then standard recurrence relation techniques will allow us to determine the rate of growth of the length of the terms in the sequences as well as the limiting distribution of the
different digits in the sequence.
The Ternary Look-and-Say Sequence
Since we have already looked at the regular (i.e., decimal) look-and-say sequence, which is equivalent to the base-4 version of the sequence since it never contains a digit of 4 or larger, and we
have also looked at the binary version of the sequence, it makes sense to ask what happens in the intermediate case of the ternary (base-3) version of the sequence: 1, 11, 21, 1211, 111221, 1012211,
… (see A001388).
As always, we begin by listing the noninteracting subsequences that make this version of the sequence tick. Not surprisingly, it is more complicated than the corresponding table (of 10 subsequences)
in the binary case, but not as complicated as the corresponding table (of 92 subsequences) in the decimal case.
# Subsequence Evolves Into
1 1 (3)
2 10 (5)
3 11 (19)
4 110 (21)
5 1110 (2)(4)
6 111210 (2)(8)
7 111221 (2)(16)
8 1121110 (22)(4)
9 112211 (23)
10 112221 (21)(20)
11 11222110 (21)(24)
12 1122211210 (21)(25)
13 1211 (7)
14 121110 (6)(4)
15 1221 (9)
16 12211 (10)
17 122110 (11)
18 1221121110 (12)(4)
19 21 (13)
20 211 (15)
21 2110 (17)
22 211210 (18)
23 212221 (14)(20)
24 22110 (26)
25 221121110 (27)(4)
26 222110 (2)(24)
27 22211210 (2)(25)
The (27×27) transition matrix for this evolution rule is included in the text file at the end of this post. Its characteristic polynomial is
The maximal eigenvalue of the transition matrix is thus the largest root of x^3 – x – 1, which is approximately 1.324718. It follows that the number of digits in the terms of this sequence grows on
average by about 32.5% from one term to the next.
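A quick numerical check of that root (an added illustration, not from the original post):

import numpy as np
roots = np.roots([1, 0, -1, -1])                            # coefficients of x^3 - x - 1
print(max(r.real for r in roots if abs(r.imag) < 1e-12))    # about 1.324718, i.e. roughly 32.5% growth per term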
The Look-and-Say Sequence with Digits 1 and 2
Closely related to the ternary version of the sequence is the sequence obtained by reading the previous term in the sequence, but with the restriction that you can never use a number larger than 2
(see A110393). This sequence begins 1, 11, 21, 1211, 111221, 21112211, …, and the sixth term is obtained by reading the fifth term as “two ones, one one, two twos, one one”. Because only two
different digits appear in this sequence, it is perhaps not surprising that its table of noninteracting subsequences is quite simple:
# Subsequence Evolves Into
1 1 (2)
2 11 (5)
3 111 (7)
4 1211 (3)(6)(1)
5 21 (4)
6 22 (6)
7 2111 (1)(6)(3)
The transition matrix associated with this evolution rule is
As before, the average rate of growth of the number of digits in the terms of this sequence is determined by the magnitude of the largest eigenvalue of this matrix. A simple calculation reveals that
this eigenvalue is √φ = 1.272…, where φ = (1 + √5)/2 is the golden ratio. Furthermore, we can answer the question of how many 1s there are in the terms of this sequence compared to 2s by looking at
the eigenvector corresponding to the maximal eigenvalue:
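The matrix and eigenvector themselves appear only as images in the original post; as a rough reconstruction from the evolution table above (an added sketch, with the ordering of subsequences 1 through 7 assumed to match the table):

import numpy as np

# Evolution rule from the table above: subsequence j evolves into the listed subsequences.
evolves = {1: [2], 2: [5], 3: [7], 4: [3, 6, 1], 5: [4], 6: [6], 7: [1, 6, 3]}

# T[i, j] = number of copies of subsequence i+1 produced when subsequence j+1 evolves.
T = np.zeros((7, 7))
for j, children in evolves.items():
    for i in children:
        T[i - 1, j - 1] += 1

w, V = np.linalg.eig(T)
k = int(np.argmax(w.real))                 # index of the dominant eigenvalue
phi = (1 + np.sqrt(5)) / 2
print(w[k].real, np.sqrt(phi))             # both approximately 1.27202
v = np.abs(V[:, k].real)                   # Perron eigenvector: relative subsequence frequencies
print(v[1] / v[3], phi)                    # subsequence 11 occurs phi times as often as 1211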
What this means is, for example, that the second elementary subsequence (11) occurs φ times as frequently as the fourth elementary subsequence (1211). By weighting the subsequences by the entries in
this vector appropriately, we can calculate the limiting ratio of the number of ones to the number of twos as
Download: Transition matrices [plaintext file]
|
{"url":"http://www.njohnston.ca/page/2/","timestamp":"2014-04-20T23:26:48Z","content_type":null,"content_length":"83414","record_id":"<urn:uuid:ff202bc4-8886-4396-8f0c-afe3b91c689f>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bayesian informatics
Bayesian informatics
Bayesian informatics is a convergent field that connects tools of statistical inference (like parameter-estimation & model-selection) to thermal physics, network analysis, the code-based sciences
(like genetics, linguistics, & computer science), and even the role of information industries in the evolution of complex systems. To the extent that Bayesian inference is the science of making the
most of limited data, one might even imagine that data-driven crime scene investigation is a special case.
Find here links and possible titles in a developing series of talks, as well as a cross-disciplinary course on Bayesian data analysis for the sciences.
Cross-disciplinary courses
The focus of these courses is on two things. The first-focus is "inverse problems" i.e. the inverse-complement to the "forward" problem of predicting outcomes given a model. This science, of
selecting a model and its parameters given observations, has been moved forward by developments in more than one field. The second-focus is the role of log-probability (i.e. information) measures of
sub-system correlation in the making of such inferences, and in understanding complex-systems generally. The senior-undergrad course is a follow-up to Fall 2010 development work with a group of physics
grad-students, two of whom were fresh out of a graduate stat-mech course using Sethna's cross-disciplinary text. The working texts for that project were Phil Gregory's text on Bayesian Data Analysis
and Martin Nowak's text on Evolutionary Dynamics.
When offered e.g. in Fall 2012 as a senior-undergrad course, the course boiler-plate and syllabus will look something like:
PHYSICS 4305 Bayesian Data Analysis for the Sciences; Prerequisite: Consent of Instructor (3.0 credit hours)
This is a cross-disciplinary course in two parts. Part one covers Bayesian inference as applied to data analysis in general, with a special focus on the mathematics of model-selection in the physical
and life sciences. Part two concentrates specifically on the Bayesian use of log-probability (i.e. information) measures to track order-disorder transitions in thermodynamics, and to track the
evolution of sub-system correlations (via both digital and analog means) in a wide variety of complex systems. Expect weekly empirical observation exercises, and opportunities for asynchronous as
well as synchronous collaboration.
Meetings: TuTh 12:30pm-1:45pm (recorded class-sessions will be made available 24/7)
Room: All interactive classes will be accessible live through Wimba classroom. They will likely be organized into short (e.g. 15 minute) sessions of blackboard work, followed by a problem to work
together. Session recordings can be viewed after-the-fact. Joint-editing space for asynchronous technical-collaboration on class material 24/7 using media-wiki software will also be made available, in
part to facilitate participation by students not able to participate in class synchronously. Students in St. Louis can attend the classes in person, with computer-access for electronic-interaction in
Benton Rm. 225.
Text: Bayesian logical data analysis for the physical sciences: A comparative approach with Mathematica support by Phil Gregory (Cambridge U. Press, 2005).
Other materials: Regular browser-access (e.g. several times per week) to the course Blackboard site and the collaboration wiki through an internet browser is required. Access to Mathematica, or
another high-level language with symbolic, numeric, and model-plotting capabilities, is recommended as well.
Topics to be covered in the course:
• 1. Bayesian inference
□ 1.1 Empirical-observation and scientific-inference (ch1)
□ 1.2 Probability-theory as extended logic (ch2)
□ 1.3 Nuts & bolts of Bayes theorem (ch3)
□ 1.4 Assigning probabilities (ch4)
• 2. Multiplicities & surprisals
□ 2.1 Quantifying risk & evidence
□ 2.2 Choice multiplicities & entropies
□ 2.3 Matchup multiplicities & KL-divergence
□ 2.4 Bayesian maximum entropy (ch8)
• 3. Parameter-estimation & model-selection
□ 3.1 Inference with Gaussian errors (ch9)
□ 3.2 Linear model fitting (ch10)
□ 3.3 Non-linear fit strategies (ch11)
□ 3.4 Bayesian model-selection e.g. AIC
• 4. Correlation-based complexity
□ 4.1 Sub-system correlations in thermal physics
□ 4.2 Ordered-energy & Gibbs availability
□ 4.3 Symmetry-breaking & correlation-hierarchies
□ 4.4 Layered-network analysis-tools
Goals of the course: The first goal is to introduce Bayesian-inference as applied to data analysis in a cross-disciplinary context, with a special focus on the mathematics of model-selection (e.g. as
a complement to parameter-estimation) in the physical and life sciences. The second goal is to bring students up to speed (in context of their background & interests) on the Bayesian use of
log-probability (i.e. information) measures to track order-disorder transitions in thermodynamics as well as the evolution of sub-system correlations (via both digital and analog means) in a wide
variety of complex systems.
If you are considering the Fall 2012 on-line version of the course and prefer, you may contact the instructor by e-mail (pfraundorf at umsl) about access to the course-development/collaboration wiki
over the summer. Students at umsl can already access course-related notes under Bayesian informatics on the campus wiki.
Course policies: Expect weekly empirical observation exercises, and opportunities for asynchronous as well as synchronous collaboration.
Final grade makeup:
• Weekly "pre-chapter" experiment reports: 30%. These give participants a reason to have worked with Chapter material before it comes up in class.
• Participation credit: 10%. This will involve synchronous participation in Wimba classroom peer-instruction quizzes and/or (e.g. for those only participating asynchronously) regular-contribution
to discussions on the course media-wiki site.
• Four Section Exams: 40%. Two will be (locally on-site) synchronous paper-submit, and two will be asynchronous electronic-submit through Blackboard.
• Comprehensive final exam (locally on-site paper-submit): 20%.
Local on-site exams:
• The two on-site section-exams will be given in class on Tuesdays during regular class, dates TBA.
• The comprehensive final exam will be given in class during the finals week, exact date/time TBA.
• Off-campus students will need to arrange for a faculty sponsor who would (i) administer the test, (ii) ensure test security, (iii) transmit the test to me for grading.
Makeup tests:
• In exceptional cases of documented medical or personal emergencies, a makeup test will be provided.
Model-selection and your inner detective
Clues to this might be found in these April 2011 journal club talk slides on "Astronomer Phil Gregory's book & model-selection in physics".
Paths to simplification of evolving idea-sets
Clues to this may be found in this June 2011 (pr)eprint on metric-first & entropy-first approaches, and in links on our information physics page including these slides from the 1998 Jaynes symposium
and these notes on a summer 1999 crackerbarrel organized with Edwin Taylor.
Task-layer health-tracking with attention-slice updates
Clues to this may be found in this May 2010 contest entry on attention-slice status updates, and in these notes on an earlier NSF proposal draft.
Internalizing participatory ideastreams
Clues to this might be found in these notes on invitation-only joint-editing space.
Interactive talk notes
These might be good places to post talk notes for moderated commentary.
Interactive tutorials
This might be a good place to post voice-thread powered interactive Kahn-academy style tutorials for moderated commentary. Any volunteers to help?
For the first talk, related tutorials on model selection might cover:
• evolving normals/idea-sets/models,
• parameter-estimation/model-selection math,
• Cartesian/polar views of complex add/multiply,
• geometric algebra in 2D, 3D & 4D,
• integral of sin[x]cos[x]dx versus udu,
• Bertrand's paradox in reverse,
• icotwin ddf-tableau analyses,
• constant vs. line fits etc.,
• Occam factors & Akaike information criteria,
• and what else?
For the 2nd talk, related tutorials on developing models might cover:
• proper & geometric accelerations at low speed,
• one map + two clocks → dt/dτ, dx/dt & dx/dτ [tutorial],
• high-speed constant proper-acceleration round-trips [tutorial],
• information in bits, nats, J/K, gibibytes, etc.,
• quantifying risk with a handful of coins,
• Bayesian jurisprudence using evidence in bits,
• on/off choice/matchup multiplicities & bits [tutorial],
• choice-multiplicity → gas laws, equipartition & mass action,
• KL-divergence → available work & correlations,
• making ice water from boiling on a hot day,
• mutual information & evolving correlations,
• and what else?
For the 3rd talk, related tutorials on sustaining complexity might cover:
• Chaisson's cosmic evolution → correlation-based complexity,
• boundary emergence from broken symmetry,
• correlation layers looking in/out from self, family & culture,
• we-memes that grab attention and/or that build correlations,
• and what else?
For the 4th talk, related tutorials on ideastream health might cover:
• evolving molecular & memetic codes,
• tools to build an organization-internal ideastream,
• nurturing internal ideastream flow,
• perturbing concept-sets & layer-focus w/unscoped-broadcasts,
• and what else?
Supplementary links...
...that might materialize here in days ahead will hopefully cover topics like:
This page is hosted by the UM-StL Department of Physics and Astronomy, and the person responsible for corrections is P. Fraundorf.
|
{"url":"http://www.umsl.edu/~fraundorfp/BayesianInformatics.html","timestamp":"2014-04-20T09:17:04Z","content_type":null,"content_length":"16636","record_id":"<urn:uuid:15ab23e0-bab8-4224-bac6-76a48e22b090>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3 Digit Subtraction with Regrouping
We can perform three digit subtraction easily when we only have to subtract a smaller digit from a larger digit. But when we have to subtract a larger digit in one number from a smaller digit in the other number, we use a method called regrouping, or borrowing.
First we start with simple subtraction; in simple subtraction we subtract smaller digits from larger digits, so the subtraction is not complex. Let us take an example to understand simple subtraction, then we will see subtraction with regrouping-
Example: We take 954 – 733, we subtract from right to left so,
1. First subtract 3 from 4, we get 1.
2. Now subtract 3 from 5, we get 2.
3. Now subtract 7 from 9, we get 2.
So our result is 221.
Now we will see subtraction using regrouping or borrowing-
Regrouping is the process of borrowing a value from a previous digit by another digit, let us take some examples to understand regrouping-
Example: Let us take 765 – 692, in this we begin subtraction from right to left as in simple subtraction-
1) First we subtract 2 from 5; the result we get is 3.
2) Now we need to subtract 9 from 6, but 9 is larger than 6, so the digit 6 borrows 1 from the previous digit 7; 6 becomes 16, and subtracting 9 from 16 gives 7.
3) As 6 borrowed from 7, the 7 now becomes 6, so we subtract 6 – 6 and the result is 0.
4) So our final result will be 73.
This is the process of 3 digit subtraction with regrouping.
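For readers who like to see the whole procedure written out, here is a small Python sketch (not part of the original lesson) that performs the same digit-by-digit borrowing:

def subtract_with_regrouping(top, bottom):
    # Subtract bottom from top (top >= bottom), borrowing from the next
    # digit on the left whenever the top digit is too small.
    top_digits = [int(d) for d in str(top)][::-1]        # ones digit first
    bottom_digits = [int(d) for d in str(bottom)][::-1]
    bottom_digits += [0] * (len(top_digits) - len(bottom_digits))

    result = []
    borrow = 0
    for t, b in zip(top_digits, bottom_digits):
        t -= borrow                   # pay back any borrow from the previous column
        if t < b:                     # regroup: borrow 10 from the next column
            t += 10
            borrow = 1
        else:
            borrow = 0
        result.append(t - b)
    return int("".join(str(d) for d in reversed(result)))

print(subtract_with_regrouping(954, 733))   # 221
print(subtract_with_regrouping(765, 692))   # 73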
|
{"url":"http://math.tutorcircle.com/number-sense/3-digit-subtraction-with-regrouping.html","timestamp":"2014-04-18T06:23:49Z","content_type":null,"content_length":"19996","record_id":"<urn:uuid:0a7279fa-3956-4ef1-a72e-0d009b70a5cd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Printable Worksheets
Laws of Thermodynamics:
The Laws of Thermodynamics, in principle, describe the specifics for the transport of heat and work in thermodynamic processes.
Zeroeth Law of Thermodynamics
Two systems in thermal equilibrium with a third system are in thermal equilibrium with each other. This zeroeth law is a sort of transitive property of thermal equilibrium.
The transitive property of mathematics says that if A = B and B = C, then A = C. The same is true of thermodynamic systems that are in thermal equilibrium.
First Law of Thermodynamics
The change in a system's internal energy is equal to the difference between heat added to the system from its surroundings and work done by the system on its surroundings.
Mathematical Representation of the First Law
Physicists typically use uniform conventions for representing the quantities in the first law of thermodynamics. They are:
• U1 (or U[i]) = initial internal energy at the start of the process
• U2 (or U[f]) = final internal energy at the end of the process
• delta-U = U2 - U1 = Change in internal energy (used in cases where the specifics of beginning and ending internal energies are irrelevant)
• Q = heat transferred into (Q > 0) or out of (Q < 0) the system
• W = work performed by the system (W > 0) or on the system (W < 0).
This yields a mathematical representation of the first law which proves very useful and can be rewritten in a couple of useful ways:
U2 - U1 = delta-U = Q - W
Q = delta-U + W
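As a small worked illustration of these sign conventions (an added example, not part of the original page): if a gas absorbs 500 J of heat and does 200 J of work on its surroundings, its internal energy rises by 300 J.

def internal_energy_change(Q, W):
    # First law: delta-U = Q - W, with Q > 0 for heat added to the system
    # and W > 0 for work done by the system on its surroundings.
    return Q - W

print(internal_energy_change(500, 200))    # 300 J increase
print(internal_energy_change(-150, -50))   # -100 J: heat leaves while work is done on the system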
Second Law of Thermodynamics:
It is impossible for a process to have as its sole result the transfer of heat from a cooler body to a hotter one.
Third Law of Thermodynamics:
The third law of thermodynamics is essentially a statement about the ability to create an absolute temperature scale, for which absolute zero is the point at which the internal energy of a solid is
precisely 0.
Various sources show the following three potential formulations of the third law of thermodynamics:
• It is impossible to reduce any system to absolute zero in a finite series of operations.
• The entropy of a perfect crystal of an element in its most stable form tends to zero as the temperature approaches absolute zero.
• As temperature approaches absolute zero, the entropy of a system approaches a constant value.
|
{"url":"http://www.kidzpark.com/content.asp?c_id=247","timestamp":"2014-04-19T02:06:14Z","content_type":null,"content_length":"32838","record_id":"<urn:uuid:825cb470-8449-4250-ba1b-4ac66f00983f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] empty data matrix (are they really empty ?)
Christopher Barker Chris.Barker at noaa.gov
Thu Dec 14 11:40:43 CST 2006
Sven Schreiber wrote:
>> In the old file I created a matrix on the fly. I know that Numpy and
>> python cannot do that so I found a workaround
numpy can create matrices on the fly, in fact, you are doing that with
this code! The only thing it doesn't do is have a literal syntax that joins
matrices the way matlab does -- you need to use vstack and the like.
>> First I create the empty matrix
To get better performance, you could create the entire empty matrix, not
just one row -- this is the same as MATLAB -- if you know how big your
matrix is going to be, it's better to create it first with "zeros". In
numpy you can use either zeros or empty - just make sure that if you use
empty, you fill the whole thing later, or you'll get garbage.
Your code:
# you've just created an empty single row
#now you are creating a whole new array, with one more row than before.
The alternative:
lev2 = empty((nstep+1, h))   # create the whole empty array
for j in arange(nstep+1):
    lev2[j,:] = clev         # fill in the row you've just calculated
print lev2
I may have got some of the indexing wrong, but I hope you get the idea.
By the way, if you sent a complete, runnable sample, we can test out
suggestions, and you'll get better answers.
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker at noaa.gov
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-December/025034.html","timestamp":"2014-04-16T23:02:19Z","content_type":null,"content_length":"4485","record_id":"<urn:uuid:7c718029-d0e5-4760-94f6-b3fe543ef5ce>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Neighborhood of Infinity
...monad, of course. I keep telling myself that I'll write a post on topology or history of mathematics or something and then I end up saying something about Haskell monads. It seems to have happened again.
Anyway, it suddenly dawned on me that there's a monad to be wrung out of any recursive data structure. Roughly it goes like this: take a tree-like type, Tree a, with some leaves of type a. Visualise, if you will, an object of type Tree (Tree a), ie. a tree whose leaves are themselves trees. You can graft the leaves that are trees directly into the parent tree giving you a Tree a. So grafting gives you a map Tree (Tree a) -> Tree a. That should remind you of part of the definition of a monad - this is the function called join in Haskell, or μ by category theorists. Note, for example, that grafting grandchildren into children before grafting the children into their parents gives the same result as grafting the children before the grandchildren. These, and other similar observations, give the usual monad laws.
Enough talk. Here's an expression datastructure.
> data Expr b = (Expr b) :+ (Expr b)
>             | (Expr b) :* (Expr b)
>             | F (Expr b) (Expr b)
>             | I Int
>             | V b deriving (Eq,Show)
Think of V b as a variable whose name is of type b. These are the points at which we'll graft in the subtrees. In this case, grafting will be the well known operation of substituting the value of a variable for the variable.
Here are the definitions:
> instance Monad Expr where
>   return b = V b
>   (a :+ b) >>= f = (a >>= f) :+ (b >>= f)
>   (a :* b) >>= f = (a >>= f) :* (b >>= f)
>   F a b >>= f = F (a >>= f) (b >>= f)
>   I i >>= f = I i
>   V b >>= f = f b
To apply >>= f you recursively apply it to the children, except for V b where you actually get to apply f. So now we can define an 'environment' mapping variable names to values and construct an example tree
> env "a" = I 1
> env "b" = V "a" :+ I 2
> env "c" = V "a" :+ V "b"
> e = V "c" :+ I 3
It should be easy to see that e >>= env substitutes the value of c for V "c". But it's more fun to write this as the slightly bizarre looking:
> subst1 env e = do
> a <- e
> env a
> test1 = subst1 env e
We can read "a <- e" in English as "drill down into e setting a to the name of each variable in turn". So this is a backtracking monad, like the list monad, with a taking on multiple values. But notice how it only does one level of substitution. We can do two levels like this
> subst2 env e = do
> a <- e
> b <- env a
> env b
> test2 = subst2 env e
With each line we drill down further into e. But of course we really want to go all the way:
> subst3 env e = do
> a <- e
> subst3 env (env a)
> test3 = subst3 env e
And so now I can confess about why I wrote this post. subst3 looks like a non-terminating loop, with subst3 simply calling itself without ever apparently checking for termination, and yet it terminates with the correct result. Well I found it mildly amusing anyway.
Anyway, this >>= operation is a generalised fold, meaning there's an F-algebra lurking around. Armed with the words F-algebra, monad and tree I was able to start Googling and I found Uustalu and Vene's paper, which points out that the connection between trees and monads is "well known".
Oh well, I've got this far, I may as well finish the job. That paper is about dualising the above construction so I'll implement it. This time, instead of considering trees where each node is
optionally replaced by a label, we now have trees where every node has a label as well as a subtree. We call these labels decorations. In this case I think I'll work with trees that are hierarchical
descriptions of the parts of something (in a vain attempt to pretend this stuff is actually practical :-)
> class Comonad w where
> counit :: w a -> a
> cobind :: (w a -> b) -> w a -> w b
> data Parts' a = A [Parts a] | S String deriving (Eq,Show)
> data Parts a = P (Parts' a,a) deriving (Eq,Show)
> instance Comonad Parts where
> counit (P (_,a)) = a
> cobind f z = case z of
> P (S s,a) -> P (S s,f z)
> P (A l,a) -> P (A $ map (cobind f) l,f z)
We'll consider the parts of a plane. We'll decorate the parts with their price. The function total computes the total price of a tree of parts. cobind total now extends total so that every part is (unfortunately, inefficiently) redecorated with its total price including that of its subparts. If you're having trouble visualising comonads, this seems like a nice elementary example to think about.
> total (P (S _,n)) = n
> total (P (A x,_)) = sum $ map total x
> lwing = P (S "left wing",1000000)
> rwing = P (S "right wing",1000000)
> cockpit = P (S "cockpit",2000000)
> fuselage = P (S "fuselage",2000000)
> body = P (A [fuselage,cockpit],undefined)
> plane = P (A [lwing,rwing,body],undefined)
> test4 = cobind total plane
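Hand-evaluating test4 from the definitions above gives something like the following (if I've traced cobind correctly): every leaf part keeps its own price, body is redecorated with 4000000, and the whole plane with 6000000, i.e. roughly

P (A [P (S "left wing",1000000),
      P (S "right wing",1000000),
      P (A [P (S "fuselage",2000000),P (S "cockpit",2000000)],4000000)],6000000)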
Uustalu and Vene's paper is pretty scary looking for something that's ultimately fairly simple!
Anyway, one last thought: this paper points out that in some sense,
monads and comonads can be thought of as trees with substitutions or redecoration.
Update: Modified the date of this entry. Blogger defaults to the day you first create an entry, not the date you publish. Let's see how many things break by fixing it to today...
5 comments:
This looks kinda like the context in which I first encountered monads... The operad theorists occasionally look at operads that live in some F-algebra instead of just in some category, and use
the "datatype" viewpoint to explicate various ways to handle the input.
At some point I need to sit down and write about operads and monads in a way that Haskellers could potentially digest...
Please do!
This sounds interesting as I'm working on a theoretical physics problem (higher spin gauge field interactions) using computer science inspired methods and I'm planning to implement in Haskell.
Furthermore operads seem just the right abstract thing to use. So I've been thinking about operads in Haskell; after all, they look just like generalizations of monads. Anyone who knows if
this is something that's been worked out?
Anders Bengtsson
This is the free monad generated from an algebra construction I mention on LtU here in late 2005. Very well known stuff that is usually one of the beginning examples of a monad in mathematical
treatments (with an algebraic bent) of monads.
|
{"url":"http://blog.sigfpe.com/2006/11/variable-substitution-gives.html?showComment=1163375640000","timestamp":"2014-04-23T15:11:28Z","content_type":null,"content_length":"67577","record_id":"<urn:uuid:9b337571-1301-4735-bc9b-253f015c850b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Get Kids Thinking in Math Class - Algebra 1 Teachers
How to Get Kids Thinking in Math Class
Solve this problem in a different way.
I love this one! Kids that have always been good at math are very good at following the algorithms, but not at thinking outside of the given method. They often do not like this question at the beginning of the year, but as they get more comfortable I can see them excel.
Kids that have struggled in math in the past will find this question a relief. “You mean I can do it my way?” These are the kids that have their own system for solving problems, but have been told
they were wrong because it did not follow the teacher’s way.
Write a story for this question.
So many students will mess up a simple integer subtraction problem like -12 – (-6), but if I say “You owe me $12 and your dad offers to take away $6 in debt, how much do you owe me?” they get it right every time.
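Spelled out, the arithmetic behind the debt story is simply -12 – (-6) = -12 + 6 = -6, that is, a remaining debt of $6.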
I want these kids to begin thinking of these situations on their own. Connecting the numbers to the real world is something most students will not do without help.
Prove to me that your answer is correct.
This will be different with every concept and age group. But, introducing this concept at a younger age will help students begin to think and answer the question, “Is this right?” for themselves,
building confidence in themselves.
Being able to prove an answer shows a higher-level understanding of concepts and will be a valuable skill in life as well as in school.
Draw a picture to help me understand the problem.
Drawing a picture or a graph, showing numbers in a different way is so powerful for kids of all ages. Being able to convert numbers into pictures connects mathematics for kids. Leaving this open
ended just increases the power of the statement.
Use manipulatives (blocks, etc.) to explain the problem.
Giving abstract concepts concrete examples is powerful at every level. This one will take the most time, but will give you great rewards in understanding and retention. There are virtual algebra
blocks online and many other manipulatives as well.
I find that in my classroom I have so many students who are insecure in their mathematics; in many cases these suggestions build confidence in their ability to solve problems on their own.
One thought on “How to Get Kids Thinking in Math Class”
1. buycollegeessay
I recently became a tutor – not my dream job though – but I think I have a few things to learn from your blog
|
{"url":"http://www.algebra1teachers.com/how-to-get-kids-thinking-in-math-class/","timestamp":"2014-04-16T15:59:00Z","content_type":null,"content_length":"53314","record_id":"<urn:uuid:0050a508-5695-458d-8909-4579097d4244>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|
11. What is a quartic function with only the two real zeroes given? x = –1 and x = –3 (1 point)
y = x^4 – 4x^3 – 4x^2 – 4x – 3
y = –x^4 + 4x^3 + 4x^2 + 4x + 3
y = x^4 + 4x^3 + 3x^2 + 4x – 4
y = x^4 + 4x^3 + 4x^2 + 4x + 3
((x+1)*(x+3))^2 I think; expand to get x^4 + 8x^3 + 22x^2 + 24x + 9. I don't think it is a choice though, but it is right.
thank you ( :
I'll figure it out( :
graph each one on google, and the answer is the one which intersects the x axis at x = –1 and x = –3. To copy into google, for example, instead of this, which you have: y = x^4 – 4x^3 – 4x^2 – 4x – 3, type this instead: x^4 – 4*x^3 – 4*x^2 – 4*x – 3. Get rid of the "y =", add the ^ for exponents, and add * between each coefficient and the x.
Gotta go now
thank ya!!
\[y = x ^{4}+4x ^{3}+4x ^{2}+4x+3\] has -1 and -3 as its only real zeroes.
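One quick way to check this: the polynomial factors as \[x^{4}+4x^{3}+4x^{2}+4x+3=(x+1)(x+3)(x^{2}+1),\] and since \(x^{2}+1\) has no real roots, \(x=-1\) and \(x=-3\) are indeed the only real zeroes.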
{"url":"http://openstudy.com/updates/50c0400be4b0231994ece41c","timestamp":"2014-04-17T07:05:28Z","content_type":null,"content_length":"45059","record_id":"<urn:uuid:9e55529a-6a32-4b0e-8cb8-31d47fce8d93>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is $k$-diagonalizable element in split maximal torus of $G(k)$?
Let $k$ be any field of char 0. $G$ is a split reductive algebraic group over $k$. Let $p \in G(k)$ be $k$-diagonalizable. Does there exist a split maximal torus of $G(k)$ containing $p$? I know that is true for the Lie algebra case.
Please define "k"-diagonalizable. If you mean $p$ lies in a split multiplicative type $k$-subgroup (perhaps disconnected), a counterexample is $G = {\rm{PGL}}_2$ over $k = \mathbf{Q}$, with $p$
off-diagonal having off-diagonal entries equal 1 and $-1$. Indeed, $p$ generates a $\mu_2$ inside $G$ and its "determinant" in $k^{\times}/(k^{\times})^2$ is 1 whereas the unique nontrivial
2-torsion element in any 1-dimensional split maximal $k$-torus has "determinant" equal to $-1$ (as may be checked by inspection of the diagonal one, due to $G(k)$-conjugacy). Is your $G$ ss and
simply connected? – user27056 Oct 24 '12 at 13:29
I forgot to mention G is reductive but its derived subgroup (G,G) is simply connected. I mean diagonalizable for any representation, like acting on k[G]. But in your example doesn't p have eigenvalues $\pm i$? – user27501 Oct 24 '12 at 20:55
The eigenvalues in my example aren't $\pm i$ (mixing up GL$_2$ and PGL$_2$) since my $p$ has order 2 in $G(k)$. By your definition of "$k$-diagonalizable" there's a closed immersion $G \hookrightarrow {\rm{GL}}(V)$ so that $p$ is diagonalizable on $V$ and hence lies in a split torus $T$ of GL($V$). Hence, $M := T \cap G$ is a split $k$-subgroup of $G$ of mult. type with $p \in M(k)$, so your definition implies mine. The converse is easy (and my definition is intrinsic to $G$ and more robust). Thanks for clarifying that $(G,G)$ is simply connected. – user27056 Oct 25 '12 at 4:22
Thank you so much! I got it. How about simply connected case? – user27501 Oct 25 '12 at 21:44
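Writing out the counterexample described in the first comment (a sketch of that comment's element, so treat the details accordingly): in $G = {\rm PGL}_2$ over $k = \mathbf{Q}$, take the class of $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. Its square is $-I$, which is trivial in ${\rm PGL}_2$, so it generates a $\mu_2$, and its "determinant" class in $k^{\times}/(k^{\times})^2$ is $1$; by contrast the nontrivial 2-torsion element of a split maximal torus, e.g. the class of ${\rm diag}(1,-1)$, has class $-1$, so no split maximal torus contains it.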
|
{"url":"http://mathoverflow.net/questions/110482/is-k-diagonalizable-element-in-split-maximal-torus-of-gk","timestamp":"2014-04-19T04:50:07Z","content_type":null,"content_length":"49856","record_id":"<urn:uuid:e3c51e35-cde5-4ba0-af75-7f7d610cf116>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
More on Congruence mod primes
August 16th 2009, 02:24 PM #1
Aug 2009
More on Congruence mod primes
Hi ... I'm going through the Zhao and Sun proof of "Some Curious Congruences Modulo Primes", and I'm stuck in several places, hehehe. In particular, I was wondering how the following congruence
was shown.
For $i = 1,2,...,p-1$
$\frac{(-1)^{i-1}}{p} \binom{p}{i} = \frac{(-1)^{i-1}}{i} \binom{p-1}{i-1} \equiv \frac{1}{i} \pmod{p}$
I get the equality part, no problem. But it's not so clear (for me) how the congruence can be shown. Any help would be great ... I'm learning quite a bit
The congruence is clearly true for $p=2.$ Suppose $p$ is odd. We have
as $p$ is odd. Therefore
$\implies\ \frac{(-1)^{i-1}(p-1)!}{(i-1)!([p-1]-[i-1])!}\equiv1\,(\bmod\,p)$
and you can easily complete the proof from here.
Ah, thank you ... I thought it was related to set congruences somehow, but I wasn't too sure how. I should have spotted that though, hehehe ... but still, I wouldn't claim the congruence right
away, it's not that obvious to me
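For completeness, one standard way to see the congruence (my own sketch, not the argument from the reply above): for $1 \le i \le p-1$,
\[\binom{p-1}{i-1} = \prod_{j=1}^{i-1} \frac{p-j}{j} \equiv \prod_{j=1}^{i-1} \frac{-j}{j} = (-1)^{i-1} \pmod{p},\]
so that
\[\frac{(-1)^{i-1}}{i} \binom{p-1}{i-1} \equiv \frac{(-1)^{2(i-1)}}{i} = \frac{1}{i} \pmod{p}.\]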
|
{"url":"http://mathhelpforum.com/number-theory/98281-more-congruence-mod-primes.html","timestamp":"2014-04-19T13:43:44Z","content_type":null,"content_length":"36663","record_id":"<urn:uuid:d1f1bb06-a6e9-4a72-a150-9953fa46a472>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MISHNAT HA-MIDDOT
MISHNAT HA-MIDDOT (Heb. מִשְׁנַת הַמִּדּוֹת; "treatise of measures"), considered the earliest Hebrew geometry. Mishnat ha-Middot comprises various methods for determining the dimensions of various plane and
solid geometric figures. Its five chapters include, among other matters, a discussion of triangles, quadrilaterals, and frusta. The Heronic formula for the area of a triangle in terms of the lengths
of the sides is given. For π the value of 3 1/7 (i.e., 22/7) is used, and this divergence from the biblical value of 3 is homiletically justified. One of the extant manuscripts has a sixth chapter dealing with the
Tabernacle which is similar to sections of the *Baraita de-Melekhet ha-Mishkan. In spite of the similar names, there seems to be no connection between this work and the Baraita de-49 Middot which is
frequently cited by medieval commentators. This treatise is written in a distinctive Hebrew that combines mishnaic style with a technical terminology that has affinities with Arabic, although it
stands apart from the Hebrew mathematical terminology of the Hispano-Arabic period. In content, the Mishnat ha-Middot belongs to the stream of Oriental mathematics represented, e.g., by Heron, Greek
mathematician (c. 100 C.E.) in the Hellenistic period, and al-Khwarizmi (c. 825 C.E.) in the Arabic period, to both of whose works it offers striking parallels. Some attribute it to R. *Nehemiah (c.
150 C.E.), and see it as a link between the Hellenistic and Arabic texts, while others assign it to an unknown author of the Arabic period.
S. Gandz (ed.), Mishnat ha-Middot (Eng., trans. 1932); Ẓarefati, in: Leshonenu, 23 (1958/59), 156–71; 24 (1959/60), 73–94.
[Benjamin Weiss]
Source: Encyclopaedia Judaica. © 2008 The Gale Group. All Rights Reserved.
|
{"url":"http://www.jewishvirtuallibrary.org/jsource/judaica/ejud_0002_0014_0_14000.html","timestamp":"2014-04-17T12:33:32Z","content_type":null,"content_length":"5544","record_id":"<urn:uuid:46e82403-cd57-453f-ab1f-400b3837a071>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Thermo] Derivation of compressibility factor vs reduced pressure
1. The problem statement, all variables and given/known data
derivation of compressibility factor vs. reduced pressure
I am supposed to derive the graph by solving equations
2. Relevant equations
Van der Waals equation of state
compressibility factor, Z = (Pv)/(RT)
reduced pressure = P/critical pressure
Z = f(T_r, P_r)
3. The attempt at a solution
I sat for 12 hours attempting to find a solution but just spent time trying to understand what I was doing instead.
Is there a way to get the graph mathematically without using any values for critical pressure or temperature?
Thank you!
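One standard route, sketched here as a suggestion rather than as the thread's own solution: rewrite the Van der Waals equation in reduced variables, which removes the substance-specific constants. Using the Van der Waals critical values $v_c = 3b$, $P_c = a/(27b^2)$, $T_c = 8a/(27Rb)$ and substituting $P = P_r P_c$, $v = v_r v_c$, $T = T_r T_c$ gives
\[\left(P_r + \frac{3}{v_r^2}\right)\left(3 v_r - 1\right) = 8 T_r,\]
which contains no $a$, $b$, or specific critical-property values. For a chosen $T_r$ and a range of $P_r$, solve this (a cubic in $v_r$) for $v_r$, then plot
\[Z = \frac{P v}{R T} = \frac{P_c v_c}{R T_c}\,\frac{P_r v_r}{T_r} = \frac{3}{8}\,\frac{P_r v_r}{T_r},\]
which generates the generalized compressibility chart without ever fixing particular values of the critical pressure or temperature.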
|
{"url":"http://www.physicsforums.com/showthread.php?t=400952","timestamp":"2014-04-17T12:37:46Z","content_type":null,"content_length":"26136","record_id":"<urn:uuid:c7abc743-19fe-4641-94ef-f77859c9b966>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|